Lecture Notes: Introduction to IDF and ATC

James T. Allison
April 5, 2006

This lecture is an introductory tutorial on the mechanics of implementing two methods for optimal system design: the individual disciplinary feasible (IDF) approach and analytical target cascading (ATC). Familiarity with optimization is assumed. A brief discussion of terminology is given, followed by an example-based introduction to both IDF and ATC. Additional references on IDF [1] and ATC [2, 3, 4] may be accessed from the publications page of the site for those interested in more detailed information.

1 Terminology

A system is composed of interacting components, and within the optimal system design paradigm there are two types of component interconnections: shared variables (x_s) and coupling variables (y). If it were not for these connections, the subproblems could be solved completely independently. Methods such as those discussed in this tutorial provide a means for handling the connections. Shared variables are quantities that are design variables for more than one subproblem. If a quantity is a design variable for only one subproblem, it is called a local variable (x_l). A coupling variable is a quantity calculated by one subproblem analysis that is a required input parameter to at least one other subproblem analysis. If we label the subproblems 1, 2, ..., N, where N is the total number of subproblems, then y_ij denotes the coupling variable that is an output of subproblem j and an input to subproblem i. Coupling variables may be scalar, vector, or function valued. The variables local to subproblem i are x_li, and its shared variables are x_si. The collection of all design variables for subproblem i is x_i. Figure 1 graphically illustrates a partitioned system analysis using the quantities just described, along with additional analysis outputs g_i and h_i. For convenience in notation, vectors are assumed to be row vectors.
These additional outputs are constraint or objective functions required for the optimization subproblem, but not for system analysis (as the coupling variables are). Note that not all of the depicted N(N-1) couplings necessarily exist. Also observe that some couplings may exist in reality even though the approximate models used for optimization do not account for them.

Copyright © 2006 by James T. Allison
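The bookkeeping in this section (local variables x_li, shared variables x_si, and coupling variables y_ij indexed by source and destination) can be organized in a small data structure. The sketch below is purely illustrative: the class, field names, and the two-subproblem example are not part of the original notes (the example is patterned on the air flow sensor problem introduced later, where l is shared, w is local, and θ and F are exchanged as coupling variables).

```python
from dataclasses import dataclass

@dataclass
class Subproblem:
    """Bookkeeping for one subproblem in a partitioned system (illustrative only)."""
    label: int
    local_vars: set    # x_li: design variables used only by this subproblem
    shared_vars: set   # x_si: design variables shared with other subproblems
    coupling_in: dict  # {j: names of y_ij, outputs of subproblem j needed as inputs here}
    coupling_out: dict # {i: names of y_ij, outputs of this subproblem sent to subproblem i}

def shared_between(a: "Subproblem", b: "Subproblem") -> set:
    """Shared variables are quantities that are design variables in both subproblems."""
    return (a.local_vars | a.shared_vars) & (b.local_vars | b.shared_vars)

# Hypothetical two-subproblem system: l shared, w local to subproblem 2,
# theta and F exchanged as coupling variables.
sub1 = Subproblem(1, set(), {"l"}, coupling_in={2: {"F"}}, coupling_out={2: {"theta"}})
sub2 = Subproblem(2, {"w"}, {"l"}, coupling_in={1: {"theta"}}, coupling_out={1: {"F"}})

assert shared_between(sub1, sub2) == {"l"}
assert sub1.coupling_in[2] == sub2.coupling_out[1]  # same y_12 under both labels
```

A structure like this makes the N(N-1) potential couplings explicit, which is useful when deciding which ones a partitioned model will actually account for.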

Figure 1: General representation of a partitioned system analysis

2 System Analysis and Design

It is assumed here that several subsystem optimization problems have been defined, an example of which is shown in Eq. (1) for subproblem i. Each subproblem has its own objective, local constraints, and design variables. The objective and constraint functions require as input parameters the input coupling variables from the rest of the system, i.e., subproblem i has input coupling variables y_ij, where j takes on all values from 1 to N except i. If the set of all subproblem labels is P = {1, 2, ..., N}, we can write this requirement on j as j ∈ P\{i}.

    min_{x_i}  f_i(x_i, y_ij)                                   (1)
    subject to g_i(x_i, y_ij) ≤ 0
               h_i(x_i, y_ij) = 0
    where j ∈ P\{i}

With the subproblems defined, the task now is to formulate a system optimization problem that accounts for all subproblems, their interactions (coupling variables), and their connections via shared variables. Equation (2) illustrates the complete system optimization formulation. Since it was assumed each subproblem has its own local objective function, the system design problem entails multiobjective optimization. The formulation in Eq. (2) handles the multiobjective optimization by scalarizing the objectives with a weighted sum, where w_i indicates the importance of each objective function. Another approach is to select one objective as the overall system objective, and convert the other local objectives to inequality constraints. The system optimization is performed with respect to all design variables, shared and coupled, for all subproblems. The vector of all local variables for all subproblems is x_l, and the set of all shared variables for all subproblems is x_s.

    min_{x=[x_l, x_s]}  sum_{i=1}^{N} w_i f_i(x)                (2)
    subject to g(x) = [g_1, g_2, ..., g_N] ≤ 0
               h(x) = [h_1, h_2, ..., h_N] = 0

Implicit in the above formulation is the task of system analysis, which involves finding the values of the coupling variables that result in a consistent system. To illustrate, consider the simple coupled analysis in Figure 2. Each analysis depends on the other, and we say that the system has feedback coupling. For example, if we want to calculate the value of y_12, we need the value of y_21 as an input parameter. However, in order to calculate the value of y_21, we need the value of y_12 as an input parameter. A solution to this chicken-and-egg dilemma is to make a guess for one of the values, iteratively solve the analyses with updated coupling variables, and stop when the system has reached a fixed point. This simple iterative algorithm is referred to as fixed point iteration [5]. It converges in many cases, but it has been shown that some systems will not converge under this process. Systems with information flow in only one direction are much easier to handle, since system analysis may be completed with a single pass through the subproblems.

Figure 2: Two element coupled system

Figure 3: AiO system optimization process
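For the two-element system of Figure 2, fixed point iteration takes only a few lines. The linear analysis functions below are stand-ins invented for illustration (chosen contractive so the iteration converges); any pair of coupled analyses could be substituted.

```python
def analysis_1(y21):
    """Hypothetical analysis 1: computes coupling output y12 from input y21."""
    return 0.5 * y21 + 1.0

def analysis_2(y12):
    """Hypothetical analysis 2: computes coupling output y21 from input y12."""
    return 0.3 * y12 + 2.0

def fixed_point_iteration(tol=1e-10, max_iter=100):
    y21 = 0.0                        # initial guess for one coupling variable
    for _ in range(max_iter):
        y12 = analysis_1(y21)        # one pass through each analysis
        y21_new = analysis_2(y12)
        if abs(y21_new - y21) < tol: # stopped changing: a fixed point
            return y12, y21_new
        y21 = y21_new
    raise RuntimeError("fixed point iteration did not converge")

y12, y21 = fixed_point_iteration()
# At the fixed point, y12 == analysis_1(y21) and y21 == analysis_2(y12):
# the system is consistent.
```

If the iteration map is not contractive, the same loop can diverge, which is exactly the failure mode noted above for some coupled systems.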

We can solve the system optimization problem defined in Eq. (2) by completely solving the system analysis to obtain values for f, g, and h at every iteration of the optimization algorithm. In other words, we nest the system analysis task within the system optimization task. This approach is called the all-in-one (AiO) approach, and is sometimes referred to as nested analysis and design (NAND). The AiO approach is depicted graphically in Figure 3. Although commonly used and straightforward to implement, AiO has several potential problems. If a system is strongly coupled, the system analysis task may require numerous costly analysis iterations, resulting in very long computation times. In addition, fixed point iteration may fail to converge. Finally, the dimension of the optimization problem may be too large to solve. The rest of this tutorial demonstrates how the IDF and ATC methods help address these issues, using a simple design example.

3 Design Example: Air Flow Sensor Calibration

An air flow sensor design problem is considered that incorporates two coupled disciplinary analyses: structural and aerodynamic. The sensor consists of a vertical plate with length l and width w, attached to its base with a revolute joint and biased toward the vertical position by a torsional spring of stiffness k (Figure 4). The plate is subject to horizontal air flow of speed v, which produces a drag force F. The design objective is to choose l and w such that the plate deflection θ (for a fixed air speed) closely matches a target deflection value θ̂. The plate area lw is constrained to a fixed value A, and the drag force on the plate must not exceed F_max. This task, summarized in Eq. (3), is in essence a sensor calibration problem.
Figure 4: Airflow sensor diagram

    min_{l,w}  (θ - θ̂)²                                         (3)
    subject to F - F_max ≤ 0
               lw - A = 0

The structural analysis computes the plate deflection θ for a given sensor design and drag force (Eq. (4)). Note that the governing equation is transcendental, requiring an iterative solution for θ.

    kθ = (1/2) F l cos θ                                        (4)

The aerodynamic analysis computes the drag force F on the plate for a given sensor design and plate deflection (Eq. (5)). C is a constant that incorporates air density and the drag coefficient (C = (1/2) ρ C_D), and A_f is the plate frontal area (A_f = lw cos θ).

    F = C A_f v² = C l w cos θ v²                               (5)

Note that the analyses depend on each other; Figure 5 illustrates this relationship. The coupling variables are θ and F, l is a shared variable, and w is a local variable. Fixed point iteration can be used to find the equilibrium values of F and θ for a given design (l, w).

Figure 5: Coupling relationship in airflow sensor analysis

The optimal solution to this problem may be found using monotonicity principles [6], and can be used to benchmark computational results. The analytical solution is given in Eq. (6). For the parameter values k = N/rad, v = 40.0 m/s, C = 1.00 kg/m³, F_max = 7.00 N, and θ̂ = rad, the optimal design is [l*, w*] = [0.365, ]. Note that the drag coefficient of a finite flat plate is approximately 2.0, resulting in a value of C = 1.00 if we take the air density to be 1.00 kg/m³.

    l* = (2 k C A v² / F_max²) cos⁻¹(F_max / (C A v²)),   w* = A / l*        (6)

The AiO solution entails solving Eq. (3), where each optimization iteration requires a fixed point iteration solution of Eqs. (4) and (5) to obtain values for F and θ. This process is depicted in Figure 3, with x = [w, l], f = (θ - θ̂)², g = F - F_max, and h = lw - A. The AiO result matches the monotonicity solution. The following section introduces the IDF method, and demonstrates how to solve the airflow sensor problem with IDF.

4 Introduction to IDF

The key concept in the individual disciplinary feasible (IDF) method is to use the optimization algorithm itself to perform the system analysis. System analysis involves finding the coupling variable values that provide a consistent system. We can express system consistency as a set of equations.

For example, in the system depicted in Figure 2, the value of y_12 used to compute y_21 must match the value of y_12 computed using y_21. In equation form, this requirement is:

    y_12(y_21) = y_12                                           (7)
    y_21(y_12) = y_21

With the consistency requirement in equation form, we simply add these equations to the optimization problem formulation as equality constraints, and let the optimization algorithm determine the coupling variable values in addition to the design variable values. Thus, in addition to performing system design, the optimizer performs system analysis. Rather than nested analysis and design, we are performing simultaneous analysis and design (SAND). The IDF architecture for a two element system is illustrated in Figure 6. The optimizer chooses values for the design and coupling variables and passes them to each of the subproblem analyses, which return values for the corresponding objective and constraint functions, as well as computed coupling variable values. At intermediate stages of the process the system may not be consistent, but the additional equality constraints require system consistency at convergence of the optimization algorithm. Since the system optimizer provides all inputs required by all subsystems concurrently, the analyses may be run in parallel, providing an efficiency advantage over AiO. Even if parallel computing is not used, IDF breaks any iterative analysis loops required for system analysis, which may reduce computation time.

Figure 6: IDF architecture

The IDF formulation is given in Eq. (8). Here it is assumed that one of the objective functions was chosen as the system objective function f(x, y), and the remaining objective functions were added to the set of inequality constraints.
It is similar to the AiO formulation, except that the decision variable set includes both the design variables x and the coupling variables y, and that auxiliary constraints h_aux(x, y) (written in vector form) are added to ensure system consistency.

    min_{x=[x_l,x_s], y}  f(x, y)                               (8)
    subject to g(x, y) = [g_1, g_2, ..., g_N] ≤ 0
               h(x, y) = [h_1, h_2, ..., h_N] = 0
               h_aux(x, y) = y(x, y) - y = 0

To solidify the concepts behind the IDF method, we will show how to solve the air flow sensor problem using IDF. We add the coupling variables θ and F to the decision variable set, and add auxiliary constraints that require consistency between the chosen and computed values of θ and F. The resulting formulation is shown in Eq. (9). The solution found using the IDF formulation and the parameter values given in the previous section matches the monotonicity solution and the AiO solution.

    min_{l,w,θ,F}  (θ - θ̂)²                                     (9)
    subject to F - F_max ≤ 0
               lw - A = 0
               θ - θ(l, F) = 0
               F - F(l, w, θ) = 0

IDF is helpful for solving system design problems with feedback coupling, and in cases where parallel computation is desired. One drawback of IDF, however, is that it cannot reduce the dimension of the optimization problem, and large dimension may be a motivating factor for choosing a decomposition-based technique. This motivates the development of methods, such as ATC, that can reduce problem dimension when subproblems have many local variables but relatively few shared or coupling variables.

5 Introduction to ATC

Analytical target cascading (ATC) is a system optimization method that uses a separate optimization algorithm for each subproblem, rather than a single optimization algorithm for the entire system as with IDF. A simplified example adapted from a current student project will be used to explain the basic concept of ATC, and then it will be shown how to solve the air flow sensor problem using ATC. Consider a suspension design problem that has design variables such as linkage geometry, spring rate, and damper rate.
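Before developing the suspension example further, the IDF formulation of Eq. (9) is worth seeing in executable form: θ and F join l and w as decision variables, and the consistency conditions enter as equality constraints handed to a general-purpose NLP solver. The sketch below uses SciPy's SLSQP method; v, C, and F_max are the values from Section 3, while k, A, and θ̂ are stand-in values assumed purely for illustration (those numbers are not legible in this transcription).

```python
from math import cos, pi
from scipy.optimize import minimize

# v, C, F_max follow Section 3; k, A, theta_hat are assumed stand-ins.
k, v, C, F_max, A = 0.5, 40.0, 1.00, 7.00, 0.01
theta_hat = 1.0

def theta_structural(l, F):
    """Structural analysis: solve the transcendental Eq. (4),
    k*theta = (1/2)*F*l*cos(theta), for theta by bisection on [0, pi/2]."""
    lo, hi = 0.0, pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if k * mid - 0.5 * F * l * cos(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def F_aero(l, w, theta):
    """Aerodynamic analysis: drag force, Eq. (5)."""
    return C * l * w * cos(theta) * v ** 2

# IDF: decision vector z = [l, w, theta, F]; the optimizer enforces consistency.
res = minimize(
    lambda z: (z[2] - theta_hat) ** 2,  # objective of Eq. (9)
    x0=[0.3, 0.03, 0.8, 6.0],
    method="SLSQP",
    bounds=[(1e-3, 1.0), (1e-3, 1.0), (0.0, pi / 2), (0.0, 20.0)],
    constraints=[
        {"type": "eq", "fun": lambda z: z[0] * z[1] - A},                      # lw - A = 0
        {"type": "eq", "fun": lambda z: z[2] - theta_structural(z[0], z[3])},  # theta consistency
        {"type": "eq", "fun": lambda z: z[3] - F_aero(z[0], z[1], z[2])},      # F consistency
        {"type": "ineq", "fun": lambda z: F_max - z[3]},                       # F <= F_max
    ],
)
l_opt, w_opt, theta_opt, F_opt = res.x
```

At the solution the consistency constraints hold, so the reported (θ, F) pair is also a valid system analysis of the reported design, which is exactly the SAND idea: no fixed point iteration is ever performed.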
A suspension designer might perform this design seeking to optimize ride quality and handling within system-level constraints, and then consult the spring and damper designers to see whether components with the desired spring and damping rates can be built. The component designers then seek to design components as close as possible to the desired targets, within component-level constraints such as geometric, fatigue, and thermal constraints. If the targets are attainable, then the process stops and the optimal system design is obtained. If not, then we iterate between the top-level and bottom-level designs until the desired targets can be matched by the bottom-level designers. If the targets cannot be matched, then the system design problem as posed is infeasible.

Figure 7: Conceptual ATC process

ATC provides a formal and efficient procedure for this iterative task. Observe that this iterative process is very different from fixed point iteration, which involves only analysis: each ATC subproblem is by itself a design optimization problem, and the iterative process between the top and bottom levels coordinates the subproblems toward a consistent and optimal solution. It is possible to extend ATC to multiple levels, but this discussion will be limited to the two-level case.

To formulate an ATC problem, observe which quantities connect the subproblems. These may be either coupling variables or shared variables. We make copies of the shared quantities in the appropriate subproblems in order to temporarily decouple the subproblems. These copies become decision variables, so like IDF, ATC uses the optimization algorithm to perform system analysis tasks. A penalty is placed on deviations between copies of shared quantities in order to drive the system toward consistency. Referring to Figure 5, the shared quantities in the air flow sensor problem are l (a shared variable) and θ and F (coupling variables). It is convenient to choose the structural subproblem as the top-level problem, since it computes the system objective function f = (θ - θ̂)². The top-level ATC formulation is given in Eq. (10).

    min_{l^(1), F^(1)}  f + (l^(1) - l^(2))² + (F^(1) - F^(2))² + (θ^(1) - θ^(2))²    (10)
    subject to F^(1) - F_max ≤ 0

The parenthesized superscripts indicate the level at which a quantity is determined when there is ambiguity (1 refers to the top level, 2 to the bottom level). Any value with superscript (2) in Eq. (10) is determined by the bottom-level problem, and so is a fixed parameter in the top-level problem. The decision variables are l^(1) and F^(1), and θ^(1) is an analysis output of the top-level problem. F^(2) is the actual drag force as computed by the bottom-level problem. The inequality design constraint is placed in the top-level problem here, but could just as easily have been included in the bottom-level problem instead.

The terms following f in the objective function are penalty terms that make deviations between top- and bottom-level copies of the shared quantities undesirable. Penalty functions are one approach to handling equality constraints, and can be helpful when equality constraints are difficult to satisfy [7]. In ATC we use penalty terms to enforce the system analysis equations, such as F^(1) = F^(2). The penalty terms used in this basic formulation are simple quadratic terms; more sophisticated penalty methods have been applied to ATC that improve the efficiency of the process [3].

The bottom-level, or aerodynamic, problem is given in Eq. (11).

    min_{l^(2), w^(2), θ^(2)}  (l^(1) - l^(2))² + (F^(1) - F^(2))² + (θ^(1) - θ^(2))²    (11)
    subject to l^(2) w^(2) - A = 0

Its only objective is to minimize the deviation between copies of the shared quantities. All quantities with superscript (1) are fixed parameters passed down from the top-level problem. The decision variables are l^(2), w^(2), and θ^(2). To solve the ATC problem, the top-level problem is first solved completely using initial guesses for the input parameters required from the bottom-level problem, and the resulting values of l^(1), F^(1), and θ^(1) are then provided as fixed input parameters to the bottom-level problem.
The bottom-level problem is then solved, and the updated values of l^(2), F^(2), and θ^(2) are passed back to the top level. This process is repeated until either the deviations between the shared quantities reach zero, or these values stop changing.
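The two-level coordination just described can be sketched in code. As before, SciPy's SLSQP is used for each subproblem, and k, A, and θ̂ are stand-in values assumed for illustration. One practical device not in the basic formulation above is also included: the penalty weight is gradually increased, since fixed quadratic weights generally leave a residual deviation (see [3] for rigorous weight-update schemes).

```python
from math import cos, pi
from scipy.optimize import minimize

# v, C, F_max follow Section 3; k, A, theta_hat are assumed stand-ins.
k, v, C, F_max, A = 0.5, 40.0, 1.00, 7.00, 0.01
theta_hat = 1.0

def theta_structural(l, F):
    """Structural analysis: solve the transcendental Eq. (4) for theta by bisection."""
    lo, hi = 0.0, pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if k * mid - 0.5 * F * l * cos(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def F_aero(l, w, theta):
    """Aerodynamic analysis: drag force, Eq. (5)."""
    return C * l * w * cos(theta) * v ** 2

def dev(a, b):
    """Quadratic deviation between two (l, F, theta) tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

bottom = (0.3, 5.0, 0.8)  # initial guess for the bottom-level response (l, F, theta)
w_pen = 1.0               # penalty weight, ramped up to tighten consistency
for _ in range(30):
    # Top level, Eq. (10): decide l1 and F1; theta1 is an analysis output.
    def top_obj(z):
        th1 = theta_structural(z[0], z[1])
        return (th1 - theta_hat) ** 2 + w_pen * dev((z[0], z[1], th1), bottom)
    zt = minimize(top_obj, [bottom[0], bottom[1]], method="SLSQP",
                  bounds=[(1e-3, 1.0), (0.0, F_max)]).x  # F1 <= F_max as a bound
    top = (zt[0], zt[1], theta_structural(zt[0], zt[1]))

    # Bottom level, Eq. (11): decide l2, w2, theta2; F2 comes from the analysis.
    def bot_obj(z):
        return dev(top, (z[0], F_aero(z[0], z[1], z[2]), z[2]))
    zb = minimize(bot_obj, [top[0], A / top[0], top[2]], method="SLSQP",
                  bounds=[(1e-3, 1.0), (1e-3, 1.0), (0.0, pi / 2)],
                  constraints=[{"type": "eq", "fun": lambda z: z[0] * z[1] - A}]).x
    bottom = (zb[0], F_aero(zb[0], zb[1], zb[2]), zb[2])

    w_pen = min(2.0 * w_pen, 1.0e4)

final_dev = dev(top, bottom)  # small at convergence: copies agree
```

Note how neither level ever sees the other's analysis code: the structural and aerodynamic analyses communicate only through the target/response copies of l, F, and θ, which is what permits each subproblem to keep its local variables (here, w) to itself.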

References

[1] J.T. Allison, M. Kokkolaras, and P.Y. Papalambros. On the impact of coupling strength on complex system optimization for single-level formulations. In ASME Design Engineering Technical Conferences, Long Beach, CA, September 2005.

[2] J.T. Allison, M. Kokkolaras, M. Zawislak, and P.Y. Papalambros. On the use of analytical target cascading and collaborative optimization for complex system design. In 6th World Congress on Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil, May 30-June 3, 2005.

[3] S. Tosserams, L.F.P. Etman, P.Y. Papalambros, and J.E. Rooda. An augmented Lagrangian relaxation for analytical target cascading using the alternating direction method of multipliers. Structural and Multidisciplinary Optimization, 31(3), 2006.

[4] H.M. Kim, N.F. Michelena, P.Y. Papalambros, and T. Jiang. Target cascading in optimal system design. Journal of Mechanical Design, 125, September 2003.

[5] S.C. Chapra and R.P. Canale. Numerical Methods for Engineers. McGraw-Hill, third edition.

[6] P.Y. Papalambros and D.J. Wilde. Principles of Optimal Design: Modeling and Computation. Cambridge University Press, New York, second edition, 2000.

[7] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999.


More information

Decoupling Coupled Constraints Through Utility Design

Decoupling Coupled Constraints Through Utility Design 1 Decoupling Coupled Constraints Through Utility Design Na Li and Jason R. Marden Abstract The central goal in multiagent systems is to design local control laws for the individual agents to ensure that

More information

Compositional Safety Analysis using Barrier Certificates

Compositional Safety Analysis using Barrier Certificates Compositional Safety Analysis using Barrier Certificates Department of Electronic Systems Aalborg University Denmark November 29, 2011 Objective: To verify that a continuous dynamical system is safe. Definition

More information

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7 Mathematical Foundations -- Constrained Optimization Constrained Optimization An intuitive approach First Order Conditions (FOC) 7 Constraint qualifications 9 Formal statement of the FOC for a maximum

More information

Multidisciplinary design optimisation methods for automotive structures

Multidisciplinary design optimisation methods for automotive structures Multidisciplinary design optimisation methods for automotive structures Rebecka Domeij Bäckryd, Ann-Britt Rydberg and Larsgunnar Nilsson The self-archived version of this journal article is available at

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Data Mining. Linear & nonlinear classifiers. Hamid Beigy. Sharif University of Technology. Fall 1396

Data Mining. Linear & nonlinear classifiers. Hamid Beigy. Sharif University of Technology. Fall 1396 Data Mining Linear & nonlinear classifiers Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1396 1 / 31 Table of contents 1 Introduction

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 5. Nonlinear Equations Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter 5 Nonlinear Equations Copyright c 2001 Reproduction permitted only for noncommercial, educational

More information

Duality Theory of Constrained Optimization

Duality Theory of Constrained Optimization Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive

More information

MIT Manufacturing Systems Analysis Lecture 14-16

MIT Manufacturing Systems Analysis Lecture 14-16 MIT 2.852 Manufacturing Systems Analysis Lecture 14-16 Line Optimization Stanley B. Gershwin Spring, 2007 Copyright c 2007 Stanley B. Gershwin. Line Design Given a process, find the best set of machines

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE 4: Linear Systems Summary # 3: Introduction to artificial neural networks DISTRIBUTED REPRESENTATION An ANN consists of simple processing units communicating with each other. The basic elements of

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

Computer Intensive Methods in Mathematical Statistics

Computer Intensive Methods in Mathematical Statistics Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of

More information

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight.

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the multi-dimensional knapsack problem, additional

More information

Matrix Arithmetic. a 11 a. A + B = + a m1 a mn. + b. a 11 + b 11 a 1n + b 1n = a m1. b m1 b mn. and scalar multiplication for matrices via.

Matrix Arithmetic. a 11 a. A + B = + a m1 a mn. + b. a 11 + b 11 a 1n + b 1n = a m1. b m1 b mn. and scalar multiplication for matrices via. Matrix Arithmetic There is an arithmetic for matrices that can be viewed as extending the arithmetic we have developed for vectors to the more general setting of rectangular arrays: if A and B are m n

More information

ELEC4631 s Lecture 2: Dynamic Control Systems 7 March Overview of dynamic control systems

ELEC4631 s Lecture 2: Dynamic Control Systems 7 March Overview of dynamic control systems ELEC4631 s Lecture 2: Dynamic Control Systems 7 March 2011 Overview of dynamic control systems Goals of Controller design Autonomous dynamic systems Linear Multi-input multi-output (MIMO) systems Bat flight

More information

Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis

Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis Multiobjective Optimization Applied to Robust H 2 /H State-feedback Control Synthesis Eduardo N. Gonçalves, Reinaldo M. Palhares, and Ricardo H. C. Takahashi Abstract This paper presents an algorithm for

More information

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form:

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form: 0.1 N. L. P. Katta G. Murty, IOE 611 Lecture slides Introductory Lecture NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Lecture 4 September 15

Lecture 4 September 15 IFT 6269: Probabilistic Graphical Models Fall 2017 Lecture 4 September 15 Lecturer: Simon Lacoste-Julien Scribe: Philippe Brouillard & Tristan Deleu 4.1 Maximum Likelihood principle Given a parametric

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Optimal Partitioning and Coordination Decisions in System Design Using an Evolutionary Algorithm

Optimal Partitioning and Coordination Decisions in System Design Using an Evolutionary Algorithm 7 th World Congress on Structural and Multidisciplinary Optimization COEX Seoul, 1 May - May 7, Korea Optimal Partitioning and Coordination Decisions in System Design Using an Evolutionary Algorithm James

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

The Squared Slacks Transformation in Nonlinear Programming

The Squared Slacks Transformation in Nonlinear Programming Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints

More information

Short title: Total FETI. Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ Ostrava, Czech Republic

Short title: Total FETI. Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ Ostrava, Czech Republic Short title: Total FETI Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ-70833 Ostrava, Czech Republic mail: zdenek.dostal@vsb.cz fax +420 596 919 597 phone

More information

Distributed Optimization: Analysis and Synthesis via Circuits

Distributed Optimization: Analysis and Synthesis via Circuits Distributed Optimization: Analysis and Synthesis via Circuits Stephen Boyd Prof. S. Boyd, EE364b, Stanford University Outline canonical form for distributed convex optimization circuit intepretation primal

More information

On sequential optimality conditions for constrained optimization. José Mario Martínez martinez

On sequential optimality conditions for constrained optimization. José Mario Martínez  martinez On sequential optimality conditions for constrained optimization José Mario Martínez www.ime.unicamp.br/ martinez UNICAMP, Brazil 2011 Collaborators This talk is based in joint papers with Roberto Andreani

More information

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs CS26: A Second Course in Algorithms Lecture #2: Applications of Multiplicative Weights to Games and Linear Programs Tim Roughgarden February, 206 Extensions of the Multiplicative Weights Guarantee Last

More information

CONVERGENCE PROPERTIES OF ANALYTICAL TARGET CASCADING

CONVERGENCE PROPERTIES OF ANALYTICAL TARGET CASCADING 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization 4-6 September 2002, Atlanta, Georgia AIAA 2002-5506 CONVRGNC PROPRTIS OF ANAYTICA TARGT CASCADING Nestor Michelena Hyungju Park Panos

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

Machine Learning And Applications: Supervised Learning-SVM

Machine Learning And Applications: Supervised Learning-SVM Machine Learning And Applications: Supervised Learning-SVM Raphaël Bournhonesque École Normale Supérieure de Lyon, Lyon, France raphael.bournhonesque@ens-lyon.fr 1 Supervised vs unsupervised learning Machine

More information

A DECOMPOSITION PROCEDURE BASED ON APPROXIMATE NEWTON DIRECTIONS

A DECOMPOSITION PROCEDURE BASED ON APPROXIMATE NEWTON DIRECTIONS Working Paper 01 09 Departamento de Estadística y Econometría Statistics and Econometrics Series 06 Universidad Carlos III de Madrid January 2001 Calle Madrid, 126 28903 Getafe (Spain) Fax (34) 91 624

More information

Support Vector Machines, Kernel SVM

Support Vector Machines, Kernel SVM Support Vector Machines, Kernel SVM Professor Ameet Talwalkar Professor Ameet Talwalkar CS260 Machine Learning Algorithms February 27, 2017 1 / 40 Outline 1 Administration 2 Review of last lecture 3 SVM

More information

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini In the name of God Network Flows 6. Lagrangian Relaxation 6.3 Lagrangian Relaxation and Integer Programming Fall 2010 Instructor: Dr. Masoud Yaghini Integer Programming Outline Branch-and-Bound Technique

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Multi-objective design and tolerance allocation for singleand

Multi-objective design and tolerance allocation for singleand DOI 10.1007/s10845-011-0608-3 Multi-objective design and tolerance allocation for singleand multi-level systems Tzu-Chieh Hung Kuei-Yuan Chan Received: 25 April 2010 / Accepted: 6 March 2011 Springer Science+Business

More information

Algebra Performance Level Descriptors

Algebra Performance Level Descriptors Limited A student performing at the Limited Level demonstrates a minimal command of Ohio s Learning Standards for Algebra. A student at this level has an emerging ability to A student whose performance

More information

Optimization Problems

Optimization Problems Optimization Problems The goal in an optimization problem is to find the point at which the minimum (or maximum) of a real, scalar function f occurs and, usually, to find the value of the function at that

More information

Large Steps in Cloth Simulation. Safeer C Sushil Kumar Meena Guide: Prof. Parag Chaudhuri

Large Steps in Cloth Simulation. Safeer C Sushil Kumar Meena Guide: Prof. Parag Chaudhuri Large Steps in Cloth Simulation Safeer C Sushil Kumar Meena Guide: Prof. Parag Chaudhuri Introduction Cloth modeling a challenging problem. Phenomena to be considered Stretch, shear, bend of cloth Interaction

More information

Lecture Note 12: The Eigenvalue Problem

Lecture Note 12: The Eigenvalue Problem MATH 5330: Computational Methods of Linear Algebra Lecture Note 12: The Eigenvalue Problem 1 Theoretical Background Xianyi Zeng Department of Mathematical Sciences, UTEP The eigenvalue problem is a classical

More information

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems

Copyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems Chapter One Introduction 1.1 Large-Scale Interconnected Dynamical Systems Modern complex dynamical systems 1 are highly interconnected and mutually interdependent, both physically and through a multitude

More information

More on Lagrange multipliers

More on Lagrange multipliers More on Lagrange multipliers CE 377K April 21, 2015 REVIEW The standard form for a nonlinear optimization problem is min x f (x) s.t. g 1 (x) 0. g l (x) 0 h 1 (x) = 0. h m (x) = 0 The objective function

More information

Structured Problems and Algorithms

Structured Problems and Algorithms Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become

More information

Numerical Methods for Engineers, Second edition: Chapter 1 Errata

Numerical Methods for Engineers, Second edition: Chapter 1 Errata Numerical Methods for Engineers, Second edition: Chapter 1 Errata 1. p.2 first line, remove the Free Software Foundation at 2. p.2 sixth line of the first proper paragraph, fe95.res should be replaced

More information

Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control

Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical

More information

Optimal Multilevel System Design under Uncertainty

Optimal Multilevel System Design under Uncertainty Optimal Multilevel System Design under Uncertainty 1 M. Kokkolaras (mk@umich.edu) Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan Z.P. Mourelatos (mourelat@oakland.edu)

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

What are Numerical Methods? (1/3)

What are Numerical Methods? (1/3) What are Numerical Methods? (1/3) Numerical methods are techniques by which mathematical problems are formulated so that they can be solved by arithmetic and logic operations Because computers excel at

More information

A New Trust Region Algorithm Using Radial Basis Function Models

A New Trust Region Algorithm Using Radial Basis Function Models A New Trust Region Algorithm Using Radial Basis Function Models Seppo Pulkkinen University of Turku Department of Mathematics July 14, 2010 Outline 1 Introduction 2 Background Taylor series approximations

More information

A Trust-region-based Sequential Quadratic Programming Algorithm

A Trust-region-based Sequential Quadratic Programming Algorithm Downloaded from orbit.dtu.dk on: Oct 19, 2018 A Trust-region-based Sequential Quadratic Programming Algorithm Henriksen, Lars Christian; Poulsen, Niels Kjølstad Publication date: 2010 Document Version

More information

Multi-level hierarchical MDO formulation with functional coupling satisfaction under uncertainty, application to sounding rocket design.

Multi-level hierarchical MDO formulation with functional coupling satisfaction under uncertainty, application to sounding rocket design. 11 th World Congress on Structural and Multidisciplinary Optimisation 07 th -12 th, June 2015, Sydney Australia Multi-level hierarchical MDO formulation with functional coupling satisfaction under uncertainty,

More information

ELE539A: Optimization of Communication Systems Lecture 16: Pareto Optimization and Nonconvex Optimization

ELE539A: Optimization of Communication Systems Lecture 16: Pareto Optimization and Nonconvex Optimization ELE539A: Optimization of Communication Systems Lecture 16: Pareto Optimization and Nonconvex Optimization Professor M. Chiang Electrical Engineering Department, Princeton University March 16, 2007 Lecture

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

The use of second-order information in structural topology optimization. Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher

The use of second-order information in structural topology optimization. Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher The use of second-order information in structural topology optimization Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher What is Topology Optimization? Optimize the design of a structure

More information

Outline. Relaxation. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Lagrangian Relaxation. Lecture 12 Single Machine Models, Column Generation

Outline. Relaxation. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Lagrangian Relaxation. Lecture 12 Single Machine Models, Column Generation Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING 1. Lagrangian Relaxation Lecture 12 Single Machine Models, Column Generation 2. Dantzig-Wolfe Decomposition Dantzig-Wolfe Decomposition Delayed Column

More information