Real-time Constrained Nonlinear Optimization for Maximum Power Take-off of a Wave Energy Converter
1 Real-time Constrained Nonlinear Optimization for Maximum Power Take-off of a Wave Energy Converter Thomas Bewley 23 May 2014 Southern California Optimization Day
2 Summary 1 Introduction 2 Nonlinear Model Predictive Control 3 Simulations 4 Conclusions
4 Wave Energy Converters (WECs)
Pros: green and emission-free; abundant and widely available; reliable and predictable
Cons: high installation and maintenance costs; impact on the marine ecosystem
5 Dynamic model of a point-absorber wave energy converter
Dynamic equations:
m a(t) + r v(t) + k p(t) = F_R(t) + F_D(t) + u(t) + F_E(t)
where F_R is the radiation force, F_D the nonlinear drag force, u the machinery force, and F_E the excitation force.
Remarks:
m and r are the mass and viscous dissipation of the device and mooring system
k is the hydrodynamic stiffness due to the buoyancy force
10 Hydrodynamic forces
F_R(t) = -m_∞ a(t) - ∫_{-∞}^{t} k_R(t - τ) v(τ) dτ = -m_∞ a(t) - F_r(t)
F_D(t) = -(1/2) C_D ρ S (v(t))^2 sgn(v(t))
F_E(t) = ∫_{-∞}^{+∞} k_E(t - τ) η(τ) dτ
Remarks:
F_D(t) is a nonlinear and non-smooth function of the device velocity
m_∞ is the added mass, k_R(t) is the causal radiation kernel
k_E(t) is the noncausal excitation kernel, η(t) is the wave elevation at the device location
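The radiation convolution above can be evaluated numerically as a discrete causal convolution. This is a minimal sketch: the exponential kernel and the sinusoidal velocity history are illustrative stand-ins, not the paper's hydrodynamic data.

```python
import numpy as np

# Numerical sketch of F_r(t) = ∫ k_R(t - τ) v(τ) dτ with placeholder data.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
k_R = np.exp(-t)                      # illustrative causal kernel, t >= 0
v = np.sin(2.0 * np.pi * t / 8.0)     # illustrative device velocity history

# Discrete causal convolution (rectangle rule): truncating np.convolve to the
# first len(t) samples ensures F_r(t) only depends on past velocities.
F_r = np.convolve(k_R, v)[: len(t)] * dt

print(F_r[:3])                        # starts at zero since v(0) = 0
```

In practice the kernel would come from a hydrodynamic code, and this convolution is what the state-space realization on the next slide replaces with a finite-dimensional ODE.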
11 State-space model
State-space realization of F_r(t):
ż(t) = A_p z(t) + B_p v(t)
F_r(t) = C_p z(t) + D_p v(t)
State-space WEC model:
ẋ(t) = A x(t) + f_D(x(t)) + B u(t) + E F_E(t)
Remarks:
u(t) is the control variable
F_E(t) is assumed to be known in advance or predicted through estimation techniques
12 Summary 1 Introduction 2 Nonlinear Model Predictive Control 3 Simulations 4 Conclusions
13 MPC formulation
The goal is to optimize the WEC power take-off by maximizing the energy absorption over the interval [t_0, t_0 + T]:
max E_a = max ∫_{t_0}^{t_0+T} v(t) u(t) dt
subject to mechanical and dynamic constraints. Since v = S_v x, this reduces to the solution of the following nonlinear optimization problem:
min -(1/2) ∫_{t_0}^{t_0+T} (x^T S_v^T u + u^T S_v x) dt
subject to
x(t_0) = x_0
ẋ(t) = A x(t) + f_D(x(t)) + B u(t) + E F_E(t),  t ∈ [t_0, t_0 + T]
d(x(t), u(t)) ≤ 0,  t ∈ [t_0, t_0 + T]
14 Problem discretization
The time interval is divided into N shooting intervals. In each interval, the following conditions are imposed:
u_k(t) = u_k,  t ∈ [t_k, t_{k+1}]
F_{E,k}(t) = F_{E,k},  t ∈ [t_k, t_{k+1}]
ẋ_k(t) = A x_k(t) + f_D(x_k(t)) + B u_k + E F_{E,k},  t ∈ [t_k, t_{k+1}]
x_k(t_{k+1}) = x_{k+1}
d(x_k, u_k) ≤ 0
The nonlinear dynamic equations are discretized using an explicit fourth-order Runge-Kutta scheme.
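A single RK4 shooting step with zero-order-hold control can be sketched as below. The two-state linear part and the drag coefficient are illustrative placeholders, not the paper's WEC model:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One explicit fourth-order Runge-Kutta step, holding the control u
    constant (zero-order hold) over the shooting interval [t_k, t_k + dt]."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative stand-in for xdot = A x + f_D(x) + B u: position/velocity
# dynamics with a tanh-smoothed quadratic drag (placeholder coefficients).
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([0.0, 1.0])

def f(x, u):
    drag = -0.05 * x[1] ** 2 * np.tanh(50.0 * x[1])
    return A @ x + np.array([0.0, drag]) + B * u

x = np.array([0.0, 0.0])
for _ in range(100):          # integrate 1 s with dt = 0.01, constant u = 1
    x = rk4_step(f, x, 1.0, 0.01)
print(x)
```

In multiple shooting, one such integration per interval produces the matching conditions x_k(t_{k+1}) = x_{k+1} that become equality constraints in the NLP.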
15 Nonlinear Constrained Programming (1/2)
The discretized problem now appears as
min -(1/2) Σ_{k=0}^{N-1} (x_k^T S_v^T u_k + u_k^T S_v x_k)
subject to
x_0 = x̄_0
x_k + Δt Σ_{i=1}^{s} b_i K_i^{RK} - x_{k+1} = 0,  k ∈ [0, N-1]
K_i^{RK} = A (x_k + Δt Σ_{j=1}^{i-1} a_{ij} K_j^{RK}) + f_D(x_k + Δt Σ_{j=1}^{i-1} a_{ij} K_j^{RK}) + B u_k + E F_{E,k}
d(x_k, u_k) ≤ 0,  k ∈ [0, N-1]
16 Nonlinear Constrained Programming (2/2)
Define w = [u^T x^T]^T. The optimization problem becomes:
min J(w) = min (1/2) w^T H̄ w
subject to
C(w) + Λ x̄_0 = 0
D(w) ≤ 0
where
H = [ 0      S_v ]
    [ S_v^T  0   ]
The bar sign accounts for the discretization of the time integral.
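The block structure of H above makes the cost Hessian indefinite, which is noted again on the implementation slide: its nonzero eigenvalues come in ± pairs. A small numeric check, with an illustrative S_v (not the paper's matrix):

```python
import numpy as np

# H = [[0, S_v], [S_v^T, 0]] for a scalar control and a 3-dimensional state.
S_v = np.array([[1.0, 0.0, 0.5]])     # placeholder row picking out the velocity
H = np.block([[np.zeros((1, 1)), S_v],
              [S_v.T, np.zeros((3, 3))]])

eig = np.linalg.eigvalsh(H)           # ascending; symmetric about zero
print(eig)
```

Because H has both positive and negative eigenvalues, the raw QP subproblem is nonconvex, which motivates the condensing step discussed later.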
17 Sequential Quadratic Programming (SQP)
Define an initial guess for the optimization variable and the Lagrange multipliers (w_0, λ_0, μ_0).
At each iteration k, starting from (w_k, λ_k, μ_k), solve the following quadratic programming problem (QP):
min_{Δw} (1/2) Δw^T B_k Δw + b_k^T Δw
subject to
C_w(w_k) Δw + C(w_k) = 0
D_w(w_k) Δw + D(w_k) ≤ 0
where b_k = ∇J(w_k) is the gradient of the cost function and B_k is the Hessian of the associated Lagrangian function:
B_k = H̄ - Σ_i λ_i ∇²C_i(w_k) - Σ_j μ_j ∇²D_j(w_k)
Perform the update (w_{k+1}, λ_{k+1}, μ_{k+1}) = (w_k + α Δw*, λ*, μ*)
Repeat until convergence.
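The iteration above can be sketched on a toy equality-constrained problem (not the WEC NLP): minimize exp(x1) + exp(x2) subject to x1 + x2 = 0. Each SQP iteration solves the KKT system of the local QP, i.e. a Newton step on the Lagrangian, with a full step (α = 1) as on the slide:

```python
import numpy as np

def grad(x):
    return np.exp(x)                   # gradient of exp(x1) + exp(x2)

def hess(x):
    return np.diag(np.exp(x))          # objective Hessian

J = np.array([[1.0, 1.0]])             # Jacobian of the linear constraint

x = np.array([0.5, -0.5])              # feasible initial guess
for _ in range(10):
    Bk = hess(x)                       # Hessian of the Lagrangian (a linear
                                       # constraint contributes no curvature)
    c = np.array([x[0] + x[1]])
    # KKT system of the QP subproblem: [Bk J^T; J 0] [dx; lam] = [-grad; -c]
    KKT = np.block([[Bk, J.T], [J, np.zeros((1, 1))]])
    sol = np.linalg.solve(KKT, np.concatenate([-grad(x), -c]))
    x = x + sol[:2]                    # full step; sol[2] is the new multiplier

print(x)                               # converges to the solution [0, 0]
```

At the solution the multiplier satisfies ∇f + λ J^T = 0, giving λ = -1 here; the WEC problem adds inequality constraints, which require an active-set or interior-point QP solver per iteration.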
18 Implementation features
Gradients can be computed analytically or numerically. Many numerical approaches are available: automatic differentiation, adjoint gradients, finite differences, complex-step derivatives.
For the WEC implementation all gradients are computed analytically. The Hessian of the Lagrangian is calculated analytically as well.
A full-step implementation, i.e. α = 1, is considered.
The Hessian of the cost function is indefinite.
Inequality constraints involve motion constraints of the device and saturation constraints of the actuator:
u_min ≤ u_k ≤ u_max
p_min ≤ S_p x_k ≤ p_max
v_min ≤ S_v x_k ≤ v_max
22 Handling the drag term
The calculation of the gradient of the equality constraints requires differentiability of the dynamic equations. The drag force, due to the sgn function, is non-smooth:
F_D(t) = -(1/2) C_D ρ S (v(t))^2 sgn(v(t))
A smooth approximation of the drag term is:
F_D(t) ≈ -(1/2) C_D ρ S (v(t))^2 tanh(K v(t))
where K is a parameter governing the degree of smoothness. The dynamic equations are now twice differentiable.
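The quality of the tanh smoothing is easy to check numerically. This sketch uses placeholder values for C_D, ρ, S, and K (not the paper's coefficients); because of the v² factor, the two expressions differ only in a small neighborhood of v = 0 of width ~1/K:

```python
import numpy as np

C_D, rho, S, K = 1.0, 1025.0, 10.0, 100.0   # illustrative constants

def drag_nonsmooth(v):
    return -0.5 * C_D * rho * S * v**2 * np.sign(v)

def drag_smooth(v):
    return -0.5 * C_D * rho * S * v**2 * np.tanh(K * v)

v = np.linspace(-2.0, 2.0, 401)
err = np.max(np.abs(drag_nonsmooth(v) - drag_smooth(v)))
print(err)    # small compared to the force scale; the two agree for |v| >> 1/K
```

Increasing K shrinks the smoothing region but also steepens the gradient near v = 0, so there is a trade-off between model fidelity and conditioning of the Newton steps.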
24 Condensed SQP
Considerations:
The KKT matrix arising from the QP problem is very sparse
This sparsity can be exploited by projecting the cost function onto the null space of the equality constraints
This makes it possible to eliminate the dependent variable Δx, which accounts for the state perturbation, and to solve an optimization problem in Δu only
This reduces the optimization space and produces a dense KKT matrix
25 Condensed SQP
Looking at the structure of C_w(w_k): it is block bidiagonal, with row k containing the block I + Δt Σ_{i=1}^{s} b_i ∂K_i^{RK}/∂x|_{x_k} in the x_k column and -I in the x_{k+1} column, for k = 0, ..., N-1.
Through an appropriate permutation matrix P, it is possible to rewrite the Jacobian as P C_w(w_k) = [L  -I]
This allows the equality constraint in the QP iteration to be rewritten as:
Δx = L Δu + P C(w_k)
Substituting this relationship everywhere gives the reduced dense QP problem:
min (1/2) Δu^T H_u Δu  subject to  d_u(Δu) ≤ 0
H_u is now positive definite, hence convergence is ensured.
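The state-elimination idea can be sketched for linear(ized) dynamics x_{k+1} = A x_k + B u_k: stack the predicted states as x = L u + c, with L block lower triangular, and verify against a plain forward simulation. The 2-state system here is illustrative, not the WEC model:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # illustrative dynamics
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
N, n, m = 5, 2, 1

# Build L and c so that x_{k+1} = A^{k+1} x0 + sum_j A^{k-j} B u_j.
L = np.zeros((N * n, N * m))
c = np.zeros(N * n)
Ap = np.eye(n)
for k in range(N):
    Ap = A @ Ap                          # A^{k+1}
    c[k*n:(k+1)*n] = Ap @ x0
    for j in range(k + 1):
        L[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B

u = np.ones(N)                           # arbitrary control sequence
x_cond = L @ u + c                       # condensed prediction

# Reference: plain forward simulation of the same dynamics.
x_sim, xk = [], x0
for k in range(N):
    xk = A @ xk + B @ u[k:k+1]
    x_sim.append(xk)
x_sim = np.concatenate(x_sim)

print(np.allclose(x_cond, x_sim))
```

Substituting x = L u + c into the cost yields the dense reduced Hessian in u only; the price of the smaller, convex QP is the loss of the banded sparsity of the original KKT system.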
28 Summary 1 Introduction 2 Nonlinear Model Predictive Control 3 Simulations 4 Conclusions
29 Results for sine wave with H_m0 = 2 m, T_0 = 8 s, T = 10 s (1/3)
[Figure: absorbed power P_a [kW], machinery force F_m [N], WEC position p [m], and WEC velocity v [m/s] versus time t [s]]
Constraints: |u| ≤ 10^7 N, |p| ≤ 3 m, |v| ≤ 5 m/s
30 Results for sine wave with H_m0 = 2 m, T_0 = 8 s, T = 10 s (2/3)
[Figure: absorbed power P_a [kW], machinery force F_m [N], WEC position p [m], and WEC velocity v [m/s] versus time t [s]]
Constraints: |u| ≤ 10^7 N, |p| ≤ 1 m, |v| ≤ 5 m/s
31 Results for sine wave with H_m0 = 2 m, T_0 = 8 s, T = 10 s (3/3)
[Figure: absorbed power P_a [kW], machinery force F_m [N], WEC position p [m], and WEC velocity v [m/s] versus time t [s]]
Constraints: |u| ≤ 10^6 N, |p| ≤ 1 m, |v| ≤ 5 m/s
32 Results for JONSWAP sea spectrum with H_m0 = 2 m, T_0 = 8 s, T = 10 s
[Figure: wave elevation η [m] versus time t [s]]
Constraints: |u| ≤ 10^6 N, |p| ≤ 1 m, |v| ≤ 5 m/s
33 Results for JONSWAP sea spectrum with H_m0 = 2 m, T_0 = 8 s, T = 10 s
[Figure: absorbed power P_a [kW], machinery force F_m [N], WEC position p [m], and WEC velocity v [m/s] versus time t [s]]
Constraints: |u| ≤ 10^6 N, |p| ≤ 1 m, |v| ≤ 5 m/s
34 Budal's diagram for sine waves with H_m0 = 2 m, T = 10 s
Legend: RL = Resistive Loading, ACC = Approximate Complex-Conjugate Control, AVT = Approximate Optimal Velocity Tracking, MPC = Model Predictive Control, PML = Peak-Matching Latching Control, PMC = Peak-Matching Clutching Control
35 Summary 1 Introduction 2 Nonlinear Model Predictive Control 3 Simulations 4 Conclusions
36 Conclusions
Direct multiple shooting provides a fast way of solving constrained MPC problems involving nonlinear systems
The condensing step further accelerates the solution of the QP problem
Nonlinear MPC applied to the maximization of power take-off of a wave energy converter makes it possible to approach the theoretical limit while outperforming other techniques
This approach can easily be extended to other WEC configurations and constraints
More informationReview for Exam 2 Ben Wang and Mark Styczynski
Review for Exam Ben Wang and Mark Styczynski This is a rough approximation of what we went over in the review session. This is actually more detailed in portions than what we went over. Also, please note
More informationCHAPTER 10: Numerical Methods for DAEs
CHAPTER 10: Numerical Methods for DAEs Numerical approaches for the solution of DAEs divide roughly into two classes: 1. direct discretization 2. reformulation (index reduction) plus discretization Direct
More information10-725/ Optimization Midterm Exam
10-725/36-725 Optimization Midterm Exam November 6, 2012 NAME: ANDREW ID: Instructions: This exam is 1hr 20mins long Except for a single two-sided sheet of notes, no other material or discussion is permitted
More informationConditional Gradient (Frank-Wolfe) Method
Conditional Gradient (Frank-Wolfe) Method Lecturer: Aarti Singh Co-instructor: Pradeep Ravikumar Convex Optimization 10-725/36-725 1 Outline Today: Conditional gradient method Convergence analysis Properties
More informationConjugate gradient algorithm for training neural networks
. Introduction Recall that in the steepest-descent neural network training algorithm, consecutive line-search directions are orthogonal, such that, where, gwt [ ( + ) ] denotes E[ w( t + ) ], the gradient
More informationNumerical Modeling of a Wave Energy Point Absorber Hernandez, Lorenzo Banos; Frigaard, Peter Bak; Kirkegaard, Poul Henning
Aalborg Universitet Numerical Modeling of a Wave Energy Point Absorber Hernandez, Lorenzo Banos; Frigaard, Peter Bak; Kirkegaard, Poul Henning Published in: Proceedings of the Twenty Second Nordic Seminar
More informationNONLINEAR BEHAVIOR OF A SINGLE- POINT MOORING SYSTEM FOR FLOATING OFFSHORE WIND TURBINE
NONLINEAR BEHAVIOR OF A SINGLE- POINT MOORING SYSTEM FOR FLOATING OFFSHORE WIND TURBINE Ma Chong, Iijima Kazuhiro, Masahiko Fujikubo Dept. of Naval Architecture and Ocean Engineering OSAKA UNIVERSITY RESEARCH
More informationNumerical Optimization: Basic Concepts and Algorithms
May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some
More informationQuasi-Newton methods for minimization
Quasi-Newton methods for minimization Lectures for PHD course on Numerical optimization Enrico Bertolazzi DIMS Universitá di Trento November 21 December 14, 2011 Quasi-Newton methods for minimization 1
More informationLMS Algorithm Summary
LMS Algorithm Summary Step size tradeoff Other Iterative Algorithms LMS algorithm with variable step size: w(k+1) = w(k) + µ(k)e(k)x(k) When step size µ(k) = µ/k algorithm converges almost surely to optimal
More informationParameter Estimation in a Moving Horizon Perspective
Parameter Estimation in a Moving Horizon Perspective State and Parameter Estimation in Dynamical Systems Reglerteknik, ISY, Linköpings Universitet State and Parameter Estimation in Dynamical Systems OUTLINE
More informationNumerical Methods for PDE-Constrained Optimization
Numerical Methods for PDE-Constrained Optimization Richard H. Byrd 1 Frank E. Curtis 2 Jorge Nocedal 2 1 University of Colorado at Boulder 2 Northwestern University Courant Institute of Mathematical Sciences,
More informationInexact Newton-Type Optimization with Iterated Sensitivities
Inexact Newton-Type Optimization with Iterated Sensitivities Downloaded from: https://research.chalmers.se, 2019-01-12 01:39 UTC Citation for the original published paper (version of record: Quirynen,
More informationComputational Optimization. Augmented Lagrangian NW 17.3
Computational Optimization Augmented Lagrangian NW 17.3 Upcoming Schedule No class April 18 Friday, April 25, in class presentations. Projects due unless you present April 25 (free extension until Monday
More informationSMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines
vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More informationFPGA Implementation of a Predictive Controller
FPGA Implementation of a Predictive Controller SIAM Conference on Optimization 2011, Darmstadt, Germany Minisymposium on embedded optimization Juan L. Jerez, George A. Constantinides and Eric C. Kerrigan
More informationLecture 16: October 22
0-725/36-725: Conve Optimization Fall 208 Lecturer: Ryan Tibshirani Lecture 6: October 22 Scribes: Nic Dalmasso, Alan Mishler, Benja LeRoy Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationCoordinate Descent and Ascent Methods
Coordinate Descent and Ascent Methods Julie Nutini Machine Learning Reading Group November 3 rd, 2015 1 / 22 Projected-Gradient Methods Motivation Rewrite non-smooth problem as smooth constrained problem:
More informationNonlinear Optimization: What s important?
Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global
More informationWorksheet 8 Sample Solutions
Technische Universität München WS 2016/17 Lehrstuhl für Informatik V Scientific Computing Univ.-Prof. Dr. M. Bader 19.12.2016/21.12.2016 M.Sc. S. Seckler, M.Sc. D. Jarema Worksheet 8 Sample Solutions Ordinary
More informationNonlinear Programming, Elastic Mode, SQP, MPEC, MPCC, complementarity
Preprint ANL/MCS-P864-1200 ON USING THE ELASTIC MODE IN NONLINEAR PROGRAMMING APPROACHES TO MATHEMATICAL PROGRAMS WITH COMPLEMENTARITY CONSTRAINTS MIHAI ANITESCU Abstract. We investigate the possibility
More information