Multidisciplinary System Design Optimization (MSDO)


Today's Topics
Multidisciplinary System Design Optimization (MSDO)
Approximation Methods - Lecture 9, April 2004
Karen Willcox, Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox

- Design variable linking
- Reduced-basis methods
- Response surface approximations
- Kriging
- Variable-fidelity models

Why Approximation Methods?
We have seen throughout the course the constant trade-off between computational cost and fidelity.

[Figure: fidelity level (empirical models; intermediate fidelity, e.g. vortex lattice, beam theory; high fidelity, e.g. CFD, FEM) plotted against level of MSDO (trade studies; limited optimization/iteration; full MDO), with difficulty increasing along both axes. Annotations: can we do better? how to implement? can the results be believed? From Giesing, 1998.]

Approximation methods provide a way to get high-fidelity model information throughout the optimization without the computational expense.

Approximation Methods
- Recall that the analysis or simcode must be invoked each time the optimizer selects a new design vector to try.
- Typically, hundreds (or thousands) of design vectors will be analyzed throughout an optimization run.
- We can use approximation models (surrogate models) for the objective functions and constraints.
- If the approximate models are inexpensive to evaluate, we can analyze many more design vector options without worrying about computational resources.
- The concept was first introduced in structural optimization by Barthelemy and Haftka (see References).

Approximation Methods Overview
- Design variable linking and reduced-basis methods: reduce the number of design variables in the optimization; the simcode analysis remains full order.
- Response surface methods and kriging: keep the same number of design variables; the simcode analysis is simplified.
- Variable-fidelity methods: combine high-fidelity and approximation models.

Design Variable Linking
- Not all design variables may be independent; for example, symmetry may exist in the problem.
- Define x = C y, where x (dimension n) is the full design vector, y (dimension r, with r < n) is the reduced set of variables, and C is an n x r linking matrix.
- The optimizer uses y, but provides x to the simcode for analysis.

Reduced-Basis Methods
- Consider r feasible design vectors x_1, x_2, ..., x_r.
- We could consider the desired design to be a linear combination of these basis vectors:

    x* = c + Σ_{i=1}^{r} α_i x_i

  where x* has dimension n, the α_i are scalar coefficients (r of them), the x_i are the basis vectors, and the constant vector c is added for generality.
- We can now optimize J(x) by finding the optimal values for the coefficients α_i.
- Do one full-order evaluation of the resulting answer.
- The approach is efficient if r << n.
- It will give the true optimum only if x* lies in the span of the {x_i}.
- Basis vectors could be previous designs, solutions over a particular range (DoE), or derived in some other way (POD).
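The following is a minimal sketch of the reduced-basis idea, not code from the lecture: the optimizer searches over the r coefficients α while the expensive full-order analysis only ever sees the expanded n-dimensional design. The `simcode_J` function and the random basis are illustrative placeholders. Design variable linking is the special case where the basis columns form the linking matrix C.

```python
import numpy as np
from scipy.optimize import minimize

n, r = 100, 4
rng = np.random.default_rng(0)
basis = rng.standard_normal((n, r))   # columns x_1..x_r: previous designs, DoE, or POD modes
c = np.zeros(n)                       # constant vector "added for generality"

def simcode_J(x):
    # placeholder for an expensive full-order analysis
    return np.sum((x - 1.0) ** 2)

def J_reduced(alpha):
    x = c + basis @ alpha             # expand r coefficients into the n-dim design
    return simcode_J(x)

# optimize over alpha (dimension r) instead of x (dimension n)
alpha_opt = minimize(J_reduced, np.zeros(r)).x
x_star = c + basis @ alpha_opt        # one full-order evaluation of the result
```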

Reduced-Basis Example
- Example using a reduced-basis approach (Vanderplaats, Figure 7-1): airfoil design for a unique application.
- Many airfoil shapes with known performance are available.
- The design variables are the (x, y) coordinates at chordwise locations (n ~ 100).
- Use four basis airfoil shapes (low-speed airfoils) which contain the n geometry points, plus two basis shapes which allow the trailing-edge thickness to vary: r = 6 (r << n).
- Optimize for high speed and maximum lift with a constraint on drag.

[Figures: Vanderplaats, G. N., Numerical Optimization Techniques for Engineering Design, Vanderplaats R&D, 1999, Figure 7-1 and following.]

Proper Orthogonal Decomposition
- Also known as principal components analysis, Karhunen-Loève decomposition, or singular value decomposition.

    x* = Σ_{i=1}^{r} α_i φ_i

- The r basis vectors φ_i are orthogonal.
- They are computed from M empirical solutions {x^1, x^2, ..., x^M}.
- They are optimal in the sense that they minimize the error between the original and the projected data; equivalently, each φ maximizes the mean-square projection max_φ ⟨(x, φ)^2⟩ / (φ, φ).

Proper Orthogonal Decomposition
These optimal basis functions can be calculated by:
1. Evaluating the correlation matrix: R_ij = (x^i)^T x^j.
2. Solving the M x M eigenvalue problem: R v^i = λ_i v^i.
3. Constructing the basis vectors: φ^j = Σ_{i=1}^{M} v_i^j x^i.
Use the components of the jth eigenvector to calculate the jth POD basis vector. The jth eigenvalue tells us how important the jth basis vector is.
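A short sketch of the three-step POD recipe above (the method of snapshots), using synthetic data in place of real empirical solutions:

```python
import numpy as np

# X holds M empirical solutions x^1..x^M as columns; the data here are synthetic.
n, M = 200, 15
rng = np.random.default_rng(1)
X = rng.standard_normal((n, M))

# 1. Correlation matrix R_ij = (x^i)^T x^j  (M x M)
R = X.T @ X

# 2. Eigenvalue problem R v^i = lambda_i v^i; R is symmetric, so use eigh
lam, V = np.linalg.eigh(R)
order = np.argsort(lam)[::-1]          # largest eigenvalue = most important mode
lam, V = lam[order], V[:, order]

# 3. Basis vectors phi^j = sum_i v_i^j x^i, then normalize each column
Phi = X @ V
Phi /= np.linalg.norm(Phi, axis=0)

# relative eigenvalue magnitude indicates how important each basis vector is
energy = lam / lam.sum()
```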

Approximation Methods Overview (repeated: response surface methods and kriging keep the same number of design variables but simplify the simcode analysis).

Response Surface Methodology
- Keep the same number of design variables, but simplify the simcode analysis.
- Create approximating functions for the objective and constraints.
- Optimize using the approximations.
- Update the approximations using the current optimal solution guess, and repeat.
- Response surfaces are smooth even if the design space is noisy.
- Polynomial-based modeling technique.
- Provides a compact, explicit functional relationship between response and design variables.
- Least squares is computationally inexpensive and easy to implement.

Local Approximations
- Most common are Taylor series expansions about a point x_0, with δx = x - x_0:

    J(x) = J(x_0) + ∇J(x_0)^T δx + (1/2) δx^T H(x_0) δx + ...

- Could use the first two terms: a linear approximation. Solve, reanalyze, and repeat = sequential linear programming.
- Could also include the quadratic term.
- Updating the model requires: 1 function evaluation for the constant term, n function evaluations for the gradient, and n(n+1)/2 function evaluations for the Hessian.

Local Approximations
- It is expensive to update the gradient vector and the Hessian matrix.
- One approach: perform several approximation cycles updating only the constant term; then update the linear term and repeat; finally, update the Hessian only when no other progress can be made.
- If the design space is highly nonlinear, there is no guarantee that this approach will work.
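A minimal sketch of the linear (first-order Taylor) local approximation: the n forward-difference evaluations in the loop correspond to the "n function evaluations" cost quoted above for the gradient term. The step size `h` and the test function are illustrative assumptions.

```python
import numpy as np

def linear_model(J, x0, h=1e-6):
    J0 = J(x0)                                   # 1 evaluation: constant term
    g = np.zeros_like(x0)
    for i in range(len(x0)):                     # n evaluations: gradient term
        e = np.zeros_like(x0); e[i] = h
        g[i] = (J(x0 + e) - J0) / h
    # J_tilde(x) = J(x0) + grad J(x0)^T (x - x0)
    return lambda x: J0 + g @ (x - x0), g

# usage: J_tilde approximates J near x0; solve, reanalyze, repeat (SLP)
J = lambda x: (x[0] - 2) ** 2 + x[0] * x[1] + x[1] ** 2
J_tilde, grad = linear_model(J, np.array([0.0, 0.0]))
```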

Response Surface Methodology
- Another approach: use whatever information is available to create the approximation.
- Use this approximation to make a small move in the design variables.
- Analyze the result precisely: a new function evaluation.
- Use the new function evaluation to improve the approximation to the design space: fit a response surface.
- Can use a quadratic or higher-order surface.
- Might choose to use only some of the function evaluations (e.g. those in a local neighborhood).

RSM
e.g. define ΔJ = J(x) - J(x^0). The quadratic approximation is

    ΔJ = ∇J^T δx + (1/2) δx^T H δx
       = (∂J/∂x_1) δx_1 + ... + (∂J/∂x_n) δx_n
         + (1/2) (H_11 δx_1^2 + H_22 δx_2^2 + ... + H_nn δx_n^2)
         + H_12 δx_1 δx_2 + H_13 δx_1 δx_3 + ... + H_1n δx_1 δx_n
         + H_23 δx_2 δx_3 + ... + H_{n-1,n} δx_{n-1} δx_n          (*)

(all derivatives and entries of H are evaluated at x^0).

RSM
- Assume we have evaluated the baseline plus q designs: x^0, x^1, ..., x^q, with responses J^0, J^1, ..., J^q.
- We could write q equations of the form (*) using (x^1 - x^0, J^1 - J^0), (x^2 - x^0, J^2 - J^0), ..., (x^q - x^0, J^q - J^0).
- There is a total of N = n + n(n+1)/2 unknowns: ∂J/∂x_1, ..., ∂J/∂x_n, H_11, H_12, ..., H_nn.

RSM
- q equations, N unknowns.
- If q < N, only some coefficients can be calculated.
- If q > N, use weighted least squares; weight designs closer to the current x^q more heavily.
- In general, use q ≥ n + 1 initial designs so that an initial linear approximation can be provided.
- If we have the baseline plus n designs, we can fit a linear approximation in each direction (i.e. a hyperplane).
- If we have more solutions, we can fit a quadratic or higher-order surface.
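The following sketch assembles the q-by-N system for the quadratic model (*) and solves it by weighted least squares, weighting designs close to the current point more heavily as the slide suggests. The sample designs, the test objective, and the inverse-distance weights are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def quad_features(dx):
    # columns of one row of the system: [dx_i], [dx_i^2 / 2], [dx_i dx_j]
    n = len(dx)
    quad = [0.5 * dx[i] ** 2 for i in range(n)]
    cross = [dx[i] * dx[j] for i, j in combinations(range(n), 2)]
    return np.concatenate([dx, quad, cross])     # length N = n + n(n+1)/2

n, q = 3, 12
rng = np.random.default_rng(2)
x0 = np.zeros(n)
Xs = rng.uniform(-1, 1, size=(q, n))             # sampled designs x^1..x^q
J = lambda x: 1 + x @ np.arange(1, n + 1) + x @ x
dJ = np.array([J(x) for x in Xs]) - J(x0)        # Delta J = J(x^k) - J(x^0)

A = np.vstack([quad_features(x - x0) for x in Xs])
w = 1.0 / (1e-6 + np.linalg.norm(Xs - x0, axis=1))   # closer designs weigh more
coeffs, *_ = np.linalg.lstsq(A * w[:, None], dJ * w, rcond=None)
# coeffs holds the estimated gradient entries followed by the Hessian entries
```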

RSM
In other words, fit the objective function with a polynomial, e.g. a quadratic approximation:

    J(x) = a_0 + Σ_i b_i x_i + Σ_i c_ii x_i^2 + Σ_{i,j (j<i)} c_ij x_i x_j

Update the model by including a new function evaluation, then doing a least squares fit to compute the new coefficients.

RSM
Estimation problem:

    J = X c

where J is the vector containing the M responses J^1, ..., J^M; X is an M x p matrix containing the M design vector inputs as rows (the columns depend on the approximation); and c = (c_0, c_1, ..., c_p)^T is the vector of coefficients. The least squares solution is

    c = (X^T X)^{-1} X^T J

Kriging
- RSM tends to create a global model of the design space (especially if all points are weighted equally in the least squares fit).
- It may be of limited accuracy when multiple extrema exist (especially quadratic polynomial models).
- Kriging combines a global model of the design space with a model for local deviations which interpolates the sample points.
- Developed in the fields of spatial statistics and geostatistics; the original statistical techniques were developed by the mining engineer D. G. Krige.
- Interpolation-based modeling technique.
- Computationally more expensive and not as simple to implement as RSM.

Kriging
The unknown function to be modeled is expressed as

    J(x) = f(x) + Z(x)

where f(x) is a known function that provides a global model of the design space (e.g. use RSM to fit f(x) using the M observations), and Z(x) is a Gaussian random function with zero mean and variance σ^2 that represents a localized deviation from the global model. Its covariance is

    cov(Z(x^i), Z(x^j)) = σ^2 R(x^i, x^j)

where the correlation matrix R of Z(x) is built from the correlation function R(x^i, x^j).
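A minimal kriging sketch in the spirit of J(x) = f(x) + Z(x): here the global model f is just a constant (ordinary kriging) rather than an RSM fit, and Z uses a Gaussian correlation R(x^i, x^j) = exp(-θ ||x^i - x^j||^2). The fixed θ is an assumption; practical implementations estimate it by maximum likelihood.

```python
import numpy as np

def kriging_fit(X, J, theta=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + 1e-10 * np.eye(len(X))   # correlation matrix (+ tiny nugget)
    ones = np.ones(len(X))
    Ri_J = np.linalg.solve(R, J)
    Ri_1 = np.linalg.solve(R, ones)
    beta = ones @ Ri_J / (ones @ Ri_1)                 # constant global model f
    gamma = np.linalg.solve(R, J - beta * ones)        # weights for the local deviation Z
    def predict(x):
        r = np.exp(-theta * ((X - x) ** 2).sum(-1))    # correlation with the samples
        return beta + r @ gamma                        # interpolates the sample points
    return predict

X = np.array([[0.0], [0.4], [1.0]])
J = np.sin(2 * np.pi * X[:, 0])
predict = kriging_fit(X, J)      # predict(X[i]) reproduces J[i], unlike a least squares RS
```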

Approximation Methods Overview (repeated: variable-fidelity methods combine high-fidelity and approximation models).

Variable Fidelity Models
[Figure: after initialization, the optimizer iterates on the approximation model for J(x) and g(x), with recourse to the detailed (high-fidelity) model. From Alexandrov et al.]
- Optimize using the approximate model.
- Use information from the high-fidelity model to check the approximate designs.
- Recalibrate using the high-fidelity model.

Variable Fidelity Models
Questions:
- What do we do when the design derived from the low-fidelity optimization does not provide an improvement in the true objective?
- How can we use information about the predictive capability of the approximation model to decide when to go back to the high-fidelity model?
- How do we decide when to update the approximate model?

Variable Fidelity Models
When optimization with the approximation model is unsuccessful, there are two possible approaches:
1. Improve the fidelity of the approximate model.
2. Do less optimization.
One option: use a trust region approach.
[Figure: a trust region drawn around the current iterate in the design space.]
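A bare-bones sketch of the management loop implied above, under assumed toy models: optimize on the cheap model, then check the candidate against the expensive model, and stop (or improve the approximation) when the true objective no longer improves. `J_hi` and `J_lo` are illustrative stand-ins, not models from the lecture.

```python
import numpy as np
from scipy.optimize import minimize

J_hi = lambda x: (x[0] - 1.5) ** 2 + 0.3 * np.sin(5 * x[0])   # "truth" (expensive)
J_lo = lambda x: (x[0] - 1.5) ** 2                            # cheap approximation

x = np.array([0.0])
for it in range(5):
    x_cand = minimize(J_lo, x).x            # optimize the approximate model
    if J_hi(x_cand) < J_hi(x):              # check the design with the high-fidelity model
        x = x_cand                          # true improvement: accept the step
    else:
        break   # no true improvement: stop, or improve the approximate model instead
```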

Trust Region Approach
Classic approach:
- Regulate the length of the steps taken by the iterative optimization algorithm.
- Regulate based on the quality of the current approximation model.
e.g. a quadratic Taylor series model:

    J(x_k + s_k) ≈ q_k(x_k + s_k) = J(x_k) + ∇J(x_k)^T s_k + (1/2) s_k^T B_k s_k

where s_k is the prospective step in the design variables x, and B_k is a model of the Hessian matrix at x_k.

Trust Region Approach
- Restrict the step size to a region in which we trust the quadratic model to approximate J well.
- Done by adding a constraint on the step: the trust region subproblem

    min_{s_k} q_k(x_k + s_k)   subject to   ||s_k|| ≤ δ_k

- In practice, the variables are scaled to improve performance (the step size can vary in different directions).
- Then decide whether to accept the step:

    x_{k+1} = x_k + s_k   if J(x_k + s_k) < J(x_k)
    x_{k+1} = x_k         otherwise

Trust Region Update
- The trust radius δ_k is updated adaptively.
- The update depends on the predictive quality of the quadratic model:
  - if the model did a good job of predicting J, or if there was more improvement than predicted, then increase δ;
  - if the model did a bad job of predicting J (J increased, or the decrease was much lower than predicted), then decrease δ;
  - if the model did an acceptable job of predicting J, then do not change δ.

Trust Region Update
Numerically, compare the actual and predicted decrease:

    r_k = [J(x_k) - J(x_k + s_k)] / [J(x_k) - q_k(x_k + s_k)]

Define constants r_1 and r_2 (r_1 < r_2) and apply the rules:
- if r < r_1, decrease the trust radius;
- if r > r_2, increase the trust radius.
Typical values are r_1 = 0.1, r_2 = 0.75. Note that prediction of ascent/descent is more important than prediction of the actual value.
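A sketch of the update rule just described. The slides give the thresholds r_1 = 0.1 and r_2 = 0.75 but not the shrink/grow factors; the 0.5 and 2.0 used here are conventional choices, assumed for illustration.

```python
def trust_region_update(J, q, x, s, delta, r1=0.1, r2=0.75):
    """One trust-radius update: compare actual vs. predicted decrease."""
    actual = J(x) - J(x + s)            # actual decrease in the true objective
    predicted = J(x) - q(x + s)         # decrease predicted by the quadratic model
    r = actual / predicted              # r_k as defined on the slide
    if r < r1:
        delta *= 0.5                    # poor prediction: shrink the radius (assumed factor)
    elif r > r2:
        delta *= 2.0                    # better than predicted: grow the radius (assumed factor)
    # accept the step only if J truly decreased
    x_new = x + s if actual > 0 else x
    return x_new, delta
```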

Classic Trust Region Algorithm
Choose x^0 in R^n and δ_0 > 0.
For k = 0, 1, ... until convergence do {
    Find an approximate solution s_k to the subproblem:
        min q_k(x_k + s_k)   subject to   ||s_k|| ≤ δ_k
    Compare the actual and predicted decrease:
        r_k = [J(x_k) - J(x_k + s_k)] / [J(x_k) - q_k(x_k + s_k)]
    Update x_k and δ_k.
}
(From Alexandrov et al.)

β-Correlation Method
- The basic idea is to take the low-fidelity approximate model, J_a, and correct it by scaling.
- Define the scale factor:

    β(x) = J(x) / J_a(x)

- Given the current design x_k, build a first-order model of β about x_k:

    β̃(x) = β(x_k) + ∇β(x_k)^T (x - x_k)

- Optimize with the approximate model and use the local model of β to scale the result:

    J(x) ≈ β̃(x) J_a(x)

Variable Complexity Model (VCM)
- iSIGHT uses a Variable Complexity Model (VCM).
- Use two simcodes for the same physical process:
  1. more accurate, longer running (exact): J(x)
  2. less accurate, shorter running (approximate): J_a(x)
- Compute a multiplicative or additive correction factor σ:

    multiplicative:  σ_0 = J(x^0) / J_a(x^0),   J(x) ≈ σ_0 J_a(x)
    additive:        σ_0 = J(x^0) - J_a(x^0),   J(x) ≈ σ_0 + J_a(x)

Optimization strategies:
- Optimization with the simcode J(x): conventional optimization with the simcode.
- Optimization with the approximate model J_a(x): conventional optimization with the approximate model.
- Optimization with the approximate model plus the simcode: optimize with the approximate model, update with the simcode:
  1. Run both models, calculate σ_0.
  2. Optimize using the approximate model and σ_0.
  3. Update the correction factor using the simcode and repeat.
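A sketch of the β-correlation scaling: β is linearized about the current design with a finite-difference gradient, and the scaled surrogate β̃(x) J_a(x) is what the optimizer would see. The finite-difference step `h` and the two toy models are assumptions for illustration.

```python
import numpy as np

def beta_model(J, Ja, xk, h=1e-6):
    beta_k = J(xk) / Ja(xk)                      # scale factor at the current design
    grad = np.zeros_like(xk)
    for i in range(len(xk)):                     # finite-difference gradient of beta
        e = np.zeros_like(xk); e[i] = h
        grad[i] = (J(xk + e) / Ja(xk + e) - beta_k) / h
    # first-order model: beta(x) ~ beta(xk) + grad_beta(xk)^T (x - xk)
    return lambda x: beta_k + grad @ (x - xk)

J  = lambda x: np.cos(x[0]) + x @ x              # high-fidelity model (illustrative)
Ja = lambda x: 1.0 + x @ x                       # low-fidelity model (illustrative)
xk = np.array([0.2, -0.1])
beta_tilde = beta_model(J, Ja, xk)
J_surrogate = lambda x: beta_tilde(x) * Ja(x)    # scaled surrogate used in the optimization
```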

Lecture Summary
Approximation methods are one way to capture high-fidelity responses without the computational cost.
- Design variable linking: if the physical problem allows.
- Reduced-basis methods: use a low-order representation of the design vector; use previous designs, DoE, or POD.
- Response surface methodology: use polynomial models; weighted least squares.
- Kriging: interpolation models; global + local behavior.
- Variable-fidelity models: trust region approach; β-correlation.

References
Barthelemy, J.-F. M. and Haftka, R. T., "Approximation concepts for optimum structural design - a review," Structural Optimization, 5:129-144, 1993.
Giunta, A. A. and Watson, L. T., "A comparison of approximation modeling techniques: polynomial versus interpolating models," AIAA Paper, 1998.
LeGresley, P. A. and Alonso, J. J., "Airfoil design optimization using reduced order models based on proper orthogonal decomposition," AIAA Paper.
Alexandrov, N., Dennis, J. E., Lewis, R. M. and Torczon, V., "A trust region framework for managing the use of approximation models in optimization," NASA CR-201745, ICASE Report No. 97-50, October 1997.
Gill, P. E., Murray, W. and Wright, M. H., Practical Optimization, Academic Press, 1986.
Vanderplaats, G. N., Numerical Optimization Techniques for Engineering Design, Vanderplaats R&D, 1999.
