Rank-Deficient Nonlinear Least Squares Problems and Subset Selection
1 Rank-Deficient Nonlinear Least Squares Problems and Subset Selection
Ilse Ipsen, North Carolina State University, Raleigh, NC, USA
Joint work with: Tim Kelley and Scott Pope
2 Motivating Application: Overview
- Modeling cardiovascular systems; extract biomarkers via nonlinear parameter estimation
- Nonlinear dependencies among parameters
- Computation: solve a nonlinear least squares problem with a Levenberg-Marquardt trust-region algorithm
- Rank-deficient Jacobians; errors in Jacobian evaluation
- How to regularize the Jacobian? Truncated SVD: NO. Column subset selection: YES.
3 Modeling Cardiovascular Systems
Goal: Identify parameters that regulate blood flow.
Cardiovascular system = lumped 5-compartment model: blood flow, volume, pressure, resistance, compliance.
[Figure: compartment schematic with left ventricle, arteries, systemic arteries, systemic veins, and veins; flows q, volumes V, pressures p, resistances R, compliances C]
[Pope, Olufsen, Ellwein, Novak, Kelley]
4 Computation
System of 5 ODEs with $N = 16$ parameters, parameter vector $p \in \mathbb{R}^N$:
$$ y' = F(t, y; p), \qquad y(0) = y_0 $$
Observations $d_j$ at $M$ time points $t_j$, $M \gg N$.
Nonlinear residual
$$ R(p) = \begin{pmatrix} y(t_1, p) - d_1 \\ \vdots \\ y(t_M, p) - d_M \end{pmatrix} $$
Identify parameters $p$ that minimize the difference between measured and computed quantities:
$$ \min_p \; R(p)^T R(p)/2 $$
5 Nonlinear Least Squares Problem
$$ \min_p \; R(p)^T R(p)/2 $$
Jacobian $J_n \equiv R'(p_n)$ at the current iterate $p_n$.
Levenberg-Marquardt trust-region algorithm: for $n = 0, 1, 2, \ldots$
$$ p_{n+1} = p_n - \left( \nu_n I + J_n^T J_n \right)^{-1} J_n^T R(p_n) $$
$\nu_n = 0$ and $J_n$ of full column rank: Gauss-Newton.
Here: $\nu_n \geq 0$ and $J_n$ rank deficient.
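A minimal NumPy sketch of one such step (the matrices below are hypothetical, chosen only to exercise the rank-deficient case):

```python
import numpy as np

def lm_step(J, R, nu):
    """One Levenberg-Marquardt trial step: solve (nu*I + J^T J) s = -J^T R.
    For nu > 0 the shifted system is nonsingular even when J is rank deficient."""
    n = J.shape[1]
    return np.linalg.solve(nu * np.eye(n) + J.T @ J, -J.T @ R)

# Tiny illustrative example with a rank-deficient Jacobian (equal columns)
J = np.array([[1.0, 1.0], [2.0, 2.0], [0.0, 0.0]])
R = np.array([1.0, 1.0, 1.0])
s = lm_step(J, R, nu=1e-2)
```

Note that plain `np.linalg.solve` on the unshifted normal equations would fail here, since `J.T @ J` is singular.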
6 Levenberg-Marquardt Algorithm
Inside a Levenberg-Marquardt step: while the iterate has not changed
- Trial step $s = -\left( \nu_n I + J_n^T J_n \right)^{-1} J_n^T R(p_n)$
- Trial iterate $p_t = p_n + s$
- If $p_t$ is good enough, then $p_{n+1} = p_t$ and $\nu_{n+1}$ keeps or decreases $\nu_n$; else $\nu_{n+1}$ increases $\nu_n$.
Ideally: $\nu_n \to 0$, or at least $\nu_n$ bounded; $p_n$ converge to a minimizer, or at least to a stationary point.
But here: poor convergence (Levenberg-Marquardt stagnates); the gradient at the solution is not small. Accuracy of the solution???
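The accept/reject loop above can be sketched as follows. This is a simplification: "good enough" is reduced to plain decrease of the objective (a full trust-region implementation would compare actual and predicted reduction), and the test problem and update factors are hypothetical:

```python
import numpy as np

def lm_inner(p, residual, jacobian, nu, nu_up=10.0, nu_down=0.5, max_tries=20):
    """Inner Levenberg-Marquardt loop (a sketch): retry with a larger nu
    until the trial iterate reduces the least-squares objective."""
    R = residual(p)
    J = jacobian(p)
    f = 0.5 * R @ R
    for _ in range(max_tries):
        s = np.linalg.solve(nu * np.eye(len(p)) + J.T @ J, -J.T @ R)
        pt = p + s
        Rt = residual(pt)
        if 0.5 * Rt @ Rt < f:            # trial "good enough": accept
            return pt, max(nu * nu_down, 1e-12)
        nu *= nu_up                      # reject: grow nu, shrink the step
    return p, nu                         # stagnation: iterate unchanged

# Hypothetical linear test problem: R(p) = p - (1, 2), J = I
res = lambda p: p - np.array([1.0, 2.0])
jac = lambda p: np.eye(2)
p1, nu1 = lm_inner(np.zeros(2), res, jac, nu=1.0)
```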
7 Convergence Analysis
Setting: near the solution manifold, assuming exact arithmetic.
Nonlinear iterations with rank-deficient Jacobians: Ben-Israel 1966, Boggs 1976, Deuflhard & Heindl 1979, Schaback 1985.
8 Behavior of Levenberg-Marquardt Iterates
Assumptions:
- Initial iterate $p_0$ close enough to a solution $p_*$
- $J(p)$ Lipschitz continuous
- $R(p_*)$ small but not necessarily zero
Then we can show:
- The Levenberg-Marquardt parameters $\nu_n$ remain bounded
- The iterates $p_n$ approach the solution manifold
- If the $p_n$ converge, then they converge to some solution (Cauchy sequence)
Still need to show that the $p_n$ converge.
9 Convergence of Levenberg-Marquardt Iterates
Model the nonlinear dependence among parameters: $R(p) = \bar{R}(B(p))$ with $B : \mathbb{R}^N \to \mathbb{R}^K$.
If $K = N$, then the Jacobian $J$ has full column rank.
Assumptions:
- Sufficiently close to a solution $p_*$
- $\bar{R}$ and $B$ uniformly Lipschitz continuously differentiable
- All $K$ singular values of $B'$ uniformly bounded away from 0
- All $K$ singular values of $\bar{R}'$ uniformly bounded away from 0
- $\bar{R}(b)^T \bar{R}(b)/2$ has a unique minimizer
Then: the $p_n$ converge to a solution r-linearly.
10 Summary: Convergence Analysis
Assumptions:
- Near the solution manifold
- Initial iterate $p_0$ sufficiently close to a solution $p_*$
- Nonlinear residual $R(p_*)$ small but not necessarily zero
- Dependence among parameters: $R(p) = \bar{R}(B(p))$ with $B : \mathbb{R}^N \to \mathbb{R}^K$
- Jacobians $B'$ and $\bar{R}'$ have rank $K$
- All Jacobians sufficiently smooth
Then: the iterates $p_n$ converge to some solution.
But this assumes exact arithmetic! What happens in finite precision?
11 Finite Precision Issues
- Computation of the trial step
- Effect of errors in the computed Jacobian
- Regularization of the Jacobian: truncated SVD vs. subset selection
- Singular vector perturbations: Stewart 1973
12 Computation of Trial Step
$p_{n+1} = p_n + s$ with trial step
$$ s = -\left( \nu I + J^T J \right)^\dagger J^T R $$
Works for $\nu = 0$ and rank-deficient $J$.
Computed as the minimum-norm solution of the linear least squares problem
$$ \min_x \|A x - b\|, \qquad A = \begin{pmatrix} J \\ \sqrt{\nu}\, I \end{pmatrix}, \quad b = \begin{pmatrix} -R \\ 0 \end{pmatrix} $$
Ill-conditioned or ill-posed if $\nu$ is small and $J$ is rank deficient.
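The equivalence between the shifted normal equations and the augmented least squares problem can be checked numerically (a sketch; `np.linalg.lstsq` returns the minimum-norm least squares solution, and the example matrices are hypothetical):

```python
import numpy as np

def lm_step_lstsq(J, R, nu):
    """Trial step as the minimum-norm solution of the augmented problem
        min_x || [J; sqrt(nu) I] x - [-R; 0] ||_2,
    whose normal equations are (nu*I + J^T J) x = -J^T R."""
    m, n = J.shape
    A = np.vstack([J, np.sqrt(nu) * np.eye(n)])
    b = np.concatenate([-R, np.zeros(n)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

J = np.array([[1.0, 1.0], [2.0, 2.0], [0.0, 1.0]])
R = np.array([1.0, -1.0, 0.5])
nu = 1e-3
s = lm_step_lstsq(J, R, nu)
```

Solving the augmented problem by QR avoids forming $J^T J$, which squares the condition number.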
13 Full Rank Jacobian
$J$ has full column rank:
$$ s = -\left( \nu I + J^T J \right)^{-1} J^T R $$
Condition number
$$ \kappa_\nu(J) := \left\| \left( \nu I + J^T J \right)^{-1} J^T \right\| \, \|J\| $$
Perturbed Jacobian $\tilde{J} = J + E$ has full column rank, $\|E\| \leq \epsilon \|J\|$:
$$ \tilde{s} = -\left( \nu I + \tilde{J}^T \tilde{J} \right)^{-1} \tilde{J}^T R $$
Relative error in the trial step:
$$ \frac{\|s - \tilde{s}\|}{\|s\|} \;\leq\; \kappa_\nu(J) \left( 1 + \frac{\|R\|}{\|J\|\,\|s\|} \right) \epsilon $$
The error in $J$ is amplified by the conditioning of $J$ and the nonlinear residual $R$.
14 Rank Deficient Jacobian: Regularization
Exact trial step
$$ s = -\left( \nu I + J^T J \right)^\dagger J^T R $$
Truncate the SVD of $J$: the truncated Jacobian $J_t$ has singular values
$$ \sigma_1 \geq \cdots \geq \sigma_K > \sigma_{K+1} = \cdots = \sigma_N = 0 $$
Truncated trial step
$$ s_t = -\left( \nu I + J_t^T J_t \right)^\dagger J_t^T R $$
Computed as the minimum-norm solution of
$$ \min_x \|A_t x - b\|, \qquad A_t = \begin{pmatrix} J_t \\ \sqrt{\nu}\, I \end{pmatrix}, \quad b = \begin{pmatrix} -R \\ 0 \end{pmatrix} $$
The linear least squares problem is still ill-conditioned or ill-posed.
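A sketch of the truncated-SVD trial step: keep the leading $K$ singular triplets of $J$ and apply the shifted pseudoinverse to them. The example matrices are hypothetical; when $K$ equals the full rank, the step coincides with the unregularized one, which the check below uses:

```python
import numpy as np

def tsvd_step(J, R, nu, K):
    """Trial step from the rank-K truncated SVD of J:
        s_t = -V_K diag(sigma_i / (nu + sigma_i^2)) U_K^T R."""
    U, sig, Vt = np.linalg.svd(J, full_matrices=False)
    Uk, sk, Vk = U[:, :K], sig[:K], Vt[:K, :].T
    coeffs = sk / (nu + sk**2) * (Uk.T @ (-R))
    return Vk @ coeffs

# Full-rank example: for K = 2 the step matches the direct formula
J = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = np.array([1.0, 2.0, 3.0])
s_t = tsvd_step(J, R, nu=0.1, K=2)
```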
15 Truncated SVD: Errors in Jacobian
SVD of the exact Jacobian: $J = U \Sigma V^T$ with nonzero singular values $\sigma_1 \geq \cdots \geq \sigma_K > 0$, so $\mathrm{rank}(J) = K$, and
$$ \kappa_\nu(J) := \sigma_1 \max_{\sigma_K \leq \sigma \leq \sigma_1} \frac{\sigma}{\nu + \sigma^2} $$
SVD of the perturbed Jacobian: $J + E = \tilde{U} \tilde{\Sigma} \tilde{V}^T$, $\|E\|_F \leq \epsilon \|J\|$.
$\tilde{U}$, $\tilde{V}$ are rotations of $U$, $V$ by angles $\theta$.
$\tilde{J}_t$ = truncated SVD of $J + E$ with $\mathrm{rank}(\tilde{J}_t) = K$.
Trial step of the truncated perturbed Jacobian:
$$ \tilde{s}_t = -\left( \nu I + \tilde{J}_t^T \tilde{J}_t \right)^\dagger \tilde{J}_t^T R $$
16 Relative Error for Truncated SVD
For $\epsilon$ sufficiently small,
$$ \frac{\|\tilde{s}_t - s_t\|}{\|s_t\|} \;\leq\; \kappa_\nu(J) \left[ 1 + \left(1 + 2\|J\| \tan\theta\right) \frac{\|R\|}{\|J\|\,\|s_t\|} \right] \epsilon + O(\epsilon^2) $$
Here $\tan\theta$ measures the accuracy of the singular vectors of the truncated Jacobian $\tilde{J}_t$.
The error in $\tilde{J}_t$ is amplified by:
- the conditioning of $J$,
- the inaccuracy of the singular vectors,
- the nonlinear residual $R$.
The trial step from the truncated SVD is not accurate if:
- $J$ is close to a matrix of rank $K-1$: $\kappa_\nu(J) \gg 1$
- the singular vectors have low accuracy: $\tan\theta \gg 0$
- the nonlinear residual $R$ is large
17 Alternative Regularization: Subset Selection
Choose $K$ "very linearly independent" columns $J_1$ from $J$:
$$ J = \begin{pmatrix} J_1 & J_2 \end{pmatrix} $$
Trial step
$$ \hat{s} = -\left( \nu I + J_1^T J_1 \right)^{-1} J_1^T R $$
Computed as the solution of
$$ \min_x \|A_1 x - b\|, \qquad A_1 = \begin{pmatrix} J_1 \\ \sqrt{\nu}\, I \end{pmatrix}, \quad b = \begin{pmatrix} -R \\ 0 \end{pmatrix} $$
The linear least squares problem is now well-conditioned.
18 Subset Selection: Errors in Jacobian
$J_1$ selected by strong RRQR [Gu & Eisenstat 1996]:
$$ \frac{\sigma_j}{\sqrt{1 + K(N-K)}} \;\leq\; \sigma_j(J_1) \;\leq\; \sigma_j, \qquad 1 \leq j \leq K $$
The singular values of $J_1$ are close to the largest singular values of $J$.
Perturbed Jacobian $\tilde{J} = J + E$ with $\mathrm{rank}(J + E) \geq K$, $\|E\| \leq \epsilon \|J\|$.
Select the same columns for $J_1$ and $\tilde{J}_1$: $\tilde{J} = \begin{pmatrix} \tilde{J}_1 & \tilde{J}_2 \end{pmatrix}$, and
$$ \tilde{s} = -\left( \nu I + \tilde{J}_1^T \tilde{J}_1 \right)^{-1} \tilde{J}_1^T R $$
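A sketch of subset selection, with a simple greedy column-pivoting heuristic standing in for the strong RRQR of Gu & Eisenstat (which carries the stronger singular-value guarantees quoted above); `select_columns` and `subset_step` are illustrative names, not the talk's implementation:

```python
import numpy as np

def select_columns(J, K):
    """Greedy column pivoting: repeatedly pick the column of largest norm,
    then deflate that direction from the remaining columns."""
    Jw = J.astype(float).copy()
    cols = []
    for _ in range(K):
        norms = np.linalg.norm(Jw, axis=0)
        j = int(np.argmax(norms))
        cols.append(j)
        q = Jw[:, j] / norms[j]
        Jw -= np.outer(q, q @ Jw)        # remove the chosen direction
    return cols

def subset_step(J, R, nu, K):
    """Trial step using only the K selected columns; parameters for
    non-selected columns are left at their nominal values (step 0)."""
    cols = select_columns(J, K)
    J1 = J[:, cols]
    s1 = np.linalg.solve(nu * np.eye(K) + J1.T @ J1, -J1.T @ R)
    s = np.zeros(J.shape[1])
    s[cols] = s1
    return s, cols

# Rank-2 example: columns 0 and 1 are identical, so only one is selected
J = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
R = np.array([1.0, 0.5, -1.0])
s, cols = subset_step(J, R, nu=1e-2, K=2)
```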
19 Subset Selection: Relative Error
Condition number
$$ \kappa_\nu(J_1) = \sigma_1 \max_{\hat{\sigma}_K \leq \sigma \leq \sigma_1} \frac{\sigma}{\nu + \sigma^2}, \qquad \hat{\sigma}_K = \sigma_K / \sqrt{1 + K(N-K)} $$
Relative error in the subset selection trial step:
$$ \frac{\|\tilde{s} - \hat{s}\|}{\|\hat{s}\|} \;\leq\; \kappa_\nu(J_1) \left( 1 + \frac{\|R\|}{\|J\|\,\|\hat{s}\|} \right) \epsilon $$
The error in $J$ is amplified by the conditioning of $J_1$ and the nonlinear residual $R$.
This is the same as the full-rank bound, applied to $J_1$.
20 Numerical Experiments
Goal: Design the simplest possible setting that reproduces the failures of truncated SVD observed in the cardiovascular model.
21 Numerical Experiments
Driven harmonic oscillator:
$$ (1 + \delta)\, y'' + (c_1 + c_2)\, y' + k\, y = 2 \sin(5t), \qquad y(0) = y_0, \quad y'(0) = y_0' $$
4 parameters: $p = \begin{pmatrix} \delta & c_1 & c_2 & k \end{pmatrix}^T$
Numerical solution $\tilde{y}(t_j)$ from Matlab's ode15s.
Nonlinear residual
$$ R(p) = \begin{pmatrix} \tilde{y}(t_1) - d_1 \\ \vdots \\ \tilde{y}(t_M) - d_M \end{pmatrix} $$
Estimate $p$ by solving the nonlinear least squares problem $\min_p R(p)^T R(p)/2$.
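The residual assembly can be imitated in Python. This is a sketch under several stated simplifications: a fixed-step RK4 integrator stands in for Matlab's ode15s, the $\delta$-term is dropped (so only $c_1$, $c_2$, $k$ remain), and the time grid and data are hypothetical. The built-in rank deficiency shows up because only the sum $c_1 + c_2$ enters the model, so different splits of the same sum give identical residuals:

```python
import numpy as np

def rk4(f, y0, ts):
    """Fixed-step classical RK4 (stand-in for ode15s in this sketch)."""
    ys = [np.asarray(y0, float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h / 2 * k1)
        k3 = f(t0 + h / 2, y + h / 2 * k2)
        k4 = f(t1, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(ys)

def residual(p, ts, d):
    """R(p)_j = y(t_j; p) - d_j for y'' + (c1 + c2) y' + k y = 2 sin(5t)."""
    c1, c2, k = p
    f = lambda t, y: np.array([y[1], 2 * np.sin(5 * t) - (c1 + c2) * y[1] - k * y[0]])
    y = rk4(f, [0.0, 0.0], ts)
    return y[:, 0] - d

# "Data" generated with c1 + c2 = 1, k = 1 (zero-residual setting)
ts = np.linspace(0.0, 2.0, 201)
d = rk4(lambda t, y: np.array([y[1], 2 * np.sin(5 * t) - 1.0 * y[1] - 1.0 * y[0]]),
        [0.0, 0.0], ts)[:, 0]
R = residual(np.array([0.5, 0.5, 1.0]), ts, d)
```

Any parameters with $c_1 + c_2 = 1$ and $k = 1$ reproduce the data exactly, which is precisely the nonlinear dependence that makes the Jacobian rank deficient.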
22 Numerical Experiments: Assumptions
$$ (1 + \delta)\, y'' + (c_1 + c_2)\, y' + k\, y = 2 \sin(5t) $$
- Highly accurate Jacobians: compute the columns of $J$ from the sensitivities $\partial y / \partial p$
- Zero residual: data $d$ generated from the exact parameters $p_* = (\ldots)^T$
- Initial guess $p_0 = (\ldots)^T$
- Singular values of the initial Jacobian: one zero singular value by design
- Need to recover $c_1 + c_2 = 1$: $\partial R / \partial c_1 = \partial R / \partial c_2$
23 Numerical Experiments: Zero Residual
Assumptions for subset selection:
- The rank $K = 3$ of the Jacobian is known
- Subset selection is applied only to the initial Jacobian
- All Levenberg-Marquardt iterations work with the same $K$ columns
- Parameters corresponding to the $N - K = 1$ non-selected columns are set to nominal values
Results:
- Exact parameters: $p_* = (\ldots)^T$
- Truncated SVD: $p = (\ldots)^T$
- Subset selection: $p = (\ldots)^T$, a little more accurate
24 Convergence History: Zero Residual
[Figure: gradient norm and least squares error vs. iterations, truncated SVD vs. subset selection]
Subset selection converges faster and is slightly more accurate than truncated SVD.
25 Numerical Experiments: Non-Zero Residual
$$ (1 + \delta)\, y'' + (c_1 + c_2)\, y' + k\, y = 2 \sin(5t) $$
Non-zero residual: componentwise relative perturbation of the data $d$ by $10^{-4}$.
The singular values of the Jacobian have not changed.
- Exact parameters: $p_* = (\ldots)^T$
- Truncated SVD: $p = (\ldots)^T$, with $\delta$ completely wrong
- Subset selection: $p = (\ldots)^T$, much more accurate
26 Truncated SVD vs. Subset Selection
What is really going on?
27 General Least Squares Problems
$$ \min_x \|Ax - b\|, \qquad A \in \mathbb{R}^{M \times N}, \quad M \geq N $$
- Singular values $\sigma_1 \geq \cdots \geq \sigma_K \gg \sigma_{K+1} \geq \cdots \geq \sigma_N > 0$: a least squares problem with an ill-conditioned matrix
- Truncated SVD: singular values $\sigma_1 \geq \cdots \geq \sigma_K > \sigma_{K+1} = \cdots = \sigma_N = 0$: the least squares problem is now ill-posed
- Subset selection: $K$ columns of $A$ selected by strong RRQR, with singular values at least $\sigma_K / \sqrt{1 + K(N-K)} > 0$: a least squares problem with a well-conditioned matrix
28 The Problem with Truncated SVD
$$ \min_x \|Ax - b\| $$
$A$ has singular values $\sigma_1 \geq \cdots \geq \sigma_k \geq \cdots \geq \sigma_r > 0$.
$s = A^\dagger b$ is the minimal norm solution.
Truncated SVD: $\min_x \|A_t x - b\|$, where $A_t$ has singular values $\sigma_1 \geq \cdots \geq \sigma_k$.
$s_t = A_t^\dagger b$ is the minimal norm solution, with residual $r_t = b - A s_t$.
Relative error:
$$ \frac{\|s_t - s\|}{\|s_t\|} \;\leq\; \frac{\sigma_1}{\sigma_r} \, \frac{\|r_t\|}{\|A\|\,\|s_t\|} $$
A small residual does not imply that $s_t$ is accurate.
The bound is independent of how many singular values are truncated.
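A two-by-two illustration of this point (hypothetical numbers): truncating a tiny singular value leaves the residual tiny, yet the minimum-norm solution changes by order one:

```python
import numpy as np

A = np.diag([1.0, 1e-6])
b = np.array([1.0, 1e-6])            # minimal-norm solution of A x = b: [1, 1]
s = np.linalg.pinv(A) @ b

U, sig, Vt = np.linalg.svd(A)
At = U[:, :1] * sig[:1] @ Vt[:1, :]  # rank-1 truncation: drops sigma_2 = 1e-6
st = np.linalg.pinv(At) @ b          # truncated solution: [1, 0]
rt = b - A @ st                      # residual stays tiny, O(1e-6)
```

The residual `rt` is of size $10^{-6}$, yet the second component of the solution is lost entirely.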
29 Summary
- Parameter estimation with nonlinear dependences, expressed as a nonlinear least squares problem
- Solved by a Levenberg-Marquardt trust-region algorithm
- Rank-deficient Jacobians; errors in Jacobian evaluation; non-zero residuals
- How to regularize the Jacobian? Truncated SVD: NO. Subset selection: YES.
References
- I. C. F. Ipsen, C. T. Kelley, and S. R. Pope, Rank-Deficient Nonlinear Least Squares Problems and Subset Selection, SIAM J. Numer. Anal. 49(3), pp. 1244-1266, 2011.
- S. R. Pope, M. S. Olufsen, L. M. Ellwein, V. Novak, and C. T. Kelley, Estimation and Identification of Parameters in a Lumped Cerebrovascular Model, Mathematical Biosciences and Engineering 6(1), pp. 93-115, January 2009. doi:10.3934/mbe.2009.6.93
ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation Prof. om Overbye Dept. of Electrical and Computer Engineering exas A&M University overbye@tamu.edu Announcements
More informationInexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty
Inexact Newton Methods Applied to Under Determined Systems by Joseph P. Simonis A Dissertation Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in Partial Fulfillment of the Requirements for
More informationLevenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation
Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation E. Bergou S. Gratton L. N. Vicente June 26, 204 Abstract The Levenberg-Marquardt
More informationRecovering any low-rank matrix, provably
Recovering any low-rank matrix, provably Rachel Ward University of Texas at Austin October, 2014 Joint work with Yudong Chen (U.C. Berkeley), Srinadh Bhojanapalli and Sujay Sanghavi (U.T. Austin) Matrix
More informationNotes on Conditioning
Notes on Conditioning Robert A. van de Geijn The University of Texas Austin, TX 7872 October 6, 204 NOTE: I have not thoroughly proof-read these notes!!! Motivation Correctness in the presence of error
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of
More information1.1.1 Introduction. SLAC-PUB June 2009
LOCO with constraints and improved fitting technique Xiaobiao Huang, James Safranek Mail to: xiahuang@slac.stanford.edu Stanford Linear Accelerator Center, Menlo Park, CA 9405, USA Greg Portmann Lawrence
More informationLecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra
Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental
More informationMath 671: Tensor Train decomposition methods II
Math 671: Tensor Train decomposition methods II Eduardo Corona 1 1 University of Michigan at Ann Arbor December 13, 2016 Table of Contents 1 What we ve talked about so far: 2 The Tensor Train decomposition
More informationSTOP, a i+ 1 is the desired root. )f(a i) > 0. Else If f(a i+ 1. Set a i+1 = a i+ 1 and b i+1 = b Else Set a i+1 = a i and b i+1 = a i+ 1
53 17. Lecture 17 Nonlinear Equations Essentially, the only way that one can solve nonlinear equations is by iteration. The quadratic formula enables one to compute the roots of p(x) = 0 when p P. Formulas
More informationStatistics 580 Optimization Methods
Statistics 580 Optimization Methods Introduction Let fx be a given real-valued function on R p. The general optimization problem is to find an x ɛ R p at which fx attain a maximum or a minimum. It is of
More informationProblem set 7 Math 207A, Fall 2011 Solutions
Problem set 7 Math 207A, Fall 2011 s 1. Classify the equilibrium (x, y) = (0, 0) of the system x t = x, y t = y + x 2. Is the equilibrium hyperbolic? Find an equation for the trajectories in (x, y)- phase
More informationSparsity in system identification and data-driven control
1 / 40 Sparsity in system identification and data-driven control Ivan Markovsky This signal is not sparse in the "time domain" 2 / 40 But it is sparse in the "frequency domain" (it is weighted sum of six
More informationIntroduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems
New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems Z. Akbari 1, R. Yousefpour 2, M. R. Peyghami 3 1 Department of Mathematics, K.N. Toosi University of Technology,
More information