Computing the Geodesic Interpolating Spline
Anna Mills 1, Tony Shardlow 1, and Stephen Marsland 2

1 The University of Manchester, UK
2 Massey University, NZ

Abstract. We examine non-rigid image registration by knotpoint matching. We consider registering two images, each with a set of knotpoints marked, where one of the images is to be registered to the other by a nonlinear warp so that the knotpoints on the template image are exactly aligned with the corresponding knotpoints on the reference image. We explore two approaches for computing the registration by the Geodesic Interpolating Spline. First, we describe a method which exploits the structure of the problem to permit efficient optimization, and second, we outline an approach using the framework of classical mechanics.

1 Introduction and Formulation of the Problem

We develop methods for computation of the Geodesic Interpolating Spline (GIS) as described by Marsland and Twining in [1]. A comprehensive survey of registration methods is given in [2]. The GIS was developed from the thin-plate spline [3] and clamped-plate spline [4] to provide a diffeomorphic mapping between images, so that no folds or tears are introduced and no information is lost from the image being mapped. We formulate the GIS as a minimization problem as follows. Minimize

l(q_i(t), v(t, x)) = \frac{1}{2} \int_0^1 \int_B \| L v(t, x) \|^2_{\mathbb{R}^d} \, dx \, dt,   (1)

over deformation fields v(t, x) \in \mathbb{R}^d and paths q_i(t) \in B for i = 1, \dots, n_c, where B \subset \mathbb{R}^d is the unit-ball domain of the image and L = \nabla^2, approximating the Willmore energy. We have the constraints, for 0 \le t \le 1, i = 1, \dots, n_c,

\frac{dq_i}{dt} = v(t, q_i(t)), \quad q_i(0) = P_i, \quad q_i(1) = Q_i.   (2)

Using techniques in [5], we can represent the minimizing vector field as

v(t, x) = \sum_{i=1}^{n_c} \alpha_i(t) G(q_i(t), x),   (3)

where \alpha_i(t) and q_i(t) are respectively multipliers and knotpoint positions at time t for a set of n_c knotpoints, and where G(x, y) is the Green's function derived by Boggio [6]. We experiment numerically with the 2-dimensional Green's function G(x, y) = |x - y|^2 \ln |x - y|^2 for the biharmonic operator with zero Dirichlet and Neumann boundary conditions on B. In this way, we can derive

\min \int_0^1 \sum_{i,j=1}^{n_c} \alpha_i \cdot \alpha_j \, G(q_i, q_j) \, dt, \quad \text{such that} \quad \frac{dq_i}{dt} = \sum_{j=1}^{n_c} \alpha_j G(q_i, q_j),   (4)

q_i(0) = P_i, \quad q_i(1) = Q_i, \quad i = 1, \dots, n_c.   (5)

We explore two methods for the minimization. In Sec. 2, we examine the structure of the discretized version of (4) and use an optimization method exploiting this structure. In Sec. 3, we reformulate the problem in a Hamiltonian framework to compute the GIS. In Sec. 4, we test the methods on brain images with knotpoints marked by clinicians. In Sec. 5, we summarize our results.

2 Numerical Optimization Exploiting Partial Separability

To find the minimizer of (4), we discretize in time to obtain a finite-dimensional system. This yields a constrained optimization problem with a clear structure. The numerical optimization routine Lancelot B, from the Galahad suite [7], uses group partially separable structure to express the dependence between different variables and make the structure of the resulting Hessian matrices clear, improving performance on large-scale problems. The discretization of the GIS is partially separable because of the dependence on time. We move to a discretized version of the problem using time step \Delta t = 1/N. We use \alpha_i^n \approx \alpha_i(n \Delta t), n = 0, \dots, N-1 and q_i^n \approx q_i(n \Delta t), n = 0, \dots, N to give the problem in the following form. Minimize over \alpha_i^n, q_i^n, i = 1, \dots, n_c,

l\{\alpha_i^n, q_i^n\} = \frac{1}{2} \sum_{n=0}^{N-1} \sum_{i,j=1}^{n_c} \alpha_i^n \cdot \alpha_j^n \, G(q_i^n, q_j^n)   (6)

such that

N (q_i^{n+1} - q_i^n) = \sum_{j=1}^{n_c} \alpha_j^n G(q_i^n, q_j^n), \quad n = 0, \dots, N-1, \quad i = 1, \dots, n_c,   (7)

with conditions q_i^0 = P_i and q_i^N = Q_i, i = 1, \dots, n_c. As detailed in [7], Lancelot B uses an iterative method. Outer iterations minimize augmented Lagrangian merit functions, and inner iterations minimize a quadratic model of the merit function.
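To make the discretization concrete, the energy (6), the flow constraints (7), and the 2-D biharmonic Green's function can be sketched as follows. This is a minimal NumPy illustration written for this transcription; the array shapes and function names are our own and not taken from the authors' implementation, and the kernel ignores the boundary corrections of the true clamped-plate Green's function.

```python
import numpy as np

def green_2d(x, y):
    """Unbounded-domain 2-D biharmonic kernel |x - y|^2 ln |x - y|^2,
    with the convention that it vanishes as x -> y."""
    r2 = np.sum((x - y) ** 2)
    return 0.0 if r2 == 0.0 else r2 * np.log(r2)

def objective(alpha, q):
    """Discretized GIS energy (6):
    0.5 * sum_n sum_{i,j} alpha_i^n . alpha_j^n G(q_i^n, q_j^n).
    alpha has shape (N, n_c, 2); q has shape (N + 1, n_c, 2)."""
    N, n_c, _ = alpha.shape
    total = 0.0
    for n in range(N):
        for i in range(n_c):
            for j in range(n_c):
                total += alpha[n, i] @ alpha[n, j] * green_2d(q[n, i], q[n, j])
    return 0.5 * total

def velocity_residuals(alpha, q):
    """Residuals of the discretized flow constraints (7):
    N (q_i^{n+1} - q_i^n) - sum_j alpha_j^n G(q_i^n, q_j^n)."""
    N, n_c, _ = alpha.shape
    res = np.zeros((N, n_c, 2))
    for n in range(N):
        for i in range(n_c):
            drift = sum(alpha[n, j] * green_2d(q[n, i], q[n, j])
                        for j in range(n_c))
            res[n, i] = N * (q[n + 1, i] - q[n, i]) - drift
    return res
```

A feasible point of the discretized problem is any (alpha, q) driving these residuals to zero while the endpoint conditions q_i^0 = P_i and q_i^N = Q_i hold.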
Lancelot B minimizes a function where the objective function and constraints can be described in terms of a set of group functions and element functions, where each element function involves only a small subset of the minimization variables. Specifically, Lancelot B solves problems with objective functions of the form

f(x) = \sum_{i \in \Gamma} w_i^g \, g_i \Big( \sum_{j \in E_i} w_{ij}^e \, e_j(x_j^e) \Big), \quad x = (x_1, x_2, \dots, x_n).   (8)

In the above, \Gamma is a set of indices of group functions g_i, E_i is a set of nonlinear element functions e_j, and w_i^g and w_{ij}^e are respectively group and element weight parameters. The constraints must be of the form

c_i(x) = w_i^g \, g_i \Big( \sum_{j \in E_i} w_{ij}^e \, e_j(x_j^e) + a_i^T x \Big) = 0,   (9)

for i in the set of indices of constraints \Gamma^c. Examining our objective function, we see that there is a natural division into N groups, the n-th group being given by

w_n^g \, g_n \Big( \sum_{j \in E_n} w_{nj}^e \, e_j(x_j^e) \Big) = \frac{1}{2} \sum_{i,j=1}^{n_c} \big( \alpha_i^n \, G(q_i^n, q_j^n) \big) \cdot \alpha_j^n.   (10)

In this notation, x_j^e is the vector containing the optimization variables (q_i^n, \alpha_i^n) for i = 1, \dots, n_c. Similarly, the 2 n_c N velocity constraints in (7) can be characterized by 2 n_c N groups. We require that the start and end points of each control-point path coincide with the landmarks on, respectively, the floating and reference images. This gives 4 n_c constraints of the form

0 = (q_i^0 - P_i) w, \quad 0 = (q_i^N - Q_i) w, \quad i = 1, \dots, n_c.   (11)

Hence we have 4 n_c groups characterizing the landmark constraints, each group being weighted by some w \gg 1. In total, we have N + 2 n_c N + 4 n_c groups characterizing the problem. We see the block-diagonal sparsity structure of the Hessian of the augmented Lagrangian function for our problem in Fig. 1, where the Hessian for a problem involving 5 time steps and 8 knotpoints is shown, calculated using a numerical scheme on the augmented Lagrangian function as used in Lancelot B,

L_A(x, \lambda; \mu) = f(x) - \sum_{i \in \Gamma^c} \lambda_i c_i(x) + \frac{1}{2\mu} \sum_{i \in \Gamma^c} c_i^2(x),   (12)

where \mu is the penalty parameter and \lambda_i for i \in \Gamma^c are Lagrangian multipliers. There are 5 time blocks on the main diagonal, each with coupling to the adjacent time blocks. There is a banded off-diagonal structure, the two bands being due to the constraints being divided into x-dimension and y-dimension constraints. Lancelot B uses Newton methods, which require Hessian approximations.
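The augmented Lagrangian merit function (12) minimized in the outer iterations is simple to state in code. The following is a generic sketch, not Lancelot B's internal group-structured representation; the function and variable names are illustrative.

```python
import numpy as np

def augmented_lagrangian(f, cons, x, lam, mu):
    """Merit function (12): f(x) - sum_i lam_i c_i(x) + (1/(2 mu)) sum_i c_i(x)^2,
    where mu is the penalty parameter and lam the Lagrange multiplier estimates."""
    c = np.asarray(cons(x))
    return f(x) - lam @ c + (c @ c) / (2.0 * mu)

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
cons = lambda x: np.array([x[0] + x[1] - 1.0])

# At a feasible point the constraint terms vanish and L_A reduces to f.
x_feasible = np.array([0.5, 0.5])
value = augmented_lagrangian(f, cons, x_feasible, lam=np.array([1.0]), mu=0.1)
```

Decreasing mu penalizes constraint violation more heavily; Lancelot B additionally exploits the group partially separable structure of f and the c_i when building quadratic models of this merit function.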
The routine exploits group partial separability of the problem to make calculation and storage of the Hessian approximations more efficient.

Fig. 1. The sparsity structure of the Hessian.

3 Classical Mechanics Approach

We present a novel formulation of the GIS for image registration as a problem in Hamiltonian dynamics. The GIS problem is given by the minimization problem (4). We can treat this as a Lagrangian by setting

L(q, \dot{q}) = \frac{1}{2} \sum_{i,j=1}^{n_c} \alpha_i \cdot \alpha_j \, G(q_i, q_j),   (13)

where q = (q_1, \dots, q_{n_c}) and \dot{q} = (dq_1/dt, \dots, dq_{n_c}/dt) represent position and velocity, respectively. Following Arnold [8], we see that the Hamiltonian of the system, H(p, q) = p \cdot \dot{q} - L(q, \dot{q}), is the Legendre transform of the Lagrangian function as a function of the velocity \dot{q}, where p is the generalized momentum \partial L / \partial \dot{q}, and so, with p = \alpha,

\dot{q} = \frac{\partial H}{\partial \alpha} = \frac{\partial H}{\partial p}, \quad \dot{\alpha} = -\frac{\partial H}{\partial q} = \frac{\partial L}{\partial q}.   (14)

Construct a matrix A \in \mathbb{R}^{n_c \times n_c} so that A_{ij} = G(q_i, q_j), i, j = 1, \dots, n_c, and a matrix \mathcal{G} \in \mathbb{R}^{d n_c \times d n_c} as \mathcal{G} = A \otimes I (where I is the d-dimensional identity matrix). We can use (4) and define a vector \alpha = [\alpha_1, \alpha_2, \dots, \alpha_{n_c}] to derive dq/dt = \mathcal{G} \alpha, and hence we see that the generalized momentum is given by \partial L / \partial \dot{q} = \mathcal{G}^{-1} \mathcal{G} \alpha = \alpha. Substituting this expression into the standard Euler-Lagrange equations [8] gives \partial L / \partial q = d\alpha/dt. Hence, with (14), we have a coupled system of Hamiltonian equations with initial conditions

\begin{bmatrix} q(0) \\ \alpha(0) \end{bmatrix} = \begin{bmatrix} P \\ A \end{bmatrix} = Y,   (15)

where P is the vector of initial knotpoint positions in (4) and A is the initial vector of generalized momentum. Substituting (13) and the derivative dq/dt = \mathcal{G} \alpha into the expression for the Hamiltonian of the system gives

H(\alpha, q) = \alpha \cdot \dot{q} - L(q, \dot{q}) = \frac{1}{2} \sum_{i,j=1}^{n_c} \alpha_i \cdot \alpha_j \, G(q_i, q_j),   (16)

so that H is now a function of \alpha and q. We have shown that the solutions q_i(t) and \alpha_i(t) of (4) are solutions of the Hamiltonian system (17) below. We solve the nonlinear system of equations \Phi(A; P) = Q for A as a shooting problem, where \Phi(A; P) := q(1) is the position component of the solution of the Hamiltonian system

\frac{dq_i}{dt} = \sum_{j=1}^{n_c} \alpha_j G(q_i, q_j), \quad \frac{d\alpha_i}{dt} = -\sum_{j=1}^{n_c} (\alpha_i \cdot \alpha_j) \frac{\partial G(q_i, q_j)}{\partial q_i}, \quad i = 1, \dots, n_c,   (17)

with initial conditions given in (15).

Numerical Implementation. To solve (17), we discretize in time. We choose to discretize using the Forward Euler method. Experiments with symplectic methods have shown no advantage for this problem, principally because it is a boundary value problem where long-time simulations are not of interest, and no suitable explicit symplectic integrators are available. Using the notation q_i^n \approx q_i(n \Delta t), \alpha_i^n \approx \alpha_i(n \Delta t), n = 0, \dots, N, \Delta t = 1/N, we have

\begin{bmatrix} q^{n+1} \\ \alpha^{n+1} \end{bmatrix} = \begin{bmatrix} q^n \\ \alpha^n \end{bmatrix} + \Delta t \begin{bmatrix} \partial H(q^n, \alpha^n) / \partial \alpha \\ -\partial H(q^n, \alpha^n) / \partial q \end{bmatrix}, \quad \begin{bmatrix} q^0 \\ \alpha^0 \end{bmatrix} = \begin{bmatrix} P \\ A \end{bmatrix} = Y.   (18)

We wish to examine the variation with respect to the initial momentum A in order to provide Jacobians for the nonlinear solver. The initial positions P remain fixed. Using the Forward Euler scheme for some function f, we have X^{n+1} = X^n + \Delta t \, f(X^n) with initial condition X^0 = Y. Differentiating with respect to A gives

\frac{dX^{n+1}}{dA} = \frac{dX^n}{dA} + \Delta t \, \frac{df(X^n)}{dX^n} \frac{dX^n}{dA}, \quad \frac{dX^0}{dA} = [0, I]^T,   (19)

where I is the 2 n_c \times 2 n_c identity matrix. Let J^n be the Jacobian and solve numerically the coupled system of equations

J^{n+1} = J^n + \Delta t \, \frac{df(X^n)}{dX^n} J^n, \quad X^{n+1} = X^n + \Delta t \, f(X^n),   (20)

with initial conditions J^0 = [0, I]^T, X^0 = Y. In our problem, we have

f(X) = \begin{bmatrix} \partial H / \partial \alpha \\ -\partial H / \partial q \end{bmatrix}, \quad X = \begin{bmatrix} q \\ \alpha \end{bmatrix}, \quad \frac{df(X^n)}{dX^n} = \begin{bmatrix} \partial^2 H / \partial q \, \partial \alpha & \partial^2 H / \partial \alpha^2 \\ -\partial^2 H / \partial q^2 & -\partial^2 H / \partial \alpha \, \partial q \end{bmatrix}.   (21)

The entries of the Jacobian in (21) can be calculated explicitly. The analytic calculation of the Jacobian permits efficient solution of the nonlinear equation \Phi(A; P) = Q using the NAG nonlinear solver nag_nlin_sys_sol [9].

4 Comparison of Techniques

Numerical Optimization Approach. The inner iterations of the optimization method of Lancelot B use a minimization of a quadratic model function for which an
approximate minimum is found. This leads to a model reduction by solving one or more quadratic minimization problems, requiring the solution of a sequence of linear systems. The Lancelot optimization package allows a choice of linear solver from 12 available options, as described in [7]. The tests use 10 time steps on four selected test cases, as illustrated in Fig. 2, two comprising knotpoints taken from adjacent points in a data-set of 123 knotpoints hand-annotated on an MRI brain image, and two comprising points taken equally spaced throughout the data-set.

Fig. 2. Test cases for testing the linear solvers, showing P_i as circles and Q_i as points: 10 spread points, 10 close points, 20 spread points, 20 close points.

We see the results of the experiment in Table 1. In general, the problems with the points closer together in the domain are hardest to solve. However, the method using the Munksgaard preconditioner, for instance, performs better with 10 close points than with 10 points spread through the domain. The expanding band preconditioner performs the best over the four test cases, but still shows unexpected behaviour in the 20-point tests. It is clear from these experiments that it is difficult to predict how an individual linear solver will perform on a particular problem. None of the linear solvers resulted in convergence for the case involving all of the 123 knotpoints.

Table 1. Time taken to converge (in seconds) for each linear solver (Conjugate Gradient, Diagonal Band, Expanding Band, Munksgaard, Lin-More, Multifrontal) on the 10 Spread, 10 Close, 20 Spread and 20 Close test cases.

In order to understand this behaviour better, we explore the change in condition number of the interpolation matrix with respect to the minimum separation between knotpoints. First, we examine the interpolation matrix for the clamped-plate spline. Computing the clamped-plate spline requires solving a linear system involving an interpolation matrix G, constructed of biharmonic Green's functions in the same manner as that in which we constructed the matrix A in Sec. 3, namely G_{ij} = G(q_i, q_j), where G(\cdot, \cdot) denotes the d-dimensional biharmonic Green's function and q_i, q_j are d-dimensional knotpoint position vectors. This interpolation matrix is also a key feature of the Hessian of the augmented Lagrangian, as discussed in Sec. 2. In Fig. 3, we see the change in condition number of the interpolation matrix for two parallel knotpoint paths. From the literature [10], we expect the condition number in the 2-norm, \kappa_2(G), of the interpolation matrix to vary with the minimum separation of the knotpoints in a manner bounded above by (\min_{i,j} \|q_i - q_j\|)^{-\beta}, \beta \in \mathbb{R}^+. Accordingly, for comparison, we calculate and plot \alpha (\min_{i,j} \|q_i - q_j\|)^{-\beta}. We suspect the poor performance of the linear solvers in the test cases is due to very large condition numbers for small separations.

Fig. 3. Condition numbers for knotpoints with decreasing separation.

Classical Mechanics Approach. Experiments show that the Hamiltonian implementation can solve all of the test cases illustrated in Fig. 2 in less than one second, whereas the Lancelot B implementation takes over 18 seconds to solve the test cases. We see the effect of an increase in the number of time steps in Fig. 4, comparing results using a numerical Jacobian with those using a true Jacobian. We see that the performance of the method using the true Jacobian is much superior to that using a numerical Jacobian, both in terms of function evaluations and of time. These tests used the first 6 knotpoints of the 123-point set. In Fig. 4, we also see the effect on the performance of the Hamiltonian implementation of an increase in the number of knotpoints, both with a numerical Jacobian and with a user-supplied analytic Jacobian, as described above.
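The shooting map Phi(A; P) of the Hamiltonian system (17) under forward Euler can be sketched as follows. This is an illustrative NumPy implementation written for this transcription, using the unbounded-domain kernel G(x, y) = |x - y|^2 ln |x - y|^2 rather than the boundary-corrected Green's function, and omitting the analytic Jacobian propagation (19)-(21) that the authors supply to the NAG solver.

```python
import numpy as np

def kernel(qi, qj):
    """Kernel G(q_i, q_j) = r^2 ln r^2 with r^2 = |q_i - q_j|^2 (0 at r = 0)."""
    r2 = np.sum((qi - qj) ** 2)
    return 0.0 if r2 == 0.0 else r2 * np.log(r2)

def kernel_grad(qi, qj):
    """Gradient of G with respect to its first argument:
    d/dq_i [r^2 ln r^2] = 2 (q_i - q_j)(ln r^2 + 1)."""
    d = qi - qj
    r2 = np.sum(d ** 2)
    return np.zeros_like(d) if r2 == 0.0 else 2.0 * d * (np.log(r2) + 1.0)

def shoot(A, P, N=100):
    """Integrate the Hamiltonian system (17) with forward Euler (18)
    from initial positions P and initial momenta A; return Phi(A; P) = q(1)."""
    q = np.array(P, dtype=float)
    alpha = np.array(A, dtype=float)
    dt = 1.0 / N
    n_c = q.shape[0]
    for _ in range(N):
        dq = np.zeros_like(q)
        da = np.zeros_like(alpha)
        for i in range(n_c):
            for j in range(n_c):
                dq[i] += alpha[j] * kernel(q[i], q[j])
                da[i] -= (alpha[i] @ alpha[j]) * kernel_grad(q[i], q[j])
        q += dt * dq
        alpha += dt * da
    return q
```

Solving the registration problem then amounts to finding A such that shoot(A, P) matches the target landmarks Q, for instance with a Newton-type nonlinear solver fed the Jacobian from (20)-(21).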
The knotpoints are taken as consecutive excerpts from the same set of 123 points as for the test sets above, and 20 time steps are used. The speed of convergence of the method improves by a factor of approximately 4 when a user-supplied Jacobian is provided. Notice that solving for the full 123-point set takes less than 4 seconds using this method.

5 Conclusions

The Galahad implementation was developed by the authors as an improvement to Marsland and Twining's original MATLAB method [1] for calculating the GIS, and showed significant improvement over the MATLAB implementation.
Fig. 4. Varying the number of time steps (left) and knotpoints (right) for the Hamiltonian method.

The Hamiltonian method shows an impressive improvement over both of these methods. We believe that the Galahad method shows disappointing performance due to the lack of a preconditioner suitable for the ill-conditioning of the interpolation matrix. The Hamiltonian method for computing the GIS dramatically outperforms the Lancelot B implementation over the test set of real data. It is clear that exact Jacobians should be supplied to the Hamiltonian implementation to give efficient performance. We see from the experiments carried out that, with exact Jacobians provided, the performance of the Hamiltonian method is superior to the performance of previous methods.

References

1. Marsland, S., Twining, C.: Measuring geodesic distances on the space of bounded diffeomorphisms. In: British Machine Vision Conference (BMVC) (2002)
2. Modersitzki, J.: Numerical Methods for Image Registration. Oxford University Press, New York (2004)
3. Camion, V., Younes, L.: Geodesic interpolating splines. In: Energy Minimization Methods in Computer Vision and Pattern Recognition. Lecture Notes in Computer Science, vol. 2134 (2001)
4. Marsland, S., Twining, C.J.: Constructing diffeomorphic representations for the groupwise analysis of nonrigid registrations of medical images. IEEE Trans. Med. Imaging 23 (2004)
5. Cheney, W., Light, W.: A Course in Approximation Theory. Brooks/Cole (1999)
6. Boggio, T.: Sulle funzioni di Green d'ordine m. Rend. Circolo Matematico di Palermo 20 (1905)
7. Gould, N.I.M., Orban, D., Toint, P.L.: GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. 29 (2003)
8. Arnold, V.I.: Mathematical Methods of Classical Mechanics, 2nd edn. Graduate Texts in Mathematics, vol. 60. Springer-Verlag, New York (1989)
9. Numerical Algorithms Group: NAG Library Manual
10. Schaback, R.: Error estimates and condition numbers for radial basis function interpolation. Advances in Computational Mathematics 3 (1995)
AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier
More informationNumerical Analysis of Electromagnetic Fields
Pei-bai Zhou Numerical Analysis of Electromagnetic Fields With 157 Figures Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest Contents Part 1 Universal Concepts
More informationLecture 5 : Projections
Lecture 5 : Projections EE227C. Lecturer: Professor Martin Wainwright. Scribe: Alvin Wan Up until now, we have seen convergence rates of unconstrained gradient descent. Now, we consider a constrained minimization
More informationActive Appearances. Statistical Appearance Models
Active Appearances The material following is based on T.F. Cootes, G.J. Edwards, and C.J. Taylor, Active Appearance Models, Proc. Fifth European Conf. Computer Vision, H. Burkhardt and B. Neumann, eds.,
More informationCLASS NOTES Models, Algorithms and Data: Introduction to computing 2018
CLASS NOTES Models, Algorithms and Data: Introduction to computing 2018 Petros Koumoutsakos, Jens Honore Walther (Last update: June 11, 2018) IMPORTANT DISCLAIMERS 1. REFERENCES: Much of the material (ideas,
More informationThe Euler-Lagrange Equation for Interpolating Sequence of Landmark Datasets
The Euler-Lagrange Equation for Interpolating Sequence of Landmark Datasets Mirza Faisal Beg 1, Michael Miller 1, Alain Trouvé 2, and Laurent Younes 3 1 Center for Imaging Science & Department of Biomedical
More informationCourse Summary Math 211
Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.
More informationMultivariate models of inter-subject anatomical variability
Multivariate models of inter-subject anatomical variability Wellcome Trust Centre for Neuroimaging, UCL Institute of Neurology, 12 Queen Square, London WC1N 3BG, UK. Prediction Binary Classification Curse
More informationComputational Geometric Uncertainty Propagation for Hamiltonian Systems on a Lie Group
Computational Geometric Uncertainty Propagation for Hamiltonian Systems on a Lie Group Melvin Leok Mathematics, University of California, San Diego Foundations of Dynamics Session, CDS@20 Workshop Caltech,
More informationPENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP p.1/39
PENNON A Generalized Augmented Lagrangian Method for Convex NLP and SDP Michal Kočvara Institute of Information Theory and Automation Academy of Sciences of the Czech Republic and Czech Technical University
More informationMultivariate Statistical Analysis of Deformation Momenta Relating Anatomical Shape to Neuropsychological Measures
Multivariate Statistical Analysis of Deformation Momenta Relating Anatomical Shape to Neuropsychological Measures Nikhil Singh, Tom Fletcher, Sam Preston, Linh Ha, J. Stephen Marron, Michael Wiener, and
More informationA Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems
A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationDetermination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study
International Journal of Mathematics And Its Applications Vol.2 No.4 (2014), pp.47-56. ISSN: 2347-1557(online) Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms:
More informationNumerical Methods I Non-Square and Sparse Linear Systems
Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant
More informationThe Darcy-Weisbach Jacobian and avoiding zero flow failures in the Global Gradient Algorithm for the water network equations
The Darcy-Weisbach Jacobian and avoiding zero flow failures in the Global Gradient Algorithm for the water network equations by Elhay, S. and A.R. Simpson World Environmental & Water Resources Congress
More informationMetamorphic Geodesic Regression
Metamorphic Geodesic Regression Yi Hong, Sarang Joshi 3, Mar Sanchez 4, Martin Styner, and Marc Niethammer,2 UNC-Chapel Hill/ 2 BRIC, 3 University of Utah, 4 Emory University Abstract. We propose a metamorphic
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationSketchy Notes on Lagrangian and Hamiltonian Mechanics
Sketchy Notes on Lagrangian and Hamiltonian Mechanics Robert Jones Generalized Coordinates Suppose we have some physical system, like a free particle, a pendulum suspended from another pendulum, or a field
More informationD03PEF NAG Fortran Library Routine Document
D03 Partial Differential Equations D03PEF NAG Fortran Library Routine Document Note. Before using this routine, please read the Users Note for your implementation to check the interpretation of bold italicised
More informationLinear & nonlinear classifiers
Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table
More informationTwo hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx.
Two hours To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER CONVEX OPTIMIZATION - SOLUTIONS xx xxxx 27 xx:xx xx.xx Answer THREE of the FOUR questions. If
More information2 Regularized Image Reconstruction for Compressive Imaging and Beyond
EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement
More information1.2 Derivation. d p f = d p f(x(p)) = x fd p x (= f x x p ). (1) Second, g x x p + g p = 0. d p f = f x g 1. The expression f x gx
PDE-constrained optimization and the adjoint method Andrew M. Bradley November 16, 21 PDE-constrained optimization and the adjoint method for solving these and related problems appear in a wide range of
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationLeast Squares Approximation
Chapter 6 Least Squares Approximation As we saw in Chapter 5 we can interpret radial basis function interpolation as a constrained optimization problem. We now take this point of view again, but start
More informationA Trust-region-based Sequential Quadratic Programming Algorithm
Downloaded from orbit.dtu.dk on: Oct 19, 2018 A Trust-region-based Sequential Quadratic Programming Algorithm Henriksen, Lars Christian; Poulsen, Niels Kjølstad Publication date: 2010 Document Version
More informationEngg. Math. II (Unit-IV) Numerical Analysis
Dr. Satish Shukla of 33 Engg. Math. II (Unit-IV) Numerical Analysis Syllabus. Interpolation and Curve Fitting: Introduction to Interpolation; Calculus of Finite Differences; Finite Difference and Divided
More informationA Regularized Interior-Point Method for Constrained Nonlinear Least Squares
A Regularized Interior-Point Method for Constrained Nonlinear Least Squares XII Brazilian Workshop on Continuous Optimization Abel Soares Siqueira Federal University of Paraná - Curitiba/PR - Brazil Dominique
More informationThe Kalman filter is arguably one of the most notable algorithms
LECTURE E NOTES «Kalman Filtering with Newton s Method JEFFREY HUMPHERYS and JEREMY WEST The Kalman filter is arguably one of the most notable algorithms of the 0th century [1]. In this article, we derive
More informationParallelizing large scale time domain electromagnetic inverse problem
Parallelizing large scale time domain electromagnetic inverse problems Eldad Haber with: D. Oldenburg & R. Shekhtman + Emory University, Atlanta, GA + The University of British Columbia, Vancouver, BC,
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationBSM510 Numerical Analysis
BSM510 Numerical Analysis Polynomial Interpolation Prof. Manar Mohaisen Department of EEC Engineering Review of Precedent Lecture Polynomial Regression Multiple Linear Regression Nonlinear Regression Lecture
More informationSection 6.6 Gaussian Quadrature
Section 6.6 Gaussian Quadrature Key Terms: Method of undetermined coefficients Nonlinear systems Gaussian quadrature Error Legendre polynomials Inner product Adapted from http://pathfinder.scar.utoronto.ca/~dyer/csca57/book_p/node44.html
More informationEE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)
EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationImplementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization
Proceedings of the 5th WSEAS Int. Conf. on System Science and Simulation in Engineering, Tenerife, Canary Islands, Spain, December 16-18, 2006 391 Implementation of an Interior Point Multidimensional Filter
More informationParallel-in-time integrators for Hamiltonian systems
Parallel-in-time integrators for Hamiltonian systems Claude Le Bris ENPC and INRIA Visiting Professor, The University of Chicago joint work with X. Dai (Paris 6 and Chinese Academy of Sciences), F. Legoll
More informationNATIONAL UNIVERSITY OF SINGAPORE PC5215 NUMERICAL RECIPES WITH APPLICATIONS. (Semester I: AY ) Time Allowed: 2 Hours
NATIONAL UNIVERSITY OF SINGAPORE PC5215 NUMERICAL RECIPES WITH APPLICATIONS (Semester I: AY 2014-15) Time Allowed: 2 Hours INSTRUCTIONS TO CANDIDATES 1. Please write your student number only. 2. This examination
More informationCSE 554 Lecture 7: Alignment
CSE 554 Lecture 7: Alignment Fall 2012 CSE554 Alignment Slide 1 Review Fairing (smoothing) Relocating vertices to achieve a smoother appearance Method: centroid averaging Simplification Reducing vertex
More informationDiffeomorphic Diffusion Registration of Lung CT Images
Diffeomorphic Diffusion Registration of Lung CT Images Alexander Schmidt-Richberg, Jan Ehrhardt, René Werner, and Heinz Handels Institute of Medical Informatics, University of Lübeck, Lübeck, Germany,
More information