Development of an algorithm for the problem of the least-squares method: Preliminary Numerical Experience


Sergey Yu. Kamensky (1), Vladimir F. Boykov (2), Zakhary N. Khutorovsky (3), Terry K. Alfriend (4)

(1) Vympel International Corporation, Moscow, Russia, Chief Designer
(2) Vympel International Corporation, Moscow, Russia, Lead Scientist
(3) Vympel International Corporation, Moscow, Russia, Section Manager
(4) TEES Distinguished Research Chair Professor, Texas A&M University, USA

Abstract

We consider methods of minimizing the quadratic functions arising in the least-squares problem (LSP). We demonstrate that the success of the commonly used variations of the Gauss-Newton methodology rests on considerations that are not related to the structure of the matrix of partial derivatives. At the same time, a singular value decomposition (SVD) of this matrix permits estimating, at each step, the dimensionality of the subspace in which the minimization can be successful. As an illustration we use orbit determination for a half-day, highly elliptical satellite of the Molniya type. The orbit is built from optical measurements. The initial guess is an orbit whose plane is turned 140 degrees from the actual position (i.e., almost the maximum possible deviation). This problem is considered a preliminary stage in the numerical study of the convergence region for orbits of the Molniya type. Comparison of the Gauss-Newton technique with variable steps against the SVD technique demonstrates the advantages of the latter. These results permit the creation of an algorithm whose convergence is all but guaranteed and does not depend on the initial guess. The additional effort needed for the SVD does not represent a serious obstacle, given available computational speeds of hundreds of teraflops.

Introduction

This section gives a brief review of the two main types of minimization methods for the least-squares problem.

A suite of classical methods for minimization of the LSP

Fig. 1 presents several techniques for minimization of a function by the least-squares method. A brief description of these techniques is given below.

1. Let us consider the main techniques for solving the Gauss-Newton equations [1]. We will use P to denote the symmetric matrix on the left-hand side of these equations, P x = g, where P = A^T A, A is the matrix of partial derivatives, g = A^T b, and b is the vector of measurement residuals. It can be shown by a simple calculation that this symmetric matrix can be represented as P = L L^T, where L is a lower triangular matrix. This expansion is called the Cholesky factorization. The system can then be easily solved by solving two triangular systems consecutively.
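To make this concrete, here is a minimal sketch (in Python/NumPy; the paper itself gives no code) of one Gauss-Newton step computed through the Cholesky factorization and the two consecutive triangular solves:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gauss_newton_step(A, b):
    """One Gauss-Newton step: solve (A^T A) x = A^T b via Cholesky.

    A : (m, n) matrix of partial derivatives
    b : (m,) residual vector
    """
    P = A.T @ A            # symmetric matrix P on the left-hand side
    g = A.T @ b            # right-hand side g
    # cho_factor computes P = L L^T; cho_solve then performs the two
    # consecutive triangular solves L u = g and L^T x = u.
    L = cho_factor(P)
    return cho_solve(L, g)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)
x = gauss_newton_step(A, b)
print(np.allclose(A.T @ A @ x, A.T @ b))   # True
```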

Fig. 1. A suite of the classical methods for minimization of the LSP.

Unfortunately, if the matrix P is ill-conditioned, rounding errors can lead to small negative numbers on the diagonal. This is why the modified Cholesky technique was proposed. In this method one builds not the original matrix P but a corrected one, P + E = L D L^T, in such a way that:

- all the elements of the diagonal matrix D are significantly positive;
- the absolute values of all the elements of the triangular matrix L are uniformly bounded from above.

To satisfy these conditions, small additions are made to the matrix during the factorization process when needed. As a result, a corrected matrix is obtained instead of the original one, differing from it by a small diagonal matrix E.
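The actual modified Cholesky algorithm is more involved; the following toy sketch only illustrates the core idea of accumulating a diagonal correction E during factorization whenever a pivot becomes too small (the tolerance `delta` is our assumption, not a value from the paper):

```python
import numpy as np

def modified_cholesky(P, delta=1e-8):
    """Toy modified Cholesky: P + E = L L^T with E diagonal, E >= 0.

    Whenever a pivot falls below `delta`, it is raised to `delta` and the
    change is recorded in E.  This is a much simplified stand-in for the
    modified factorization described above.
    """
    n = P.shape[0]
    L = np.zeros_like(P, dtype=float)
    E = np.zeros(n)
    for j in range(n):
        d = P[j, j] - L[j, :j] @ L[j, :j]
        if d < delta:                  # pivot too small or negative
            E[j] = delta - d           # diagonal correction
            d = delta
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (P[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L, np.diag(E)

P = np.array([[4.0, 2.0], [2.0, 1.0]])   # rank deficient: plain Cholesky fails
L, E = modified_cholesky(P)
print(np.allclose(L @ L.T, P + E))        # True
```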

However, if the matrix is very ill-conditioned, the nonlinearity of the function being minimized starts playing a noticeable role. Indeed, if the matrix P is diagonalized by an orthogonal transformation, P = Q Λ Q^T with Λ = diag(λ_1, ..., λ_n), then the solution is given as

x = Σ_i (q_i^T g / λ_i) q_i,

where q_i is the i-th column of Q. The components corresponding to the small eigenvalues λ_i will be large, and at such large distances from the initial point the behavior of the function will no longer correspond to the linear approximation employed.
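The amplification by small eigenvalues is easy to observe numerically; in this sketch the matrix and right-hand side are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal Q
lam = np.array([10.0, 5.0, 1.0, 1e-6])             # eigenvalues, one tiny
P = Q @ np.diag(lam) @ Q.T
g = rng.standard_normal(4)

# Step in the eigenbasis: x = sum_i (q_i^T g / lambda_i) q_i
coeffs = (Q.T @ g) / lam
x = Q @ coeffs
print(np.abs(coeffs).round(2))   # the last coefficient is ~10^6 times larger
print(np.allclose(P @ x, g))     # the step solves P x = g
```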

Several methods have been suggested for avoiding the region where the approximation of the minimized function is poor.

2. Minimization with respect to only some of the variables. It is assumed that the user knows that the function being minimized contains variables which affect its value very significantly (or which are known only very roughly) and less important variables (or those known more precisely). The minimization then proceeds in two stages. First, minimization with respect to the rough variables is performed; then, in a second stage, which may be absent in simple cases, the exact variables are taken care of. This idea underlies a great number of algorithms for specific cases of orbit refinement using only some of the variables. Since the efficiency of these algorithms depends on knowing in advance which variables belong to which group, they are usually used for short-interval orbit refinement, where one can use knowledge of the accuracy of the initial measurements.

3. Normalization of the function being minimized. It is assumed that the user knows that the function has large derivatives with respect to some variables and small ones with respect to the rest. An auxiliary function is then produced, in which the weight of the variables with larger derivatives is reduced and the others are weighted more heavily. The justification is that one can ignore the fast variables in the beginning, since they can always be treated easily later, for example by steepest-descent minimization.

4. The ravine ("ditch") method, proposed by I. M. Gelfand, a corresponding member of the Russian Academy of Sciences. It is assumed that there are directions of fast decrease of the function which can be computed with high accuracy, and also ravine directions, along which the function decays very slowly and for which the first derivatives do not give sufficient accuracy. The method works as follows. Initial points are chosen, and a fast descent is carried out to the bottom of the ravine. These bottom points are then used to approximate the partial derivatives with respect to the ravine variables, and a descent direction along the bottom is chosen. The minimum is sought in this direction, and the process is repeated. This method is ideologically similar to the two above, but it does not require any a priori knowledge for dividing the variables into two groups.

5. The dogleg technique, widely used in English-speaking countries (the name is a golf term). Here the steepest descent is used for the fast variables, and the slow ones are treated by making a step with the Newton technique.

6. Another widely used method is the trust-region method [2]. The initial assumption is that the user can define the size of the region in which the approximation of the function is accurate enough. The least-squares function is then minimized subject to the condition that the step remain within the trust region, that is, the following problem is solved:

min_x ||A x - b||^2   subject to   ||x|| <= Δ.

Introducing a Lagrange multiplier μ, the step satisfies (P + μI) x = g. It can be seen that this technique is somewhat similar to the modified Cholesky method, with the matrix P + μI, but with a different initial motivation. The Lagrange multiplier is found from the equation ||x(μ)|| = Δ, which can also be written as

Σ_i (q_i^T g)^2 / (λ_i + μ)^2 = Δ^2,

where q_i and λ_i are the eigenvectors and eigenvalues of the matrix P. It can be seen from this formula that, as the Lagrange multiplier is increased, the eigenvectors corresponding to the small eigenvalues are suppressed.
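The suppression can be checked numerically: since ||x(μ)||^2 = Σ_i (q_i^T g)^2 / (λ_i + μ)^2, increasing μ damps the small-eigenvalue directions first. A sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # eigenvectors q_i of P
lam = np.array([10.0, 5.0, 1.0, 1e-4])             # eigenvalues of P
g = rng.standard_normal(4)

def step_norm(mu):
    """||x(mu)|| for x(mu) = sum_i (q_i^T g) / (lambda_i + mu) q_i."""
    return np.linalg.norm((Q.T @ g) / (lam + mu))

for mu in (0.0, 1e-3, 1e-1, 1.0):
    print(f"mu = {mu:6g}   ||x(mu)|| = {step_norm(mu):12.3f}")
# The norm drops sharply once mu exceeds the smallest eigenvalue:
# the direction with lambda = 1e-4 is suppressed first.
```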

It can be said that in the trust-region technique we also have a separation of the eigenvectors into two groups, corresponding to the big and small eigenvalues respectively. The borderline between the two groups is not sharp; it is determined by the form of the constraint, which is chosen for the sake of simplicity of the mathematical formulation.

To summarize this brief review of the main classical techniques for least-squares minimization, we can point out the following.

- All these methods use the idea of splitting all possible search directions into two groups: the first contains directions in which the search can be successful, the second directions in which it cannot. To realize such a division, some prior information is needed. It can take different forms, from a direct instruction to the very sophisticated algorithm for determining the search direction in the trust-region technique.

- All these methods have similar advantages and disadvantages. The advantages are relative simplicity and, as a result, high computational speed. This simplicity is achieved by using a priori information which is not obtained from the least-squares problem itself and has to be guessed by the user. If that information is incorrect, the method performs very poorly: even in the trust-region case, the most developed of these algorithms, a simple rotation of the elliptical region by 90 degrees will leave the algorithm stuck for a long time. The use of this prior information, which does not follow from the structure of the matrix, is the main disadvantage of the classical methods.

An alternative technique, which does not employ any a priori information but rather determines the direction of minimization through an analysis of the structure of the matrix A, is the method based on the singular value decomposition (SVD) of the matrix A.

Singular value decomposition and LS orbit determination

For the SVD write

A = U S V^T,   (1)

where U consists of the n orthonormal eigenvectors corresponding to the n largest eigenvalues of A A^T, and V is the matrix of orthonormal eigenvectors of the matrix A^T A. The diagonal elements s_1 >= s_2 >= ... >= s_n of S are the square roots of the non-negative eigenvalues of A^T A; they are called the singular values.

Now introduce the vectors

z = V^T x,   (2)
c = U^T b,   (3)

so that the least-squares function reduces to

||A x - b||^2 = ||S z - c||^2 + const.   (4)

Since S is a diagonal matrix, the influence of each component of z can be observed immediately: introducing the component z_i = c_i / s_i into the solution reduces the square of the norm of the residual by c_i^2.

Probe solutions. Now let the singular values be in descending order and consider the probe solution vectors

x_k = Σ_{i=1}^{k} (c_i / s_i) v_i,   k = 1, ..., n,   (5)

where v_j is the j-th column of V. The vector x_n is the normal pseudo-solution of the least-squares problem, and x_k is the pseudo-solution obtained if we disregard the singular values s_{k+1}, ..., s_n and consider them equal to zero. The corresponding square of the norm of the residual is

||A x_k - b||^2 = ||b||^2 - Σ_{i=1}^{k} c_i^2.   (6)

Now assume that A is poorly conditioned, that is, some of the singular values are widely separated. The components c_i / s_i corresponding to the small singular values may then be too large. Thus, one needs to find an index k such that the norm of the probe vector and the norm of the residual for this probe solution are both small enough.
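In code, the probe solutions and the residual decreases come directly out of a single SVD; a minimal sketch following the notation of eqs. (1)-(6):

```python
import numpy as np

def probe_solutions(A, b):
    """Probe vectors x_k of eq. (5) and the squared residuals of eq. (6)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U S V^T, eq. (1)
    c = U.T @ b                                        # eq. (3)
    xs, res = [], []
    x = np.zeros(A.shape[1])
    for k in range(len(s)):
        x = x + (c[k] / s[k]) * Vt[k]                  # add component k, eq. (5)
        xs.append(x.copy())
        res.append(b @ b - np.sum(c[: k + 1] ** 2))    # eq. (6)
    return xs, res

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
xs, res = probe_solutions(A, b)
print([round(v, 3) for v in res])                           # decreasing
print(np.allclose(res[-1], np.sum((A @ xs[-1] - b) ** 2)))  # True
```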

With the singular values in descending order, the procedure is as follows.

1. Develop the matrix of trial vectors

Y = [y_1, ..., y_n],   y_k = Σ_{i=1}^{k} (c_i / s_i) v_i,   (7)

where v_j is the j-th column of V.

2. For each trial vector compute the expected decrease of the least-squares error function using

ΔF_k = Σ_{i=1}^{k} c_i^2.   (8)

3. Check the acceptability of the trial vectors. To accomplish this, check each of their components and determine whether the inequality

|y_{k,j}| <= δ_j,   j = 1, ..., n,   (9)

is satisfied, that is, whether the change in each of the elements is less than some prescribed amount. If inequality (9) is not satisfied for some j, calculate the required step-reduction coefficient

γ_k = min_j ( δ_j / |y_{k,j}| ).   (10)

After checking all the components, set

y_k := γ_k y_k.   (11)

If the inequality is satisfied for all components, go to the next step; if not, the trial vector is first normalized by multiplying it by γ_k.

4. Now check the relative decrease as we go to the next trial vector: if the relative-decrease condition (12) is satisfied, the trial vector is taken as the next iterate.

5. If the inequality in Step 4 is not satisfied, the previous trial vector is used.

6. After computing the least-squares function with the chosen vector, determine whether the SVD method is converging sufficiently, i.e. whether the convergence condition (13) is satisfied. If it is, go to the next step. If it is not, the modified least-squares method is used, because the Hessian is degenerate and the residual terms need to be considered.
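A compact rendering of steps 1-5 might look as follows; the bounds `delta` correspond to the prescribed amounts of inequality (9), while the factor `eps` is our assumed form of the relative-decrease condition (12), whose exact expression is not reproduced above:

```python
import numpy as np

def svd_trial_step(A, b, delta, eps=0.1):
    """Choose a trial vector following steps 1-5 of the procedure above.

    delta : per-component bounds, the prescribed amounts of inequality (9)
    eps   : assumed threshold for the relative-decrease condition (12)
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ b
    best, best_gain = None, 0.0
    y = np.zeros(A.shape[1])
    for k in range(len(s)):
        y = y + (c[k] / s[k]) * Vt[k]          # step 1: trial vector y_k
        gain = np.sum(c[: k + 1] ** 2)         # step 2: expected decrease (8)
        yk = y.copy()
        worst = np.max(np.abs(yk) / delta)     # step 3: check inequality (9)
        if worst > 1.0:
            yk = yk / worst                    # shrink, eqs. (10)-(11)
        if best is None or gain >= (1.0 + eps) * best_gain:
            best, best_gain = yk, gain         # step 4: accept this vector
        else:
            break                              # step 5: keep the previous one
    return best

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 5))
A[:, 4] = A[:, 3] + 1e-9 * rng.standard_normal(12)   # nearly dependent column
b = rng.standard_normal(12)
print(svd_trial_step(A, b, delta=np.full(5, 10.0)))
```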

Examination of the convergence region of an algorithm as a function of the error in the initial guess

Studying the convergence region of various algorithms as a function of the accuracy of the initial guess is very interesting from the practical standpoint, but it is also very difficult, because of the high dimensionality of the space of initial-guess parameters and because of additional parameters (the tracking interval, the accuracy of the measurements, etc.). This is why such studies are usually carried out by testing a set of representative problems with typical difficulties, such as the presence of ravines, bad scaling of the variables, and so on. In order to have an exact solution for benchmarking, the test problems are usually polynomial and have a small number of variables. Real problems are much more difficult, which is why conclusions drawn from these test problems are not always confirmed. We prefer a different path: to choose a rather complex problem, determining an orbit from a set of measurements, to study its convergence region thoroughly, and then to try to understand what defines this region.

It is known that the most difficult case in determining orbits with the least-squares technique is the one where the measurements are optical. In this case, instead of all 6 components of the phase vector, only two angular components are known, and they are related to the phase vector in a very nonlinear way. Additional difficulties arise in calculating the residual: the angular residual is determined only up to plus or minus one turn. Therefore, even when the coordinate residual is thousands or tens of thousands of kilometers, the angular residual can be small, and it is never greater than π.
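In practice this amounts to reducing the angular difference to the interval (-π, π]; a one-line wrap, shown here as a sketch:

```python
import numpy as np

def angular_residual(measured, predicted):
    """Angle difference wrapped into (-pi, pi]."""
    d = np.mod(measured - predicted + np.pi, 2.0 * np.pi) - np.pi
    return np.where(d == -np.pi, np.pi, d)

print(angular_residual(0.1, 2.0 * np.pi - 0.1))   # ~0.2, not ~ -6.08
```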

As far as the tracking time is concerned, two classes can be named: one night, and long periods of following the object. If the object is followed for one night and the number of measurement points is small, the main problem is degeneracy, and thus the accuracy of determining an orbit close to a degenerate one [3]; nonlinearity is not as important in this case. For example, it was shown there that the SVD technique gives a significant advantage in orbital prediction accuracy when the number of measurements is small; several typical examples illustrating the speed of convergence are also given there. It is especially difficult to obtain good convergence when one has to deal with a series of measurements made over a long period of time but a good initial guess is lacking.

In the case of the Russian space control center, there are two classes of orbits which are followed over long periods of time with optical measurements: quasi-stationary orbits with a period close to 24 hours, and highly elliptical half-day orbits of the Molniya ("Lightning") type. The center has an algorithm for finding an initial guess from three pairs of angular measurements. It is based on Battin's effective technique for orbit determination from two positions. However, if large or anomalous errors are present in the measurements, it can be difficult to find a good trio of measurements. If there is a gap in the observations, orbits are determined piecewise and the pieces are then pasted together.

Finally, the accuracy of the technique depends on how sensitive it is to the initial guess. This is why work was carried out on orbit determination with long time intervals and poor initial guesses. For the quasi-stationary orbits there is a natural initial guess: the semi-major axis corresponds to the 24-hour orbit, and all the other parameters are set to zero. The time interval is determined from the condition that the residuals be single-valued: depending on the error in the predicted number of revolutions, there are many local minima in the semi-major axis. This is why, for the given error of the initial guess in the semi-major axis, the time interval was chosen so that, with this error, the ambiguity problem is absent. Experiment has shown that convergence over a 10-day interval can be achieved with the natural initial guess.
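For reference, the semi-major axis of that natural guess follows from Kepler's third law, a = (μ (T/2π)^2)^(1/3); a quick check with Earth's gravitational parameter (strictly, T should be the sidereal day):

```python
import numpy as np

MU_EARTH = 398600.4418            # km^3/s^2, Earth's gravitational parameter
T = 86164.1                       # s, one sidereal day

a = (MU_EARTH * (T / (2.0 * np.pi)) ** 2) ** (1.0 / 3.0)
print(f"a = {a:.1f} km")          # ~42164 km
```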

Finally, the most difficult case is that of the half-day highly elliptical satellites. Results of experiments with this class of orbits are considered in the next section, which is devoted to the more difficult task of determining the orbit of a highly elliptical half-day object from optical measurements over an 8-week interval. One can hope that it is in such difficult problems that the SVD technique will prove superior to the classical methods.

Study using a highly elliptical half-day object of the Molniya ("Lightning") type

We have chosen an example in which dense tracking of the object was carried out for 8 weeks by one station. Such long tracking is an exception in our conditions; usually, measurements are made on short intervals with large gaps between them. Such tracking histories are not desirable for studying convergence, since they make it impossible to observe a continuous picture of the changing residuals: usually everything is fine until a gap is reached, everything is bad after the gap, and it is not clear why. Let us consider the results of the calculations, with brief comments, as an example. Table 1 presents the initial values for the example, in the Lagrangian elements (first line) and the Kepler elements.

Table 1. Initial orbit (columns: u (deg), Ω (deg), ω (deg), a (km), e, i (deg)).

Table 2 shows an example of convergence in the case when a good initial guess is available. The first column gives the iteration number; the second gives the value of the function F. The other seven columns contain the elements of the orbit obtained at that iteration, in two versions: the top line has the Lagrangian elements, the bottom one the Kepler elements. The two title lines contain the usual letter notations for the elements. The following tables have a similar structure.

Table 2. Convergence with a good initial guess (columns: Iter, F, λ, L, p, q, h, k / u, Ω, ω, a (km), e, i).

As can be seen from the table, only the elements h and k have small changes, but these changes cause changes of 6 orders of magnitude in the function. Table 3 contains the results of solving a more difficult example: here the elements of the orientation of the orbital plane in the initial guess are turned by 140 degrees! As can be seen from the table, the minimization proceeded in a strange way, with practically all elements changing. This example demonstrates two peculiarities of the highly elliptical object which are not present for the stationary ones.

Table 3. Convergence with the orbital plane of the initial guess turned by 140 degrees (columns: Iter, F, λ, L, p, q, h, k / u, Ω, ω, a (km), e, i).

At the first iteration, the calculated step led to an unacceptable point: an attempt to compute the (Kepler) orbital elements there would lead to a crash. This is why special restrictions were introduced into the algorithm for the parameters determined at a step: if the parameter values turn out to be unacceptable, the step size is divided by 5. A similar situation arose with the perigee height at the second iteration: the object went "underground". Once again the guard worked, and the step was divided by two. All the following steps proceeded without such scaling, and the convergence became quite fast at the last iterations.
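The guard described above amounts to a simple loop that shrinks the step until the resulting elements are admissible; a sketch under the assumption of a hypothetical `elements_admissible` test (the factor of 5 is from the text):

```python
import numpy as np

def guarded_step(x, dx, elements_admissible, shrink=5.0, max_tries=10):
    """Shrink the step dx until x + dx gives admissible orbital elements.

    elements_admissible : callable returning False when, e.g., the
        eccentricity leaves [0, 1) or the perigee dips below the Earth's
        surface (the object "goes underground").
    """
    for _ in range(max_tries):
        trial = x + dx
        if elements_admissible(trial):
            return trial
        dx = dx / shrink          # the guard: divide the step size
    raise RuntimeError("no admissible step found")

# Toy usage: the second component plays the role of an eccentricity
x = np.array([0.0, 0.5])
dx = np.array([1.0, 3.0])
print(guarded_step(x, dx, lambda v: 0.0 <= v[1] < 1.0))
```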

Let us now analyze the minimization process with the SVD technique. Table 4 gives the results of the first-iteration calculations for the test vectors y_k using equations (1)-(3); the symbol x is used to denote the Gauss-Newton step vector obtained from equation (7). Considering this vector, we can see that the value of the parameter p is such that the step does not satisfy the acceptability condition (9). This is why the first iteration step in the Gauss-Newton case is divided by 5.

Let us now consider the process of solving the same problem with the SVD method. We calculate a sequence of test vectors using equation (7) and the data in Table 4; the resulting first-iteration vectors are listed in Table 5. It is easy to see that the test vectors of number 4 and above do not satisfy the natural conditions for the elements h, k. Therefore, let us consider the test vector of dimensionality 3. Results of the calculations with this restriction are given in Table 6. Comparison of the first three rows of this table with the corresponding rows of Table 3 demonstrates a faster convergence.

Table 4. First-iteration calculations for the test vectors (columns: x, λ, L, p, q, h, k).

Table 5. First-iteration trial vectors (columns: λ, L, p, q, h, k).

Table 6. Convergence of the SVD method with the trial vector restricted to dimensionality 3 (columns: Iter, F, λ, L, p, q, h, k).

Conclusion

This work is a continuation of the one described in [3]. Together, these two articles consider the main types of orbits and demonstrate the possibility of using an SVD-based technique for the minimization. The general conclusion from this research can be formulated as follows: the more complex the minimization problem, the greater the advantage in robustness provided by the SVD technique. There are real application cases in which convergence over a certain subset of dimensions cannot be obtained with the conventional methods. In such cases it would be very useful to have an empirical estimate of the guaranteed convergence region of the minimization algorithm. This is why it makes sense to continue studying algorithms based on the SVD method for the difficult case of highly elliptical orbits, and to attempt to obtain at least a rough estimate of the convergence region in parameter space.

References

1. P.E. Gill, W. Murray, M.H. Wright. Practical Optimization. Academic Press, 1981.
2. A.R. Conn, N.I.M. Gould, Ph.L. Toint. Trust-Region Methods. No. 1 in the MPS-SIAM Series on Optimization. SIAM, Philadelphia, 2000.
3. V.F. Boykov, Z.N. Khutorovskiy, K.T. Alfriend. Singular value decomposition and least squares orbit determination. Proceedings of the 7th US/Russian Space Surveillance Workshop, 2007.
