A Multi-Parameter Method for Nonlinear Least-Squares Approximation

R. Schaback

Abstract. For discrete nonlinear least-squares approximation problems $\sum_{j=1}^{m} f_j^2(x) \to \min$ for $m$ smooth functions $f_j : \mathbb{R}^n \to \mathbb{R}$ a numerical method is proposed which first minimizes each $f_j$ separately and then applies a penalty strategy to gradually force the different minimizers to coalesce. Though the auxiliary nonlinear least-squares method works on $\mathbb{R}^{nm}$, it is shown that the additional computational requirements consist of $m-1$ gradient evaluations plus $O(m \cdot n)$ operations. The application to discrete rational approximation is discussed and numerical examples are given.

§1. Introduction

Assume a nonlinear unconstrained minimization problem
$$f(x) \to \min_{x \in \mathbb{R}^n} \qquad (1)$$
for a smooth function $f : \mathbb{R}^n \to \mathbb{R}$ to be given, and assume the problem (1) to have several local minimizers. We want to construct an algorithm that enhances the probability to find global minimizers. Led by the application to nonlinear least-squares problems (which will be described in Section 5) we assume $f$ to be additively decomposable in the form
$$f(x) = \sum_{j=1}^{m} f_j(x), \qquad f_j : \mathbb{R}^n \to \mathbb{R} \text{ smooth}, \qquad (2)$$
where each $f_j$ is easier to minimize than $f$ itself.

Mathematical Methods in CAGD and Image Processing, Tom Lyche and L. L. Schumaker (eds.), pp. 1-20. Copyright 1992 by Academic Press, Boston. All rights of reproduction in any form reserved.

The basic idea of the multi-parameter algorithm to be constructed will be borrowed from multiple shooting methods in ordinary differential equations: each $f_j$ gets a separate parameter vector $y_j \in \mathbb{R}^n$ to be optimized, and then the separate minimizing $y_j$ will be gradually forced to coalesce, joining the separate minimizers gradually into a global solution. This strategy is not confined to additive decompositions.

In more detail, we proceed along the following steps:

1. Starting from a given vector $x^{(0)} \in \mathbb{R}^n$, set $y_j^{(0)} := x^{(0)}$, $1 \le j \le m$, and calculate a (possibly only local) minimizer $y_j^{(1)} \in \mathbb{R}^n$ of $f_j$ for $1 \le j \le m$.

2. Choosing a nonnegative penalty function $P$ with control parameters $\Lambda$ to be described later, perform an iterative minimization of the function
$$\sum_{j=1}^{m} f_j(y_j + y) + P(y_1, \dots, y_m; \Lambda), \qquad y, y_j \in \mathbb{R}^n, \ 1 \le j \le m, \qquad (3)$$
on $(y_1, \dots, y_m, y) \in \mathbb{R}^{n(m+1)}$, starting in $(y_1^{(1)}, \dots, y_m^{(1)}, 0)$ and producing a minimizer $(y_1^{(2)}, \dots, y_m^{(2)}, y^{(2)})$. Here we assume $P(y_1, \dots, y_m; \Lambda) = 0$ iff $y_j = 0$ for all $j$, and the penalty parameters $\Lambda$ should be suitably controlled during the optimization.

3. Starting from $x^{(2)} := y^{(2)} + \frac{1}{m} \sum_{j=1}^{m} y_j^{(2)}$, minimize (1) on $\mathbb{R}^n$ by an arbitrary unconstrained optimization procedure to get a minimizer $x^{(3)} \in \mathbb{R}^n$.

Note that the algorithm consists of a single execution of steps 1-3. However, steps 2 and 3 contain an inner iteration. Depending on the penalty strategy in step 2, we will often have $y_j^{(2)} = 0$ for all $j$, and $x^{(2)} = y^{(2)}$ will already be a local minimizer of $f$, such that step 3 will often not be necessary at all. Nevertheless, the overall algorithm works from $\mathbb{R}^n$ to $\mathbb{R}^n$ as $x^{(0)} \mapsto x^{(3)}$, and the detour via $\mathbb{R}^{n(m+1)}$ in step 2 can be concealed from the user.

We could do step 2 without the additional vector $y$ and minimize
$$\sum_{j=1}^{m} f_j(y_j) + P(y_1 - \bar{y}, \dots, y_m - \bar{y}; \Lambda) \quad \text{for } (y_1, \dots, y_m) \in \mathbb{R}^{mn}, \quad \text{where } \bar{y} := \frac{1}{m} \sum_{j=1}^{m} y_j.$$
However, the form used in (3) is more easily handled.

The main problems to be addressed in the sequel are:
1. Can the computational effort be reduced to a reasonable amount?
2. How should $\Lambda$ in step 2 be controlled?
3. Are there some classes of problems where the algorithm is superior over conventional methods?

For simplicity, we shall confine the major part of the rest of the paper to nonlinear least-squares calculations and postpone the general nonlinear optimization problem to later investigations.
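The three steps can be condensed into a short sketch. The following Python fragment is purely illustrative: it instantiates the least-squares case $f_j \mapsto f_j^2$ treated below, replaces the tailored Gauss-Newton iteration of §3 by a generic SciPy minimizer, keeps the penalty parameter fixed instead of controlling it as in §4, and all names (`multi_parameter_method`, `f_list`, `lam`) are ad hoc rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def multi_parameter_method(f_list, x0, lam=1e-3):
    """Illustrative sketch of steps 1-3 for the least-squares case
    f(x) = sum_j f_j(x)^2, using penalty (4) with a fixed parameter lam."""
    m, n = len(f_list), len(x0)

    # Step 1: minimize each f_j^2 separately, starting from x^(0).
    y1 = [minimize(lambda x, fj=fj: fj(x) ** 2, x0).x for fj in f_list]

    # Step 2: minimize  sum_j f_j(y_j + y)^2 + (lam/2) sum_j ||y_j||^2
    # over (y_1, ..., y_m, y) in R^{n(m+1)}, starting at (y_1^(1), ..., y_m^(1), 0).
    def penalized(z):
        ys, y = z[:m * n].reshape(m, n), z[m * n:]
        fit = sum(fj(yj + y) ** 2 for fj, yj in zip(f_list, ys))
        return fit + 0.5 * lam * sum(np.dot(yj, yj) for yj in ys)

    z0 = np.concatenate([np.concatenate(y1), np.zeros(n)])
    z2 = minimize(penalized, z0).x
    ys2, y2 = z2[:m * n].reshape(m, n), z2[m * n:]

    # Step 3: polish x^(2) = y^(2) + (1/m) sum_j y_j^(2) by an ordinary minimization of f.
    x2 = y2 + ys2.mean(axis=0)
    return minimize(lambda x: sum(fj(x) ** 2 for fj in f_list), x2).x
```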

§2. Penalty functions

Depending on how much control is required, one can consider different penalty functions of the general form $P(y_1, \dots, y_m; \Lambda)$, e.g.
$$\frac{\lambda}{2} \sum_{j=1}^{m} \|y_j\|_2^2, \quad \lambda \in \mathbb{R}, \qquad (4)$$
or
$$\frac{1}{2} \sum_{j=1}^{m} \lambda_j \|y_j\|_2^2, \quad \lambda_j \in \mathbb{R}, \ 1 \le j \le m, \qquad (5)$$
or
$$\sum_{j=1}^{m} \|\Lambda_j y_j\|_2^2 \qquad (6)$$
with $n \times n$ nonsingular diagonal matrices
$$\Lambda_j = \mathrm{diag}(\lambda_{j1}, \dots, \lambda_{jn}), \qquad 1 \le j \le m. \qquad (7)$$
We shall see later that good control of the parameters of the penalty function is a major problem which forces us to restrict ourselves to the simple case (4) when specific control strategies are treated later. However, the computational complexity investigations of the next section allow rather general controls of type (5) or (6).

§3. Computational Complexity

We consider a Gauss-Newton minimization step for the general composite objective function
$$\sum_{j=1}^{m} f_j^2(y_j + y) + \sum_{j=1}^{m} \|\Lambda_j y_j\|_2^2, \qquad y, y_j \in \mathbb{R}^n, \ 1 \le j \le m, \qquad (8)$$
with (6) and (7). This arises within step 2 of the algorithm and involves $n(m+1)$ variables instead of just $n$ variables for the classical approach. Thus we compare it with the corresponding step for minimization of the least-squares objective function
$$f(x) = \sum_{j=1}^{m} f_j^2(x), \qquad x \in \mathbb{R}^n, \qquad (9)$$
to find that there is not too much additional work to be done:

Theorem 1. A linear multi-parameter least-squares problem arising from linearization of the objective function (8) on the space $\mathbb{R}^{n(m+1)}$ can be solved by adding $O(m \cdot n)$ operations to the solution of the corresponding problem arising from linearization of the least-squares function (9) on $\mathbb{R}^n$, which needs $O(m \cdot n^2)$ operations. However, the number of gradient and function evaluations increases from 1 to $m$.

Proof: We can write (8) as $\|H_\Lambda(y_1, \dots, y_m, y)\|_2^2$ with
$$H_\Lambda(y_1, \dots, y_m, y) := \begin{pmatrix} \Lambda_1 y_1 \\ \vdots \\ \Lambda_m y_m \\ f_1(y_1 + y) \\ \vdots \\ f_m(y_m + y) \end{pmatrix}, \qquad H_\Lambda : \mathbb{R}^{mn+n} \to \mathbb{R}^{mn+m}.$$
Writing gradients as row vectors, we calculate the Jacobian of $H_\Lambda$ as
$$J_\Lambda = \begin{pmatrix}
\Lambda_1 & & 0 & 0 \\
 & \ddots & & \vdots \\
0 & & \Lambda_m & 0 \\
\nabla f_1 & & 0 & \nabla f_1 \\
 & \ddots & & \vdots \\
0 & & \nabla f_m & \nabla f_m
\end{pmatrix},$$
where the argument of $\nabla f_j$ is $y_j + y$ throughout. This requires $m$ evaluations of gradients. We linearize $H_\Lambda$ at $y_1, \dots, y_m, y$ and consider the problem
$$\left\| J_\Lambda \begin{pmatrix} z_1 \\ \vdots \\ z_m \\ z \end{pmatrix} + H_\Lambda(y_1, \dots, y_m, y) \right\|_2^2 \to \min \qquad \text{for } z, z_j \in \mathbb{R}^n, \ 1 \le j \le m.$$
This linear least-squares problem splits into the sum
$$\sum_{j=1}^{m} \left( \|\Lambda_j y_j + \Lambda_j z_j\|_2^2 + \left( f_j + \nabla f_j (z_j + z) \right)^2 \right) \qquad (10)$$

and we first minimize each single term with respect to $z_j$ for arbitrary fixed $z$. With a positive definite $n \times n$ diagonal matrix $D$, vectors $u, v \in \mathbb{R}^n$, and a scalar $\alpha$ we want to find the minimizer $\tilde{x} \in \mathbb{R}^n$ of an expression of the form
$$\|D x + v\|_2^2 + (\alpha + u^T x)^2 \to \min_x.$$
A straightforward calculation yields
$$\tilde{x} = -(D^2 + u u^T)^{-1} (D v + \alpha u) = -D^{-1} v - \beta D^{-2} u, \qquad \beta := \frac{\alpha - u^T D^{-1} v}{1 + \|D^{-1} u\|_2^2},$$
with the minimal value
$$\|D \tilde{x} + v\|_2^2 + (\alpha + u^T \tilde{x})^2 = \frac{(\alpha - u^T D^{-1} v)^2}{1 + \|D^{-1} u\|_2^2}.$$
Thus the minimizer of (10) for fixed $z$ satisfies
$$z_j = -y_j - \frac{f_j - \nabla f_j (y_j - z)}{1 + \|\Lambda_j^{-1} (\nabla f_j)^T\|_2^2} \, \Lambda_j^{-2} (\nabla f_j)^T \qquad (11)$$
as a function of $z$, and we insert (11) into (10) to get another linear least-squares problem for the vector $z \in \mathbb{R}^n$ as
$$\sum_{j=1}^{m} \frac{\left( f_j - \nabla f_j (y_j - z) \right)^2}{1 + \|\Lambda_j^{-1} (\nabla f_j)^T\|_2^2} \to \min_z, \qquad (12)$$
which can be considered as a variation of the usual $m$ by $n$ least-squares calculation required after linearization of (9). The additional work needed for (12) arises in forming the $m$ by $n$ coefficient matrix and the $m$ right-hand-side entries, and is of order $O(m \cdot n)$. After solving (12) by orthogonalization or singular-value decomposition, the rest will only consist of substitution of $z$ into (11), which again will introduce $O(m \cdot n)$ operations.

For the penalty function (4) and a Levenberg-Marquardt strategy based on normal equations a similar result was obtained by K. Nottbohm in [4].
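In NumPy, the elimination (11) together with the reduced problem (12) amounts to a few array operations per linearization. The fragment below is an illustrative sketch of a single such step, with no damping, regularization, or step-size control, and with ad-hoc function and variable names.

```python
import numpy as np

def reduced_gauss_newton_step(f_vals, grads, ys, lam_mats):
    """One linearized step of (8) via the elimination (11)/(12); illustrative only.

    f_vals   : length-m array with f_j(y_j + y) at the current iterate
    grads    : (m, n) array whose rows are the gradients of f_j at y_j + y
    ys       : (m, n) array with the current y_j
    lam_mats : (m, n) array, row j holding the diagonal of Lambda_j
    Returns the increments (z_1, ..., z_m) and z."""
    # Weights 1 / (1 + ||Lambda_j^{-1} (grad f_j)^T||^2) appearing in (11) and (12).
    u = grads / lam_mats
    w = 1.0 / (1.0 + np.sum(u * u, axis=1))

    # Reduced problem (12) as a weighted linear least-squares problem A z ~ b.
    A = np.sqrt(w)[:, None] * grads
    b = np.sqrt(w) * (np.einsum('jk,jk->j', grads, ys) - f_vals)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Back-substitution (11) for the individual increments z_j.
    beta = w * (f_vals - np.einsum('jk,jk->j', grads, ys - z))
    zs = -ys - beta[:, None] * u / lam_mats      # = -y_j - beta_j Lambda_j^{-2} (grad f_j)^T
    return zs, z
```

Forming the weights and performing the back-substitution costs $O(m \cdot n)$ operations, while the reduced solve is the usual least-squares step of the classical approach, in line with Theorem 1.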

§4. Control of penalty parameters

Discrete least-squares approximation of a function $\varphi : T \to \mathbb{R}$ on a finite set $T = \{t_1, \dots, t_m\}$ of $m$ distinct points by a parametrized family of functions $\Phi(x, \cdot) : T \to \mathbb{R}$, $x \in \mathbb{R}^n$, takes the form (9) with
$$f_j(x) = \Phi(x, t_j) - \varphi(t_j), \qquad 1 \le j \le m. \qquad (13)$$
Section 5 will treat rational approximation as an example for this approach. In most applications the sets
$$X_j := \{x \in \mathbb{R}^n \mid \Phi(x, t_j) = \varphi(t_j)\}, \qquad 1 \le j \le m,$$
are non-empty and easy to reach by minimization of $f_j^2(x)$ on $\mathbb{R}^n$. Consequently, the direct separate minimization of each $f_j^2$ in step 1 will normally produce parameters $y_j^{(1)} \in X_j$. In most cases the interpolation equation
$$f_j(y_j^{(1)}) = \Phi(y_j^{(1)}, t_j) - \varphi(t_j) = 0 \qquad (14)$$
still has $n-1$ degrees of freedom available for variation of $y_j^{(1)}$. This freedom should be used to find parameters $y_j^{(1)}$ which are "as equal as possible" without violating the interpolation condition (14). Thus it is advisable to begin step 2 of the algorithm by keeping all penalty parameters at very small positive and fixed values and to iterate on $H_\Lambda$ within step 2 until no reasonable progress is possible.

The overall control strategy can be illustrated by a diagram like Figure 1, where the values $f = \sum_{j=1}^{m} f_j^2(y_j + y)$ of iterates are plotted against $g$, where $g$ is the value of the penalty function (4) for $\lambda = 1$.

[Figure 1. The f-g diagram]

The global minimizer of $f = \sum_{j=1}^{m} f_j^2(x)$ on $\mathbb{R}^n$ yields a minimal value $f_g$ on the $f$ axis, while the values of local best approximations are denoted by $f_{\ell 1}, f_{\ell 2}, \dots$ in Figure 1. A typical application of step 1 will dash down to the $g$ axis from a high starting point on the $f$ axis. Then our suggestion to start step 2 with small values of penalty parameters will cause the method to creep along the $g$ axis towards the origin, avoiding large values of $f$. This can be put on a more rigorous basis by using small values of $\lambda$ in

Theorem 2. Let the given starting values $y_j := x^{(1)}$, $1 \le j \le m$, $y := 0$ and $\varepsilon > 0$ satisfy
$$\varepsilon > \sum_{j=1}^{m} f_j^2(x^{(1)}).$$
The minimization of
$$\sum_{j=1}^{m} f_j^2(y_j + y) + \frac{\lambda}{2} \sum_{j=1}^{m} \|y_j\|_2^2 \qquad (15)$$
for the penalty function (4) with $\lambda$ bounded by
$$0 < \lambda \le \frac{2 \left( \varepsilon - \sum_{j=1}^{m} f_j^2(x^{(1)}) \right)}{m \, \|x^{(1)}\|_2^2}$$
will yield parameters $y_1^{(2)}, \dots, y_m^{(2)}$ with
$$\sum_{j=1}^{m} f_j^2(y_j^{(2)} + y^{(2)}) \le \varepsilon,$$
provided that an iteration with guaranteed descent of (15) is used.

Proof: For each iterate,
$$\sum_{j=1}^{m} f_j^2(y_j + y) \le \sum_{j=1}^{m} f_j^2(y_j + y) + \frac{\lambda}{2} \sum_{j=1}^{m} \|y_j\|_2^2 \le \sum_{j=1}^{m} f_j^2(x^{(1)}) + \frac{\lambda}{2} \, m \, \|x^{(1)}\|_2^2 \le \varepsilon.$$

Similar results can easily be obtained for the other penalty functions. The major purpose of Theorem 2 is to keep the principal error term within reasonable bounds while trying to coalesce the parameters as far as possible.
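The bound on $\lambda$ in Theorem 2 is trivial to evaluate; the following ad-hoc helper (a minimal sketch, not from the paper) returns the largest admissible value for given residual functions, starting point, and error level $\varepsilon$.

```python
import numpy as np

def admissible_lambda(f_list, x1, eps):
    """Largest penalty parameter allowed by the bound of Theorem 2 (ad-hoc helper).

    With y_j = x1 and y = 0 as starting values, any descent iteration on (15)
    then keeps sum_j f_j(y_j + y)^2 below eps."""
    m = len(f_list)
    start = sum(fj(x1) ** 2 for fj in f_list)      # sum_j f_j(x^(1))^2
    if eps <= start:
        raise ValueError("Theorem 2 requires eps > sum_j f_j(x^(1))^2")
    return 2.0 * (eps - start) / (m * np.dot(x1, x1))
```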

Another easy result follows for large $\lambda$ in the penalty function (4):

Theorem 3. Assume $y_j, y$ and $\bar{y}_j, \bar{y}$, $1 \le j \le m$, in $\mathbb{R}^n$ satisfy
$$\sum_{j=1}^{m} f_j^2(y_j + y) + \frac{\lambda}{2} \sum_{j=1}^{m} \|y_j\|_2^2 \le \sum_{j=1}^{m} f_j^2(\bar{y}_j + \bar{y}) + \frac{\lambda}{2} \sum_{j=1}^{m} \|\bar{y}_j\|_2^2,$$
$$\sum_{j=1}^{m} f_j^2(\bar{y}_j + \bar{y}) + \frac{\mu}{2} \sum_{j=1}^{m} \|\bar{y}_j\|_2^2 \le \sum_{j=1}^{m} f_j^2(y_j + y) + \frac{\mu}{2} \sum_{j=1}^{m} \|y_j\|_2^2. \qquad (16)$$
Then for $\lambda < \mu$ we have
$$\sum_{j=1}^{m} \|\bar{y}_j\|_2^2 \le \sum_{j=1}^{m} \|y_j\|_2^2, \qquad \sum_{j=1}^{m} f_j^2(y_j + y) \le \sum_{j=1}^{m} f_j^2(\bar{y}_j + \bar{y}). \qquad (17)$$
If the monotonic function
$$\lambda \mapsto \sum_{j=1}^{m} f_j^2(y_j + y) + \frac{\lambda}{2} \sum_{j=1}^{m} \|y_j\|_2^2$$
stays bounded for $\lambda \to \infty$, then $y_j \to 0$ for $\lambda \to \infty$ and all $j$, $1 \le j \le m$.

The theorem is a consequence of the theory of penalty function methods (see e.g. Fletcher [2]). Inequalities (16) will typically hold if (15) is minimized by a method that ensures (weak) descent and if $y_j, y$ and $\bar{y}_j, \bar{y}$ are optimal for minimization with $\lambda$ and $\mu$ fixed, respectively, where $\bar{y}_j, \bar{y}$ are admissible for minimization with $\lambda$ fixed, and conversely. Then Theorem 3 allows to force the $y_j$ to coalesce for large $\lambda$, if the penalized function (15) remains bounded during the iteration. This can be used as the final strategy of the internal iteration within step 2 of the basic algorithm. Standard arguments of nonlinear optimization show that minimization of (15) for large $\lambda$ is equivalent to minimization of $\sum_{j=1}^{m} f_j^2(y_j + y)$ under a rather strict constraint $\sum_{j=1}^{m} \|y_j\|_2^2 \le \varepsilon$. Thus, if the limits
$$\lim_{\lambda \to \infty} y_j = 0, \qquad \lim_{\lambda \to \infty} y = y^*$$
exist, the point $y^*$ is critical for (9). This shows that step 3 is not necessary, if the penalized objective function remains bounded when the penalty parameters are driven to infinity in step 2.

The two extreme cases described above leave the strategy of the internal iteration within step 2 for "moderate" values of $\lambda$ open. As described by (17), pushing $\lambda$ up will normally increase
$$\sum_{j=1}^{m} f_j^2(y_j + y) =: f_\lambda$$
and decrease
$$\sum_{j=1}^{m} \|y_j\|_2^2 =: g_\lambda,$$

while lowering $\lambda$ has the opposite effect. If the value of $f_\lambda$ gets much larger than expected or wanted when pushing $\lambda$ up, the user may prefer to go back to $\lambda = 0$ for a couple of iterations, hoping that this restart will give a better starting point for step 2. Monitoring the values $(f_\lambda, g_\lambda)$ in a diagram like Figure 1 may help to detect progress.

For all penalty functions of Section 2 the critical points $(y_1^*, \dots, y_m^*, y^*)$ of $\|H_\Lambda\|^2$ for $\Lambda$ fixed correspond to points $z_j^* = y_j^* + y^*$ of the set
$$M := \left\{ z = (z_1, \dots, z_m)^T \in \mathbb{R}^{mn} \ \middle| \ \sum_{j=1}^{m} f_j(z_j) \, \nabla f_j(z_j) = 0 \right\}.$$
This follows from
$$\nabla_y \|H_\Lambda\|^2 = \sum_{j=1}^{m} \nabla_y f_j^2(y_j + y) = 2 \sum_{j=1}^{m} (f_j \, \nabla f_j)(y_j + y)$$
for arguments $(y_1, \dots, y_m, y)$ of $H_\Lambda$, independent of the penalty parameters. Control of $\Lambda$ will steer $y_j^* + y^*$ along $M$, possibly jumping between connected components of $M$. Penalty function (4), if fully optimized for each value of $\lambda$, will generically produce pieces of curves on $M$, which may be visualized as curves in Figure 1 if we plot $(f_\lambda, g_\lambda)$ as a function of $\lambda$. However, we do not recommend to follow these curves closely, because this would spoil the main advantage of the method: its high space dimension allows a lot of freedom to manoeuvre around. Performing just one linearization step for each instance of penalty parameters $\Lambda$ is quite enough and allows much more freedom, jumping between many trajectories on $M$.
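A minimal sketch of this "one linearization step per choice of the penalty parameters" strategy is given below, assuming the simple penalty (4) (so that $\Lambda_j = \sqrt{\lambda/2}\, I$) and reusing the `reduced_gauss_newton_step` sketch given after the proof of Theorem 1; the schedule of penalty parameters, the absence of damping, and all names are illustrative choices, not the paper's.

```python
import numpy as np

def step2_continuation(f_list, grad_list, y1, lam_schedule):
    """Sketch of the inner iteration of step 2: one linearized step per penalty
    parameter, with the (strictly positive) parameter driven along a schedule.

    Reuses reduced_gauss_newton_step from the sketch after Theorem 1 and the
    simple penalty (4), i.e. Lambda_j = sqrt(lam/2) * I; no damping and no
    monitoring of the (f, g) diagram is included."""
    m, n = y1.shape
    ys, y = y1.copy(), np.zeros(n)
    for lam in lam_schedule:                 # e.g. small values first, then larger ones
        f_vals = np.array([fj(yj + y) for fj, yj in zip(f_list, ys)])
        grads = np.array([gj(yj + y) for gj, yj in zip(grad_list, ys)])
        lam_mats = np.full((m, n), np.sqrt(0.5 * lam))
        zs, z = reduced_gauss_newton_step(f_vals, grads, ys, lam_mats)
        ys, y = ys + zs, y + z
    return ys, y
```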

[Figure 2. The unattainable set S]

Theoretically, the best strategy would be to follow the boundary of the "unattainable set"
$$S = \left\{ (f, g) \in \mathbb{R}^2 \ \middle| \ \text{if } g = \sum_{j=1}^{m} \|y_j\|_2^2, \ y_j \in \mathbb{R}^n, \text{ then } \sum_{j=1}^{m} f_j^2(y_j + y) > f \text{ for all } y \in \mathbb{R}^n \right\}$$
in Figure 2, which always links the global minimum of (9) with the global minimum of $\sum_{j=1}^{m} f_j^2(y_j)$ having least value of $\sum_{j=1}^{m} \|y_j\|_2^2$. However, this is not easier to solve than the original problem, but it indicates that one should try to keep as "southwest" in Figures 1 and 2 as possible.

Since the set $M$ is described by $n$ equations on $\mathbb{R}^{nm}$, the penalty function (6) has enough degrees of freedom to allow appropriate "steering" along $M$ in a sophisticated way that may involve a lot of information about $f_j(y_j)$ or $\nabla f_j(y_j)$. Thus the user may assign large penalty parameters to "good" $y_j$ (or components thereof) to make sure that they will not be moved around too much. Details can easily be studied for $m = 2$ and $n = 1$, but we will leave further analysis to future work.

Ute Jäger [3] treated the penalty function (4) with $\lambda = \lambda(g_\lambda) \ge 0$ and showed that nondegenerate attractors $\tilde{x}$ for (9) on $\mathbb{R}^n$ yield nondegenerate attractors $(0, \dots, 0, \tilde{x})$ for (15) on $\mathbb{R}^{n(m+1)}$, provided that a well-controlled trust-region method is used.

§5. Application to discrete rational approximation

On a finite subset $T = \{t_1, \dots, t_m\} \subset [0, +\infty)$ we consider the approximation of a function $\varphi : T \to \mathbb{R}$ by rational functions
$$\Phi(x, t) = \frac{p(x, t)}{q(x, t)}, \qquad t \in [0, +\infty), \qquad (18)$$
where
$$p(x, t) = \sum_{i=1}^{k} x_i t^{i-1}, \qquad q(x, t) = 1 + \sum_{i=k+1}^{n} x_i t^{i-k}.$$
We use (13) and obtain a nonlinear least-squares problem of type (9).
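For illustration, the residuals (13) of the rational family (18) can be evaluated as follows; the function name and the coefficient layout are ad-hoc choices, not part of the paper.

```python
import numpy as np

def rational_residuals(x, t, phi_vals, k):
    """Residuals f_j(x) = Phi(x, t_j) - phi(t_j) of (13) for the rational family (18).

    x        : parameter vector, x[:k] numerator and x[k:] denominator coefficients
    t        : array of nodes t_j
    phi_vals : array of data values phi(t_j)
    k        : number of numerator coefficients
    No safeguard against zeros of the denominator q is included."""
    p = np.polyval(x[:k][::-1], t)              # p(x,t) = sum_{i=1}^k x_i t^{i-1}
    q = 1.0 + t * np.polyval(x[k:][::-1], t)    # q(x,t) = 1 + sum_{i=k+1}^n x_i t^{i-k}
    return p / q - phi_vals
```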

To study step 1 of the algorithm, we note that $f_j(x)$ vanishes on the affine subspace of $\mathbb{R}^n$ consisting of all $x \in \mathbb{R}^n$ with
$$p(x, t_j) - \varphi(t_j) \, q(x, t_j) = 0 \qquad (19)$$
for $1 \le j \le m$, except for points where
$$p(x, t_j) = 0 = q(x, t_j)$$
holds, and at which l'Hospital's rule for evaluating $\Phi(x, t_j)$ does not yield $\varphi(t_j)$. If we ignore these exceptional $x$ for a moment, we find that minimizers $y_j^{(1)}$ of $f_j^2$ with $f_j(y_j^{(1)}) = 0$ exist and form an affine subspace. Thus there normally are no problems with step 1 of the algorithm.

If step 2 is carried out with very small penalty parameters, it is comparable to a minimization of the positive definite penalty term under the affine constraints (19). Thus step 2 yields a unique minimizer $(y_1^{(2)}, \dots, y_m^{(2)}, y^{(2)})$ depending on $\Lambda$, but not on common factors of the various possible penalty variables in $\Lambda$. In particular, the minimizer does not depend on the result of step 1. It is a multiple parameter set that satisfies all linearized one-point interpolation problems
$$p(y_j^{(2)}, t_j) - \varphi(t_j) \, q(y_j^{(2)}, t_j) = 0, \qquad 1 \le j \le m,$$
and step 2 has tried to make the parameters $y_j^{(2)}$ as equal as possible. Thus for all discrete rational approximation problems a fixed implementation of the multi-parameter method will almost surely produce the same output for each possible starting parameter vector, since the exceptional points form a set of measure zero. Of course, this feature does not generalize to other nonlinear families of approximating functions.

§6. Examples

We take the example
$$f(x, y) = \sum_{i=1}^{4} f_i^2(x, y)$$
with residuals $(f_1, f_2, f_3, f_4)$ from [4] for $n = 2$, $m = 4$, with three critical points:
$(x, y) = (0, 0)$ is a saddle point with $f(x, y) = 2$;
$(x, y) = (0.2, -0.24)$ is a local minimum with $f(x, y) = 1.843$;
$(x, y) = (0.006, 0.079)$ is a global minimum with $f(x, y) = 1.49$.

For various possibilities of controlling $\lambda$ in step 2 we always found the multi-parameter method to converge to the global minimum (see also [3]). Figure 3 contains a contour plot of the approximation error for this example, where the saddle point and the global minimum are clearly visible. The local minimum is overlaid by its surrounding (black) basin of attraction of a regularized Gauss-Newton method. Regularization was done by a Levenberg-Marquardt strategy to prevent rank loss, and damping by a stepsize strategy was used to enforce descent even for small or zero Levenberg-Marquardt parameters. The same Gauss-Newton routine was used in step 3 of the multi-parameter algorithm. The corresponding plot for the multi-parameter method does not

contain the black blob, because the basin of attraction of the local minimum with respect to the multi-parameter method was empty.

[Figure 3. Contours and attractors]

The next example is taken from [1] and approximates $\varphi(t) = t^2 - 0.6$ on equidistant points of $[0, 1]$ by rational functions (18) with constant numerator and linear denominator. The problem is symmetric, and we plot the error in the $(x_1, x_2)$-plane in Figure 4. Note that there are singularities for $x_2 = -5/j$, $1 \le j \le 5$, and the horizontal axis $x_2 = 0$ extends through the global minimum at $x_1 = -0.2$, $x_2 = 0$ near the lower margin of Figure 4. The local minima and error values are

$x_1 = -0.1$, $x_2 = 0.1$, value $1.72$;
$x_1 = -0.03505$, $x_2 = 6.1$, value $1.232$;
$x_1 = -0.05399$, $x_2 = 6.1$, value $1.29$;
$x_1 = -0.1$, $x_2 = 62.1$, value $1.2$;
$x_1 = -0.06759$, $x_2 = 64.1$, value $1.207$;

the latter differing only by about 3 %, making the problem hard to solve globally, if not started near the global minimum. Figure 5 shows the basins of attraction of the stabilized Gauss-Newton method, where we used black and white alternately for the attractors.

[Figure 4. Contours of approximation error]
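For concreteness, this example can be set up in a few lines; the node count and the starting point below are arbitrary illustrative choices, and the call runs only a conventional Levenberg-Marquardt least-squares solver (the comparison method of Figure 5), not the multi-parameter algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the second example: approximate phi(t) = t^2 - 0.6 by a rational
# function with constant numerator x_1 and linear denominator 1 + x_2 t.
# The number of nodes and the starting point are arbitrary illustrative choices.
t = np.linspace(0.0, 1.0, 21)
phi = t ** 2 - 0.6

def residuals(x):
    return x[0] / (1.0 + x[1] * t) - phi        # f_j(x) = Phi(x, t_j) - phi(t_j)

# One run of a conventional damped Gauss-Newton (Levenberg-Marquardt) solver
# from a single starting point:
sol = least_squares(residuals, x0=np.array([0.0, 0.0]), method='lm')
print(sol.x, 2.0 * sol.cost)                    # parameters and sum of squared residuals
```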

The multi-parameter method always converged to the global minimum, though the global minimum does not have the "nicest" basin of attraction. Other discrete rational approximation problems we tested were less hazardous. In all cases the results were qualitatively the same. Tests on discrete exponential approximation problems, however, did not show such a good behaviour.

[Figure 5. Gauss-Newton attractors]

Acknowledgements: Special thanks go to Dr. Immo Diener for a series of useful discussions and some valuable help preparing the final version of the paper.

References

1. Diener, I., On Nonuniqueness in Nonlinear $L_2$-Approximation, J. Approx. Theory 51 (1987).
2. Fletcher, R., Practical Methods of Optimization, Vol. 1: Unconstrained Optimization, Wiley, Chichester.
3. Jäger, U., Das Mehrfach-Parameter-Verfahren unter Benutzung der trust-region-Methode zur Lösung nichtlinearer, diskreter Approximationsprobleme in der $L_2$-Norm, Diplomarbeit, Göttingen.
4. Nottbohm, K., Das Mehrfach-Parameter-Verfahren zur Lösung nichtlinearer diskreter Approximationsprobleme in der $L_2$-Norm, PhD Dissertation, Göttingen, 1986.

R. Schaback
Georg-August-Universität Göttingen
Lotzestrasse 16-18
D-3400 Göttingen, Germany
schaback@namu01.gwdg.de
