KKT Conditions for Rank-Deficient Nonlinear Least-Square Problems with Rank-Deficient Nonlinear Constraints


JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 100, No. 1, pp. 145-160, JANUARY 1999

M. GULLIKSSON²

Communicated by C. G. Broyden

Abstract. In nonlinear least-square problems with nonlinear constraints, the function $(1/2)\|f_2(x)\|_2^2$, where $f_2$ is a nonlinear vector function, is to be minimized subject to the nonlinear constraints $f_1(x) = 0$. This problem is ill-posed if the first-order KKT conditions do not define a locally unique solution. We show that the problem is ill-posed if either the Jacobian of $f_1$ or the Jacobian of $f = [f_1; f_2]$ is rank-deficient (i.e., not of full rank) in a neighborhood of a solution satisfying the first-order KKT conditions. Either of these ill-posed cases makes it impossible to use a standard Gauss-Newton method. Therefore, we formulate a constrained least-norm problem that can be used when either of these ill-posed cases occurs. By using the constant-rank theorem, we derive the necessary and sufficient conditions for a local minimum of this minimum-norm problem. The results given here are crucial for deriving methods solving the rank-deficient problem.

Key Words. Nonlinear least squares, optimization, regularization, KKT conditions, rank-deficient nonlinear constraints, rank-deficient nonlinear least-square problems.

1. Introduction

We will consider an important special case of ill-posed nonlinear least-square problems with nonlinear equality constraints. Let us first consider the unconstrained least-square problem

    $\min_x F(x) = (1/2)\|f(x)\|_2^2$,    (1)

¹The author thanks the reviewers for suggestions improving the manuscript considerably.
²Assistant Professor, Department of Computing Science, Umeå University, Umeå, Sweden.

where $f: R^n \to R^m$ is at least twice continuously differentiable and $\|\cdot\|_2$ is the 2-norm. The first-order KKT condition for (1) is

    $J(x)^T f(x) = 0$,    (2)

where $J = \partial f/\partial x$ is the Jacobian of $f$; a solution $\hat{x}$ to (2) will be called a critical point. For clarity, we sometimes denote functions or derivatives evaluated at $\hat{x}$ with a hat, e.g., $\hat{J} = J(\hat{x})$.

An important case of an ill-posed nonlinear least-square problem is when $J$ is of rank $r < n$ in a neighborhood of the critical point $\hat{x}$. This will be the main assumption in this paper, but it will not always be stated explicitly. Examples of rank-deficient problems are underdetermined problems (Ref. 1), nonlinear regression problems (Ref. 2), nonlinear total least-square problems (Ref. 3), and artificial neural networks (Ref. 4). Note that all these problems may have nonlinear rank-deficient constraints. Another equally important reason for looking at rank-deficient problems is the connection with regularization (Ref. 5). For Tikhonov regularization, it is often the case that the problem solved in the limit is a minimum-norm problem of the kind that we analyze here (Ref. 6).

The following theorem characterizes a problem that has a rank-deficient Jacobian in a neighborhood of a critical point. The theorem is in fact a corollary of Theorem 1.2 below and can be found in Ref. 7.

Theorem 1.1. Let $\hat{x}$ be a critical point, and let the rank of $J$ be equal to $r < n$ in a neighborhood of $\hat{x}$. Then, $\nabla^2_{xx} F(\hat{x})$ is a matrix of rank $r < n$ with its nullspace containing the nullspace of $J(\hat{x})$.

We may conclude that having $J$ rank-deficient makes (1) an ill-posed problem in the sense that (2) does not have a unique solution [though a local minimum to (1) may exist].
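To make the rank deficiency concrete, here is a minimal numerical sketch, entirely our own toy construction (not from the paper): $f$ depends on $x$ only through $t = x_1 + x_2$, so $J$ has rank $1 < n = 2$ in a whole neighborhood, condition (2) holds at $t = 1$, and the Gauss-Newton normal matrix $J^T J$ is singular, which is why a standard Gauss-Newton method breaks down.

```python
import numpy as np

def f(x):
    t = x[0] + x[1]
    return np.array([t - 1.0, 0.5 * (t - 1.0) ** 2])

def jac(x):
    t = x[0] + x[1]
    df1 = np.array([1.0, 1.0])              # row of J for f_1
    df2 = (t - 1.0) * np.array([1.0, 1.0])  # row of J for f_2
    return np.vstack([df1, df2])

x = np.array([0.7, 0.3])            # critical point: t = 1, so f(x) = 0
J = jac(x)
print(np.linalg.matrix_rank(J))     # 1 < n = 2: J rank-deficient everywhere
print(J.T @ f(x))                   # condition (2) holds: [0, 0]
print(np.linalg.det(J.T @ J))       # 0: Gauss-Newton normal matrix singular
```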

Consider now the nonlinear least-square problem with nonlinear constraints. We formulate this problem as

    $\min_x (1/2)\|f_2(x)\|_2^2$  subject to  $f_1(x) = 0$,    (3)

where $f_1: R^n \to R^{m_1}$ and $f_2: R^n \to R^{m_2}$, with $m_1 + m_2 = m$ for the sake of simplicity. For notational convenience, we define

    $f = [f_1; f_2]$,  $J = \partial f/\partial x = [J_1; J_2]$,

with $J_1 = \partial f_1/\partial x$ and $J_2 = \partial f_2/\partial x$. The first-order KKT conditions for this problem read

    $f_1(x) = 0$,  $J_2(x)^T f_2(x) + J_1(x)^T \lambda_1 = 0$.    (4)

We will call a solution to (4) a critical point. As for the unconstrained problem, we assume that $J$ is rank-deficient in a neighborhood of the critical point of interest. We will also analyze the case when $J_1$ is of rank $s < m_1$ in a neighborhood of a critical point. It is easy to state the KKT conditions when both $J$ and $J_1$ have full rank in a neighborhood of the solution. If either $J$ or $J_1$ is not of full rank, we say that the problem is ill-posed. We will motivate this statement further before going into the different problem reformulations.

It is natural to consider the constrained problem (3) ill-posed if (4) does not have a locally unique solution. This will be the case if the matrix

    $K = \begin{bmatrix} \nabla^2_{xx}\mathcal{L} & J_1^T \\ J_1 & 0 \end{bmatrix}$,  $\nabla^2_{xx}\mathcal{L} = J_2^T J_2 + f_2 \odot f_2'' + \lambda_1 \odot f_1''$,    (5)

is singular, where $\mathcal{L}(x, \lambda_1) = (1/2)\|f_2(x)\|_2^2 + \lambda_1^T f_1(x)$ is the Lagrangian. Here, we have introduced the operator $\odot$ defined as

    $y \odot g'' = \sum_{i=1}^m y_i \nabla^2 g_i(x)$,

for $y \in R^m$ and $g: R^n \to R^m$ a twice continuously differentiable function. We have the following lemma.

Lemma 1.1. Define $P_{\mathcal{N}(J_1)}$ as the projection onto the nullspace of $J_1$, the Jacobian of $f_1$. The matrix $K$ in (5) is singular if and only if $J_1^T$ or $P_{\mathcal{N}(J_1)} \nabla^2_{xx}\mathcal{L} P_{\mathcal{N}(J_1)}$ is rank-deficient.

Proof. The only-if part is proved by considering

    $K [p; q] = 0$,

giving

    $\nabla^2_{xx}\mathcal{L}\, p + J_1^T q = 0$,  $J_1 p = 0$,

and thus, if $p = 0$, that $J_1^T q = 0$ with $q \neq 0$, while $p \neq 0$ gives $p \in \mathcal{N}(J_1)$ and $P_{\mathcal{N}(J_1)} \nabla^2_{xx}\mathcal{L}\, p = 0$, which by projection on $\mathcal{N}(J_1)$ proves the only-if part. For the converse statement, we assume that $J_1^T$ has full rank ($J_1^T$ rank-deficient is trivial) and that $P_{\mathcal{N}(J_1)} \nabla^2_{xx}\mathcal{L} P_{\mathcal{N}(J_1)}$ has not full rank. Then, we have to show that the matrix $K$ has a nontrivial nullspace; i.e., that there exists $[p; q] \neq 0$ such that $K[p; q] = 0$. It is always possible to choose $p \in \mathcal{N}(J_1)$ with $P_{\mathcal{N}(J_1)} \nabla^2_{xx}\mathcal{L}\, p = 0$, so that $\nabla^2_{xx}\mathcal{L}\, p \in \mathcal{R}(J_1^T)$ and $q$ can be chosen to satisfy $J_1^T q = -\nabla^2_{xx}\mathcal{L}\, p$; hence, the lemma is proved. □
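For concreteness, the following sketch, again with our own toy $f_1 = x_1 + x_2 - 1$ and $f_2 = (x_1 + x_2)^2/2$ (so that $J$ has rank 1 but $J_1$ has full rank), assembles $K$ from (5) at a critical point and confirms numerically that both $K$ and the projected Hessian in Lemma 1.1 are singular:

```python
import numpy as np

# Critical point of the toy problem: t = x1 + x2 = 1
x = np.array([0.5, 0.5]); t = x.sum()
J1 = np.array([[1.0, 1.0]])               # Jacobian of f1 = t - 1
J2 = np.array([[t, t]])                   # Jacobian of f2 = t^2/2
f2 = np.array([t ** 2 / 2])
lam = -0.5                                # solves J2^T f2 + J1^T lam = 0

E = np.ones((2, 2))                       # Hessian of t^2/2 w.r.t. x
H = J2.T @ J2 + f2[0] * E + lam * np.zeros((2, 2))   # ∇²_xx L from (5); f1''=0
K = np.block([[H, J1.T], [J1, np.zeros((1, 1))]])
print(np.linalg.svd(K)[1])                # smallest singular value ≈ 0

p = np.array([[1.0], [-1.0]]) / np.sqrt(2.0)   # basis of N(J1)
print(p.T @ H @ p)                        # ≈ 0: projected Hessian singular
```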

We will assume that $m_1 < r$. This assumption may be regarded as a constraint qualification (see Lemma 2.1) when $J$ is rank-deficient and does not appear to be a severe restriction in practice. The assumption is used implicitly in the following theorem, which we will prove in Section 2.2.

Theorem 1.2. Assume that $J$ is rank-deficient in a neighborhood of a critical point. Then, $\nabla^2_{xx}\mathcal{L}$ in (5) is singular with $\mathcal{N}(J) \subseteq \mathcal{N}(\nabla^2_{xx}\mathcal{L})$ and $\mathcal{R}(\nabla^2_{xx}\mathcal{L}) = \mathcal{R}(J^T)$. Moreover, $P_{\mathcal{N}(J_1)} \nabla^2_{xx}\mathcal{L} P_{\mathcal{N}(J_1)}$ (and thus $K$) is singular, with a nullspace containing $\mathcal{N}(J_1) \cap \mathcal{N}(J)$.

Theorem 1.2 makes it clear that $J$ or $J_1$ rank-deficient in a neighborhood of a critical point gives an ill-posed problem.

Now, we turn to the question of reformulating our problems. In the unconstrained case, it is natural to find the minimum-norm solution when $J$ is rank-deficient, since it is of interest that the solution be of reasonable size with a residual as small as possible. Therefore, we may use the minimum-norm problem

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $x \in \arg\min_x (1/2)\|f(x)\|_2^2$.

In Ref. 7, necessary and sufficient conditions for a local minimum of this minimum-norm problem are derived. The center $x_c$ is chosen from a priori information and should ideally be an approximation of the solution.
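As intuition for the role of $x_c$, consider the linear case $f(x) = Ax - b$ with $A$ rank-deficient: the minimum-norm problem above then has the classical closed-form solution $x = x_c + A^+(b - Ax_c)$, where $A^+$ is the Moore-Penrose pseudoinverse. The following sketch (toy data of our own) checks that this $x$ is stationary for the inner least-squares problem and that $x - x_c$ has no component in $\mathcal{N}(A)$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)); A[:, 2] = 0.0   # rank 2 < n = 3
b = rng.standard_normal(5)
xc = np.array([1.0, 1.0, 1.0])                   # a priori center

x = xc + np.linalg.pinv(A) @ (b - A @ xc)        # minimum-norm solution
print(A.T @ (A @ x - b))      # ≈ 0: x solves the inner least-squares problem
print(x[2] - xc[2])           # 0: x - xc has no component in N(A) = span(e3)
```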

One possible extension of the unconstrained problem to the rank-deficient constrained problem is to consider

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $x \in \arg\min \{(1/2)\|f_2(x)\|_2^2 : f_1(x) = 0\}$.    (6)

Problem (6) is to be understood as minimizing $\|x - x_c\|_2$, where $x$ is in the solution set of problem (3). If, in addition, the constraints are ill-posed in the sense that $J_1$ is rank-deficient in a neighborhood around a critical point, we formulate the problem as

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $x \in \arg\min \{(1/2)\|f_2(x)\|_2^2 : x \in \arg\min_x (1/2)\|f_1(x)\|_2^2\}$.    (7)

Again, these three minimization problems are to be thought of as finding the minimum distance to $x_c$, subject to $x$ being in the solution set of the two inner minimization problems. For $f_1$ and $f_2$ both linear, a formula for the least-norm solution of (7) with $x_c = 0$ is given in Ref. 8. To the best of our knowledge, the nonlinear problem has not been treated elsewhere.

2. Constrained Full-Rank Case

In this section, we consider problem (6). We will assume that only $J$ is rank-deficient and that $J_1$ is of full rank in a neighborhood of a critical point $\hat{x}$. Consequently, problem (6) is more or less a straightforward generalization of the unconstrained case, inheriting the same type of rank deficiency.

The constant-rank theorem (Ref. 9) implies that there exist functions $h: R^r \to R^m$ and $z: R^n \to R^r$ such that $f(x) = h(z(x))$, with rank$(\partial h/\partial z)$ = rank$(\partial z/\partial x) = r$ in a neighborhood of $\hat{x}$. Using this representation in (6) and assuming that $J_1$ has full rank gives

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $x \in \arg\min \{(1/2)\|h_2(z(x))\|_2^2 : h_1(z(x)) = 0\}$,    (8)

where $h = [h_1; h_2]$ is partitioned like $f$.

This problem decouples into

    $\min_z (1/2)\|h_2(z)\|_2^2$  subject to  $h_1(z) = 0$,    (9)

with a solution $\hat{z}$, and

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $z(x) = \hat{z}$.    (10)

If (9) is going to be a meaningful problem, it is necessary to add a constraint qualification. It seems natural to make the standard assumption that $\partial h_1/\partial z$ has full row rank. In the following lemma, the implications of this assumption are stated.

Lemma 2.1. Assume that $J$ has rank $r < n$ and that $f_1(x) = h_1(z(x))$ is attained from the constant-rank theorem. If $\partial h_1/\partial z$ has full row rank, then $m_1 \leq r$; moreover, $\partial h_1/\partial z$ has a nontrivial nullspace if and only if $\mathcal{N}(J_1) \cap \mathcal{R}(J^T) \neq \{0\}$.

Proof. Since the constant-rank theorem tells us that $\partial h_1/\partial z$ has full rank, it is necessary that $m_1 \leq r$. From the chain rule, it is also seen that $\partial h_1/\partial z = J_1 V_1$ with $\mathcal{R}(V_1) = \mathcal{R}(J^T)$. We see that $\partial h_1/\partial z$ has a nontrivial nullspace if and only if $\mathcal{N}(J_1) \cap \mathcal{R}(J^T) \neq \{0\}$. □

2.1. Necessary Conditions for Local Minimum. We have the following theorem.

Theorem 2.1. Let $f = [f_1; f_2]: R^n \to R^m$ be a function whose Jacobian $J$ is of rank $r \leq n$, and assume that $f_1: R^n \to R^{m_1}$ has a Jacobian $J_1$ of rank $m_1$ in a neighborhood of $\hat{x}$. Then, a necessary condition for (6) to have a local minimum at $\hat{x}$ is that there exist vectors $\hat{\lambda}_1$ and $\hat{y}$ such that

    $f_1(\hat{x}) = 0$,
    $\hat{J}_2^T \hat{f}_2 + \hat{J}_1^T \hat{\lambda}_1 = 0$,
    $\hat{x} - x_c = \hat{J}^T \hat{y}$.

Proof. We use the two problems (9) and (10). A necessary condition for (9) to have a local minimum is that $h_1 = 0$ and the gradient of the Lagrangian is zero, i.e.,

    $(\partial h_2/\partial z)^T h_2 + (\partial h_1/\partial z)^T \lambda_1 = 0$.

From the chain rule, we get

    $J = (\partial h/\partial z)(\partial z/\partial x)$,    (12)

and a necessary condition is then

    $J_2^T f_2 + J_1^T \lambda_1 = 0$,

which is the second condition given in the theorem, with $h(\hat{z}) = f(\hat{x})$ and $\lambda_1 = \hat{\lambda}_1$ at $\hat{x}$. A necessary first-order condition for (10) to have a local minimum is that

    $x - x_c + (\partial z/\partial x)^T \mu = 0$,

or

    $x - x_c \in \mathcal{R}((\partial z/\partial x)^T)$.

From (12), we have that $\mathcal{R}(J^T) = \mathcal{R}((\partial z/\partial x)^T)$, proving the third condition in the theorem. We may add that it is also possible to prove this statement by looking at the necessary condition for

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $J_2^T f_2 + J_1^T \lambda_1 = 0$,  $f_1 = 0$.

We then attain the condition

    $x - x_c \in \mathcal{R}(J_2^T J_2 + \lambda_1 \odot f_1'' + f_2 \odot f_2'')$.

According to Theorem 1.2, $J_2^T J_2 + \lambda_1 \odot f_1'' + f_2 \odot f_2''$ has a range space containing $\mathcal{R}(J^T)$, and the statement is proved again. □

There is more to say about the structure of the constrained problem than the theorem reveals. Consider the problem

    $\min_x (1/2)\|f_2(x)\|_2^2$  subject to  $f_1(x) = 0$.    (13)

We can define the Lagrange function as

    $\mathcal{L}(x, \lambda_1) = (1/2)\|f_2(x)\|_2^2 + \lambda_1^T f_1(x)$,    (14)

and the gradient is

    $\nabla_x \mathcal{L} = J_2^T f_2 + J_1^T \lambda_1$,

giving the second condition as in the proof of Theorem 2.1. We will use this formulation in the next section.
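As a sanity check of Theorem 2.1, the sketch below verifies the three conditions on our running toy problem ($f_1 = x_1 + x_2 - 1$, $f_2 = (x_1 + x_2)^2/2$, $x_c = (2, 0)$, all our own choices), whose minimum-norm solution is the projection of $x_c$ onto the line $x_1 + x_2 = 1$:

```python
import numpy as np

xc = np.array([2.0, 0.0])
# Solution set of (3) for the toy data is the line x1 + x2 = 1;
# the point on it closest to xc:
x = xc - 0.5 * (xc.sum() - 1.0) * np.array([1.0, 1.0])
t = x.sum()
J1 = np.array([[1.0, 1.0]]); J2 = np.array([[t, t]])
J = np.vstack([J1, J2])
f1 = np.array([t - 1.0]); f2 = np.array([t ** 2 / 2])

print(f1)                                       # condition 1: f1(x) = 0
lam1, *_ = np.linalg.lstsq(J1.T, -J2.T @ f2, rcond=None)
print(J2.T @ f2 + J1.T @ lam1)                  # condition 2: ≈ 0
y, *_ = np.linalg.lstsq(J.T, x - xc, rcond=None)
print(J.T @ y - (x - xc))                       # condition 3: x - xc ∈ R(J^T)
```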

2.2. Sufficient Conditions. In order to derive the sufficient conditions, we state the following lemma.

Lemma 2.2. If $f = h(z(x)) \in R^m$ with $J$ of rank $r < n$ and $y \in R^m$, then

    $y \odot f'' = (\partial z/\partial x)^T (y \odot h'') (\partial z/\partial x) + \sum_{k=1}^r [(\partial h/\partial z)^T y]_k \nabla^2 z_k$.

Proof. From the chain rule, we have that

    $\nabla^2 f_i = (\partial z/\partial x)^T (e_i \odot h'') (\partial z/\partial x) + \sum_{k=1}^r [(\partial h/\partial z)^T e_i]_k \nabla^2 z_k$,

where $e_i$ is the $i$th column in the identity matrix. The statement is easily attained from this by looking at $y \odot f''$. □

The following two corollaries are a direct consequence of Lemma 2.2; note that $J^T y = (\partial z/\partial x)^T (\partial h/\partial z)^T y = 0$ implies $(\partial h/\partial z)^T y = 0$, since $\partial z/\partial x$ has full row rank.

Corollary 2.1. If $J^T y = 0$, then

    $y \odot f'' = (\partial z/\partial x)^T (y \odot h'') (\partial z/\partial x)$.

Corollary 2.2. If $J^T y = 0$ and $f$, $y$ are partitioned as in (7), then

    $y_1 \odot f_1'' + y_2 \odot f_2'' = (\partial z/\partial x)^T (y_1 \odot h_1'' + y_2 \odot h_2'') (\partial z/\partial x)$.

Consider (13) again. From the lemmas above, it is seen that the Hessian of the Lagrange function is

    $\nabla^2_{xx}\mathcal{L} = J_2^T J_2 + f_2 \odot f_2'' + \lambda_1 \odot f_1''$.

From the second necessary condition in Theorem 2.1, we have that

    $J^T [\lambda_1; f_2] = 0$,

or, by the properties of the operator $\odot$ and Corollary 2.2,

    $\lambda_1 \odot f_1'' + f_2 \odot f_2'' = (\partial z/\partial x)^T (\lambda_1 \odot h_1'' + h_2 \odot h_2'') (\partial z/\partial x)$.
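A small numerical check of Corollary 2.1 on the explicit factorization $f = h(z(x))$ with $z(x) = x_1 + x_2$ and $h(z) = [z - 1, z^2/2]$ (our toy again): for any $y$ with $J^T y = 0$, the $\nabla^2 z_k$ term of Lemma 2.2 drops out.

```python
import numpy as np

x = np.array([0.3, 0.7]); z = x.sum()      # toy factorization: z(x) = x1 + x2
dzdx = np.array([[1.0, 1.0]])              # ∂z/∂x, full row rank r = 1
dhdz = np.array([[1.0], [z]])              # ∂h/∂z for h(z) = [z - 1, z²/2]
J = dhdz @ dzdx
y = np.array([-z, 1.0])                    # chosen so that J^T y = 0
print(J.T @ y)                             # ≈ 0

# y ⊙ f'' = y1 ∇²f1 + y2 ∇²f2, with ∇²f1 = 0 and ∇²f2 = ones((2,2))
lhs = y[0] * np.zeros((2, 2)) + y[1] * np.ones((2, 2))
# (∂z/∂x)^T (y ⊙ h'') (∂z/∂x), with h1'' = 0 and h2'' = 1
rhs = (y[0] * 0.0 + y[1] * 1.0) * (dzdx.T @ dzdx)
print(np.abs(lhs - rhs).max())             # 0: second term of Lemma 2.2 vanishes
```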

Thus, $\nabla^2_{xx}\mathcal{L}$ is a matrix with rank $r$ and with a nullspace containing the nullspace of $\partial z/\partial x$, just as in the unconstrained case (Ref. 7). We have also proved Theorem 1.2.

For attaining the sufficient condition for a local minimum, we must restrict the Hessian of the Lagrange function to $\mathcal{N}(J)^\perp = \mathcal{R}(J^T)$, since any part in $\mathcal{N}(J)$ will give no information [locally, $z(x+p)$ is constant and $(\partial z/\partial x)p$ is zero for $p \in \mathcal{N}(J)$]. We then restrict our attention to

    $V_1^T \nabla^2_{xx}\mathcal{L} V_1$,

and, remaining in $\mathcal{R}(J^T)$, project this matrix on the nullspace of $J_1 V_1$. If we define a matrix $Z \in R^{r \times (r - m_1)}$ that spans the nullspace of $J_1 V_1$, we get the projected Hessian of the Lagrange function to be

    $Z^T V_1^T \nabla^2_{xx}\mathcal{L} V_1 Z$.

Another way of formulating the analysis above is to consider the Taylor expansion of $\mathcal{L}(\hat{x} + p, \hat{\lambda}_1)$. This expansion has a second-order term $p^T \nabla^2_{xx}\mathcal{L}\, p$. Since we know that $p$ must be in $\mathcal{R}(J^T)$, we have

    $p = V_1 w$.

We also know that $p$ is in the nullspace of $J_1$, that is,

    $J_1 V_1 w = 0$,

which gives

    $w = Z u$,

with $Z$ a matrix that spans the nullspace of $J_1 V_1$. Thus, the second-order term can be written as above. We summarize this in a theorem characterizing the local minimum of problem (13).

Theorem 2.2. Assume that $V_1$ is a matrix with $\mathcal{R}(V_1) = \mathcal{R}(J^T) = \mathcal{N}(J)^\perp$ and that $Z$ is a matrix with $\mathcal{R}(Z) = \mathcal{N}(J_1 V_1)$. If $\hat{x}$ is a local minimum to the problem (13), then the first two conditions in Theorem 2.1 are satisfied and the matrix

    $W_1 = Z^T V_1^T \nabla^2_{xx}\mathcal{L} V_1 Z$,

where $\mathcal{L}$ is the Lagrange function (14), is positive semidefinite. Conversely, if the two conditions are satisfied and $W_1$ is positive definite, then $\hat{x}$ is a local minimum to problem (13).
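Mechanically, $V_1$ and $Z$ can be obtained from singular value decompositions. The sketch below (random toy data; the symmetric matrix $H$ is only a stand-in for $\nabla^2_{xx}\mathcal{L}$, which we do not model here) shows how the projected Hessian $W_1$ of Theorem 2.2 would be assembled:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m1, r = 4, 1, 3
J = rng.standard_normal((5, r)) @ rng.standard_normal((r, n))   # rank r < n
J1 = J[:m1]

V1 = np.linalg.svd(J)[2][:r].T          # columns span R(J^T), n x r
Z = np.linalg.svd(J1 @ V1)[2][m1:].T    # columns span N(J1 V1), r x (r - m1)

H = rng.standard_normal((n, n)); H = H + H.T    # stand-in for ∇²_xx L
W1 = Z.T @ V1.T @ H @ V1 @ Z
print(W1.shape)                          # (r - m1, r - m1) = (2, 2)
print(np.linalg.eigvalsh(W1))            # Theorem 2.2: all ≥ 0 at a minimum
```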

We can now extend the previous theorem to the general minimum-norm problem (8).

Theorem 2.3. Assume that $V_1$, $V_2$ are matrices with $\mathcal{R}(V_1) = \mathcal{R}(J^T) = \mathcal{N}(J)^\perp$ and $\mathcal{R}(V_2) = \mathcal{N}(J)$, and that $Z$ is a matrix with $\mathcal{R}(Z) = \mathcal{N}(J_1 V_1)$. If $\hat{x}$ is a local minimum to the problem (8), then the conditions in Theorem 2.1 are satisfied and the matrices

    $W_1 = Z^T V_1^T \nabla^2_{xx}\mathcal{L} V_1 Z$,  $W_2 = V_2^T (I_n - \hat{y} \odot f'') V_2$

are positive semidefinite. Conversely, if the conditions in Theorem 2.1 are satisfied and $W_1$, $W_2$ are positive definite, then $\hat{x}$ is a local minimum to problem (8).
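The matrix $W_2$ involves the operator $\odot$ applied to the second derivatives of $f$. The following sketch (random stand-in data for $\hat{y}$ and the Hessians $\nabla^2 f_i$, which we do not derive from an actual problem) shows how $V_2$ and $W_2$ would be assembled:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 4, 5, 3
J = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r < n
V2 = np.linalg.svd(J)[2][r:].T          # columns span N(J), n x (n - r)

y = rng.standard_normal(m)              # stand-in for the multiplier ŷ
fpp = rng.standard_normal((m, n, n))
fpp = fpp + fpp.transpose(0, 2, 1)      # stand-ins for the Hessians ∇²f_i
y_dot_fpp = np.einsum('i,ijk->jk', y, fpp)   # the operator ⊙: Σ_i y_i ∇²f_i
W2 = V2.T @ (np.eye(n) - y_dot_fpp) @ V2
print(np.linalg.eigvalsh(W2))           # Theorem 2.3: all ≥ 0 at a minimum
```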

Proof. The original constrained problem decouples into the two problems (9) and (10). The results on $W_2$ are derived from problem (10). It is easily seen that the Hessian of the Lagrangian for this problem is $I_n - \mu \odot z''$, where

    $\mu = (\partial h/\partial z)^T \hat{y}$.

This matrix is to be projected onto $\mathcal{N}(\partial z/\partial x) = \mathcal{N}(J)$, i.e., $V_2^T (I_n - \mu \odot z'') V_2$. Using Lemma 2.2, with

    $V_2^T (\hat{y} \odot f'') V_2 = V_2^T (\mu \odot z'') V_2$,

we get the condition on $W_2$.

Analyzing (9), we determine the Hessian of the Lagrangian in (14). With $y = [\hat{\lambda}_1; \hat{h}_2]$ and using the necessary condition

    $(\partial h/\partial z)^T y = 0$,

we have from Lemma 2.2 that

    $\hat{\lambda}_1 \odot f_1'' + \hat{f}_2 \odot f_2'' = (\partial z/\partial x)^T (\hat{\lambda}_1 \odot h_1'' + \hat{h}_2 \odot h_2'') (\partial z/\partial x)$,

or

    $\nabla^2_{xx}\mathcal{L} = (\partial z/\partial x)^T [(\partial h_2/\partial z)^T (\partial h_2/\partial z) + \hat{\lambda}_1 \odot h_1'' + \hat{h}_2 \odot h_2''] (\partial z/\partial x)$,

with $V_1$ defined in the theorem. The Hessian of the Lagrange function is to be projected onto the nullspace of $\partial h_1/\partial z$. From

    $\partial h_1/\partial z = J_1 V_1$,

we get that the nullspace of $\partial h_1/\partial z$ is the nullspace of the matrix $J_1 V_1$. The projected Hessian of the Lagrange function is then

    $W_1 = Z^T V_1^T \nabla^2_{xx}\mathcal{L} V_1 Z$,

with $\mathcal{R}(Z) = \mathcal{N}(J_1 V_1)$. □

3. Rank-Deficient Constraints

Rank-deficient constraints really do not complicate the problem very much. We assume that $J_1$ has rank $s < m_1$, and the natural problem formulation is then the one given in (7). Again, we can use the powerful constant-rank theorem to formulate the following lemma.

Lemma 3.1. Assume that $J_1$ has rank $s < m_1$ and that $f_1(x) = h_1(z(x))$, where rank$(\partial z/\partial x) = r$. Then, $\partial h_1/\partial z$ has rank $s$, and there exist functions $c: R^s \to R^{m_1}$ and $d: R^r \to R^s$, whose Jacobians are of full rank, such that $h_1(z) = c(d(z))$.

Proof. From the chain rule, we get

    $J_1 = (\partial h_1/\partial z)(\partial z/\partial x)$.

Since $\partial z/\partial x$ has rank $r$ and $\partial f_1/\partial x$ has rank $s$, we get that $\partial h_1/\partial z$ has rank $s$. From the constant-rank theorem, we then get $h_1(z) = c(d(z))$. □

Using the lemma, we can formulate the constrained problem as

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $x \in \arg\min \{(1/2)\|h_2(z(x))\|_2^2 : z(x) \in \arg\min_z (1/2)\|c(d(z))\|_2^2\}$.    (16)

Problem (16) can be solved at three levels. First, we have $\hat{d}$ as the solution of

    $\min_d (1/2)\|c(d)\|_2^2$,    (17)

and the inner minimization problem becomes

    $\min_x (1/2)\|h_2(z(x))\|_2^2$  subject to  $d(z(x)) = \hat{d}$.    (18)

This problem decouples into

    $\min_z (1/2)\|h_2(z)\|_2^2$  subject to  $d(z) = \hat{d}$,    (19)

with a solution $\hat{z}$, and the final problem is again

    $\min_x (1/2)\|x - x_c\|_2^2$  subject to  $z(x) = \hat{z}$.    (20)

3.1. Necessary Conditions. One possible formulation of the necessary conditions for a local minimum is to use (7).

Theorem 3.1. A necessary condition for problem (7) to have a minimum at $\hat{x}$ is that there exist vectors $\hat{\lambda}_1$, $\hat{y}$ such that

    $\hat{J}_1^T \hat{f}_1 = 0$,
    $\hat{J}_2^T \hat{f}_2 + \hat{J}_1^T \hat{\lambda}_1 = 0$,
    $\hat{x} - x_c = \hat{J}^T \hat{y}$.
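Before the proof, a toy check of Theorem 3.1 (our own construction): with $s = x_1 + x_2 - 1$, take $f_1 = [s, s + 2]$, so that $J_1$ has rank $1 < m_1 = 2$ and $f_1(x) = 0$ is inconsistent, $f_2 = s^2/2$, and $x_c = (1, 0)$. The inner level of (7) forces $s = -1$, and the outer level projects $x_c$ onto the line $x_1 + x_2 = 0$:

```python
import numpy as np

xc = np.array([1.0, 0.0])
# min ||f1||^2 forces s = -1; the closest point to xc on x1 + x2 = 0:
x = xc - 0.5 * xc.sum() * np.array([1.0, 1.0])
s = x.sum() - 1.0
f1 = np.array([s, s + 2.0]); f2 = np.array([s ** 2 / 2])
J1 = np.array([[1.0, 1.0], [1.0, 1.0]]); J2 = s * np.array([[1.0, 1.0]])
J = np.vstack([J1, J2])

print(J1.T @ f1)                               # condition 1: J1^T f1 = 0
lam1, *_ = np.linalg.lstsq(J1.T, -J2.T @ f2, rcond=None)
print(J2.T @ f2 + J1.T @ lam1)                 # condition 2: ≈ 0
y, *_ = np.linalg.lstsq(J.T, x - xc, rcond=None)
print(J.T @ y - (x - xc))                      # condition 3: x - xc ∈ R(J^T)
```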

Proof. The first condition comes from problem (17). This problem has as a necessary condition for a local minimum that

    $(\partial c/\partial d)^T c = 0$.

From the chain rule, we get

    $J_1^T f_1 = (\partial z/\partial x)^T (\partial d/\partial z)^T (\partial c/\partial d)^T c$,

and since $\partial d/\partial z$, $\partial z/\partial x$ both have full rank, we get $J_1^T f_1 = 0$ at $\hat{x}$. To prove the second condition, we use problem (19). A necessary condition for (19) to have a local minimum is that $d(z) = \hat{d}$ and the gradient of the Lagrangian is zero, i.e.,

    $(\partial h_2/\partial z)^T h_2 + (\partial d/\partial z)^T \mu = 0$.

By defining $\lambda_1$ through

    $(\partial c/\partial d)^T \lambda_1 = \mu$,

and using the chain rule again, we get

    $J_2^T f_2 + J_1^T \lambda_1 = 0$,

which is the wanted second condition at $\hat{x}$. The proof of the last condition is given by the necessary conditions for problem (20) and is found in the proof of Theorem 2.1. □

3.2. Sufficient Conditions. By using the chain rule on $f_1(x) = c(d(z(x)))$, we get the following lemma.

Lemma 3.2. Assume that $f_1(x) = c(d(z(x)))$ and $J_1^T y = 0$. Then,

    $y \odot f_1'' = (\partial z/\partial x)^T (\partial d/\partial z)^T (y \odot c'') (\partial d/\partial z)(\partial z/\partial x)$.

We have a similar lemma for $f_1(x) = h_1(z(x))$.

Lemma 3.3. If $J_1^T y = 0$, then

    $y \odot f_1'' = (\partial z/\partial x)^T (y \odot h_1'') (\partial z/\partial x)$.

Finally, we have a lemma for $h_1(z) = c(d(z))$.

Lemma 3.4. Assume that $h_1(z) = c(d(z))$ and that $Z$ is a matrix such that $\mathcal{R}(Z) = \mathcal{N}(J_1 V_1)$. Then,

    $Z^T (y \odot h_1'') Z = Z^T ([(\partial c/\partial d)^T y] \odot d'') Z$.

Proof. From the chain rule, we get

    $y \odot h_1'' = (\partial d/\partial z)^T (y \odot c'') (\partial d/\partial z) + [(\partial c/\partial d)^T y] \odot d''$,

giving the lemma by looking at $Z^T (y \odot h_1'') Z$ and using the fact that the nullspace of $\partial h_1/\partial z$ contains the nullspace of $J_1 V_1$, so that $(\partial d/\partial z) Z = 0$. □

We are now able to prove the following theorem.

Theorem 3.2. Assume that $U_1$ is a matrix such that $\mathcal{R}(U_1) = \mathcal{R}(\hat{J}_1^T)$, that $V_1$, $V_2$ are matrices with $\mathcal{R}(V_1) = \mathcal{R}(J^T) = \mathcal{N}(J)^\perp$ and $\mathcal{R}(V_2) = \mathcal{N}(J)$, and that $Z$ is a matrix with $\mathcal{R}(Z) = \mathcal{N}(J_1 V_1)$. If $\hat{x}$ is a local minimum to the problem (16), then the conditions in Theorem 3.1 are satisfied and the matrices

    $W_1 = U_1^T (\hat{J}_1^T \hat{J}_1 + \hat{f}_1 \odot f_1'') U_1$,
    $W_2 = Z^T V_1^T \nabla^2_{xx}\mathcal{L} V_1 Z$,
    $W_3 = V_2^T (I_n - \hat{y} \odot f'') V_2$,

where $\nabla^2_{xx}\mathcal{L} = \hat{J}_2^T \hat{J}_2 + \hat{f}_2 \odot f_2'' + \hat{\lambda}_1 \odot f_1''$, are positive semidefinite. Conversely, if the conditions in Theorem 3.1 are satisfied and $W_1$, $W_2$, $W_3$ are positive definite, then $\hat{x}$ is a local minimum to problem (16).

Proof. For attaining the first condition, we consider problem (17). A necessary condition for a local minimum is that the Hessian

    $(\partial c/\partial d)^T (\partial c/\partial d) + \hat{c} \odot c''$

is positive semidefinite. From the chain rule, we get

    $\hat{J}_1 = (\partial c/\partial d)(\partial d/\partial z)(\partial z/\partial x)$,

and then

    $\hat{J}_1^T \hat{J}_1 = G^T (\partial c/\partial d)^T (\partial c/\partial d) G$,  with  $G = (\partial d/\partial z)(\partial z/\partial x)$.

Furthermore, we have that

    $\mathcal{R}(G^T) = \mathcal{R}(\hat{J}_1^T)$,

giving us

    $W_1 = U_1^T (\hat{J}_1^T \hat{J}_1 + \hat{f}_1 \odot f_1'') U_1$.

Defining $U_1$ as above, we get

    $W_1 = U_1^T G^T [(\partial c/\partial d)^T (\partial c/\partial d) + \hat{c} \odot c''] G U_1$.

The first condition is attained by using Lemma 3.2 with $y = \hat{f}_1$, which gives $\hat{f}_1 \odot f_1'' = G^T (\hat{f}_1 \odot c'') G$.

For the second condition, we introduce the Lagrange function

    $\mathcal{L}(x, \lambda_1) = (1/2)\|f_2(x)\|_2^2 + \lambda_1^T f_1(x)$,

and

    $\nabla^2_{xx}\mathcal{L} = \hat{J}_2^T \hat{J}_2 + \hat{f}_2 \odot f_2'' + \hat{\lambda}_1 \odot f_1''$.

The first two terms have been analyzed before and give

    $Z^T V_1^T (\hat{J}_2^T \hat{J}_2 + \hat{f}_2 \odot f_2'') V_1 Z$.

Further, with $Z$ defined in the theorem, we have from Lemma 3.3 and Lemma 3.4

    $Z^T V_1^T (\hat{\lambda}_1 \odot f_1'') V_1 Z = Z^T (\hat{\lambda}_1 \odot h_1'') Z = Z^T ([(\partial c/\partial d)^T \hat{\lambda}_1] \odot d'') Z$.

The last condition, on $W_3$, is proved exactly as in the proof of Theorem 2.3. □

Comparing the result of this theorem to the unconstrained case, one may be surprised that there is no condition containing $\mathcal{N}(J_1)$ corresponding to the matrix $V_2$ in $W_3$. However, the curvature in $\mathcal{N}(J_1)$ is taken care of in $W_2$, since this matrix contains the second-order information in $c$ restricted to $\mathcal{N}(J_1)$.

4. Conclusions

We have presented a formulation of the nonlinear least-square problem with nonlinear constraints that can be used to find a minimum-norm solution in the ill-posed case where either $J$ or $J_1$ is rank-deficient in a neighborhood of a critical point. Necessary and sufficient conditions for a local minimum have been derived. These conditions are easy to verify and can be used as a basis for constructing methods that solve the ill-posed problem.

References

1. WALKER, H. F., Newton-Like Methods for Underdetermined Systems, Lectures in Applied Mathematics, Vol. 26, 1990.

2. BATES, D., and WATTS, D., Nonlinear Regression Analysis and Its Applications, John Wiley, New York, New York, 1988.

3. VAN HUFFEL, S., and VANDEWALLE, J., The Total Least-Square Problem: Computational Aspects and Analysis, SIAM, Philadelphia, Pennsylvania, 1991.

4. ERIKSSON, J., GULLIKSSON, M., LINDSTRÖM, P., and WEDIN, P. Å., Regularization Tools for Training Feed-Forward Neural Networks, Part 2: Large-Scale Problems, Technical Report UMINF 96.06, Department of Computing Science, Umeå University, Umeå, Sweden, 1996.

5. HANSEN, P. C., Rank-Deficient and Discrete Ill-Posed Problems, Technical Report, Department of Mathematical Modelling, Section for Numerical Analysis, Technical University of Denmark, Lyngby, Denmark.

6. ERIKSSON, J., and GULLIKSSON, M., Local Results for the Gauss-Newton Method on Constrained Exactly Rank-Deficient Nonlinear Least Squares, Technical Report UMINF 97.12, Department of Computing Science, Umeå University, Umeå, Sweden, 1997.

7. ERIKSSON, J., Optimization and Regularization of Nonlinear Least-Square Problems, Technical Report UMINF (PhD Thesis), Department of Computing Science, Umeå University, Umeå, Sweden.

8. HANSON, R. J., and LAWSON, C. L., Solving Least-Square Problems, Prentice Hall, Englewood Cliffs, New Jersey, 1974.

9. CONLON, L., Differentiable Manifolds: A First Course, Birkhäuser Advanced Texts, Boston, Massachusetts, 1993.
