A null space method for solving system of equations

Applied Mathematics and Computation 149 (2004) 215–226

A null space method for solving system of equations

Pu-yan Nie
Department of Mathematics, Jinan University, Guangzhou 51063, PR China
State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 719, Beijing, PR China
E-mail address: pynie00@hotmail.com (P.-y. Nie)

(Supported partially by Chinese NNSF grants and the knowledge innovation program of CAS.)

Abstract

We transform the system of nonlinear equations into a nonlinear programming problem, which is solved by null space algorithms. We do not use the standard least squares approach. Instead, we divide the equations into two groups: one group contains the equations that are treated as equality constraints, while the sum of squares of the other equations is taken as the objective function. The two groups are updated at every step. In essence, two different methods are applied to the system of equations within one algorithm.
© 2003 Elsevier Inc. All rights reserved.

Keywords: Null space methods; Equality constraints; Nonlinear system of equations; Nonlinear programming; Global convergence

1. Introduction

In this paper, we consider the system of nonlinear equations

$$ c_i(x) = 0, \quad i = 1, 2, \ldots, m, \qquad (1.1) $$

where $x \in \mathbb{R}^n$. When (1.1) is solved by iterative methods, we use $x_k$, $k = 1, 2, \ldots$ to denote the successive iterates. An important method is based on successive

linearization, in which $d_k$ is calculated on iteration $k$ by solving the system of linear equations

$$ c_i^{(k)} + (a_i^{(k)})^T d = 0, \quad i = 1, 2, \ldots, m, \qquad (1.2) $$

where $c_i^{(k)} = c_i(x_k)$ and $a_i^{(k)} = \nabla c_i(x_k)$ $(i = 1, 2, \ldots, m)$. If $m = n$, (1.2) is Newton's method, which has local second order convergence near a regular solution. But in some cases (1.2) is inconsistent. An alternative approach is to pose (1.1) as a minimization problem

$$ \text{minimize } h(x) = c(x)^T c(x). \qquad (1.3) $$

This problem can also be solved by successive linearization. This idea helps to improve the global properties of Newton's method, but there are still potential difficulties. Powell [7] gives an example in which the iterates converge to a non-stationary point of $h(x)$, which is obviously unsatisfactory. Therefore, we build a constrained optimization problem to solve (1.1). Namely, we divide the set $\{1, 2, \ldots, m\}$ into two parts $S_1$ and $S_2$, where $S_2$ denotes the complement $\{1, 2, \ldots, m\} \setminus S_1$. Thus, our problem becomes

$$ \text{minimize } \sum_{i \in S_1} c_i(x)^2 \quad \text{subject to } c_j(x) = 0, \; j \in S_2. \qquad (1.4) $$

The choice of $S_1$ and $S_2$ is described after our algorithm in Section 2 of this paper. When (1.1) is infeasible, either a local minimizer of $h(x) > 0$ is found or a point is located at which the linearized system (1.2) is infeasible; we regard this as local infeasibility. Just as with the global minimization of $h(x)$, it is very difficult to characterize global infeasibility. There are few papers devoted to using optimization methods to solve nonlinear systems of equations. In this paper, we use optimization methods to solve (1.1).

The paper is organized as follows. In Section 2, an algorithm is put forward. The algorithm is analyzed in Section 3. Some numerical results and remarks are given in Section 4.
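To make the successive linearization (1.2) concrete, the following NumPy sketch (added here for illustration only; the function names, the small test system and the tolerance are assumptions, not taken from the paper) implements the basic iteration for a square system.

    import numpy as np

    def newton_system(c, jac, x0, tol=1e-8, max_iter=50):
        """Successive linearization (1.2) for a square system c(x) = 0.

        c   : callable returning the residual vector c(x)
        jac : callable returning the Jacobian whose rows are grad c_i(x)^T
        """
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            r = c(x)
            if np.linalg.norm(r) <= tol:
                return x, k
            # Solve the linearized system (1.2): J(x_k) d = -c(x_k).
            # This can fail when (1.2) is inconsistent or J(x_k) is singular.
            d = np.linalg.solve(jac(x), -r)
            x = x + d
        return x, max_iter

    # Tiny well-behaved 2x2 illustration (not one of the paper's examples):
    c = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]])
    jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
    x_sol, iters = newton_system(c, jac, np.array([2.0, 0.0]))

Powell's example in Section 4 shows that the least squares formulation (1.3) can converge to a non-stationary point, which is the motivation for the constrained reformulation (1.4).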

2. A null space algorithm

There is extensive research on null space methods [2,4,6,9,10] for nonlinear programming. Here we attack nonlinear systems of equations with null space methods; in other words, we aim to solve (1.1) based on (1.4). At the beginning of the $k$th iteration, we hope to find a better point. In a null space technique, a step consists of two components, namely a null space step and a range space step. The null space (or normal) step is obtained by solving the subproblem

$$ \text{minimize } \|A_S^T h + c_S(x_k)\| \quad \text{subject to } \|h\| \le \xi\Delta, \qquad (2.1) $$

where $\xi \in (0, 1)$ is a relaxation parameter, $c_S(x)$ is the vector of $c_j(x)$ for $j \in S_2$, and $A_S = \nabla c_S(x_k)^T$. The range space (or vertical) step is based on the subproblem

$$ \text{minimize } W_k(h_k + Z_k v) = \tfrac{1}{2}(Z_k v + h_k)^T B_k (Z_k v + h_k) + g_k^T (Z_k v + h_k) \quad \text{subject to } \|v\| \le \Delta, \qquad (2.2) $$

where $A_k Z_k = 0$ and $Z_k^T Z_k = I$, $B_k$ is the Hessian or an approximate Hessian matrix, $g_k = \nabla\big(\sum_{i \in S_1} c_i(x)^2\big)\big|_{x = x_k}$, and $h_k$ is the solution of (2.1). In (2.1) and (2.2) the 2-norm or some other norm is used. We define the new point as

$$ x_k(\Delta) = x_k + Z_k v_k + h_k. $$

When a new point is accepted, the sets $S_1$ and $S_2$ are updated by some strategy. Thus, to solve (1.1) we solve (1.4) by a null space method, the step being obtained from (2.1) and (2.2). In order to decide whether to accept or reject the trial point, some criterion is needed. The following merit function, similar to that in [5], is used:

$$ m_k(x) = \sum_{i \in S_1} c_i(x)^2 + r_k \|c_S(x)\|, \qquad (2.3) $$

where $r_k > 0$ is a parameter. We define the actual reduction and the predicted reduction used to judge the new trial point as

$$ \mathrm{ared}(s_k, r_k) = -\big(m_k(x_k + s_k) - m_k(x_k)\big), \qquad (2.4) $$

$$ \mathrm{pred}(s_k, r_k) = -\tfrac{1}{2} s_k^T B_k s_k - g_k^T s_k + r_k \big(\|c_S(x_k)\| - \|A_S^T s_k + c_S(x_k)\|\big). \qquad (2.5) $$

Algorithm 1 (Null space algorithm for systems of equations)

Step 0. Choose $\Delta_0$, $x_0$, $\eta$, $\varepsilon$, $\xi$, $S_1^0$, $S_2^0$, where $\Delta_0 > 0$, $0 < \eta < 1$, $\Delta_{\max} \ge \Delta_0$; compute $g_0$, $c_i(x_0)$ $(i = 1, \ldots, m)$ and $A_0$. Set $r_0 = r_{-1} \ge 1$, $\hat{r} > 0$ and $k := 0$.
Step 1. If $\|c_k\| \le \varepsilon$, stop. Otherwise compute $Z_k$.
Step 2. If $c_i(x_k) = 0$ for all $i \in S_2$, set $h_k = 0$ and go to Step 3. Otherwise solve (2.1) to obtain $h_k$.
Step 3. Solve (2.2) to obtain $v_k$. If $v_k = 0$ and $h_k = 0$, stop.
Step 4. Set $s_k = h_k + Z_k v_k$. Compute the actual reduction $\mathrm{ared}(s_k, r_{k-1})$ and the predicted reduction $\mathrm{pred}(s_k, r_{k-1})$.

Step 5. If $\mathrm{pred}(s_k, r_{k-1}) \ge \frac{r_{k-1}}{2}\big(\|c_S(x_k)\| - \|c_S(x_k) + A_S^T s_k\|\big)$, set $r_k = r_{k-1}$. Otherwise set

$$ r_k = \frac{s_k^T B_k s_k + 2 g_k^T s_k}{\|c_S(x_k)\| - \|c_S(x_k) + A_S^T s_k\|} + \hat{r}. \qquad (2.6) $$

Step 6. If $\rho = \mathrm{ared}(s_k, r_k)/\mathrm{pred}(s_k, r_k) \ge \eta$, set $x_{k+1} = x_k + s_k$ and $\Delta_{\max} \ge \Delta_{k+1} \ge \Delta_k$; otherwise set $x_{k+1} = x_k$ and $\Delta_{k+1} \le \Delta_k$.
Step 7. Compute $g_{k+1}$, $B_{k+1}$, $A_{k+1}$ and $S_1^{k+1}$, $S_2^{k+1}$. Let $k := k + 1$ and go to Step 1.

In the above algorithm, $A_k Z_k = 0$ and $Z_k^T Z_k = I$. The sets $S_1^k$ and $S_2^k$ can be defined within the null space technique. For a null space algorithm, Yuan [11] points out that the null space step is a Newton step while the vertical step is a quasi-Newton step if a quasi-Newton method is used to update $B_k$; thus a one-fast one-slow pattern appears. Yuan [11] therefore uses two vertical components and one null space component in a step.

Now we discuss the sets $S_1$ and $S_2$. For convenience, order the residuals as

$$ |c_{i_1}| \le |c_{i_2}| \le \cdots \le |c_{i_m}|, \qquad (2.7) $$

where $\{i_1, i_2, \ldots, i_m\}$ is a permutation of $\{1, 2, \ldots, m\}$. In this work, we define

$$ S_1 = \{i_{t+1}, \ldots, i_m\}, \qquad (2.8) $$
$$ S_2 = \{i_1, i_2, \ldots, i_t\}, \qquad (2.9) $$

where $1 \le t < m$ is a constant integer. We choose $S_1$ and $S_2$ in this way so that the second order information of the equations with large residuals is fully exploited. Certainly, there are other choices of $S_1$ and $S_2$, for example

$$ S_1 = \{i_1, i_2, \ldots, i_t\}, \qquad (2.10) $$
$$ S_2 = \{i_{t+1}, \ldots, i_m\}, \qquad (2.11) $$

but we do not analyze the properties under (2.10) and (2.11). Hence, the results in Section 3 are based on (2.8) and (2.9).

There are several advantages in using optimization techniques to solve systems of equations. Firstly, this provides another way of detecting a local infeasibility point: when an iterate lies near a local infeasibility point, it may be very slow to identify it, but it can be found quickly with an optimization method because some second order information is exploited. Secondly, when the equations have rather different properties, this approach helps to balance them to a certain degree. Thirdly, this is a method without Lagrange multipliers, and Lagrange multipliers are sometimes problematic. Finally, because different methods are used for the null space step and the range space step, this is, in essence, the first time two different methods are used within one algorithm to solve (1.1).
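The partition (2.7)-(2.9), the merit function (2.3) and the predicted reduction (2.5) translate directly into code. The sketch below is a minimal illustration under the 2-norm (the names and the fixed value of t are assumptions made for the example, not the author's implementation); J_S2 plays the role of A_S^T, the Jacobian of the constraint group.

    import numpy as np

    def partition(c_vals, t):
        """(2.7)-(2.9): the t equations with the smallest |c_i| form S_2 (constraints);
        the remaining m - t equations form S_1 (the least squares objective)."""
        order = np.argsort(np.abs(c_vals))   # i_1, ..., i_m with |c_{i_1}| <= ... <= |c_{i_m}|
        return order[t:], order[:t]          # S_1, S_2

    def merit(c_vals, S1, S2, r):
        """(2.3): m_k(x) = sum_{i in S_1} c_i(x)^2 + r * ||c_{S_2}(x)||."""
        return np.sum(c_vals[S1] ** 2) + r * np.linalg.norm(c_vals[S2])

    def predicted_reduction(s, B, g, J_S2, c_S2, r):
        """(2.5): pred = -0.5 s^T B s - g^T s + r (||c_{S_2}|| - ||J_S2 s + c_{S_2}||)."""
        return (-0.5 * s @ B @ s - g @ s
                + r * (np.linalg.norm(c_S2) - np.linalg.norm(J_S2 @ s + c_S2)))

For instance, with residuals c = (0.1, -3.0, 0.5) and t = 1, partition places the first equation in S_2 and the other two in S_1, so the equation with the largest residual is handled through the objective.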
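The subproblems (2.1) and (2.2) can be solved in many ways; the following NumPy sketch is one illustrative possibility, not the author's implementation. It takes Z_k as an orthonormal null space basis of the constraint Jacobian obtained from an SVD, approximates the normal step by a least squares step scaled into the ball ||h|| <= xi*Delta, and takes the tangential step as the unconstrained minimizer of the reduced quadratic model scaled into the trust region (assuming Z^T B Z is positive definite).

    import numpy as np

    def null_space_basis(J_S2, tol=1e-12):
        """Orthonormal Z with J_S2 @ Z = 0 and Z^T Z = I (J_S2 corresponds to A_S^T)."""
        u, s, vt = np.linalg.svd(J_S2)
        rank = int(np.sum(s > tol * (s[0] if s.size else 1.0)))
        return vt[rank:].T            # columns span the null space of J_S2

    def normal_step(J_S2, c_S2, xi, delta):
        """Approximate solution of (2.1): reduce ||J_S2 h + c_S2|| within ||h|| <= xi*delta."""
        h, *_ = np.linalg.lstsq(J_S2, -c_S2, rcond=None)
        nh = np.linalg.norm(h)
        if nh > xi * delta:           # scale the step back into the relaxed trust region
            h *= xi * delta / nh
        return h

    def tangential_step(B, g, Z, h, delta):
        """Approximate solution of (2.2): minimize W_k(h + Z v) over ||v|| <= delta."""
        v = np.linalg.solve(Z.T @ B @ Z, -Z.T @ (B @ h + g))
        nv = np.linalg.norm(v)
        if nv > delta:
            v *= delta / nv
        return v

The trial step is then s_k = h + Z @ v, accepted or rejected with the ratio ared/pred exactly as in Steps 5 and 6 of Algorithm 1.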

3. Convergence properties of the null space algorithm

Null space methods have global convergence and local superlinear convergence properties, and we expect similar results for our algorithm. In order to establish global convergence we make the following assumptions, called the standard assumptions.

Assumption 1
(1) $\{x_k\} \subset X$ and $X$ is nonempty and bounded;
(2) $c_i(x)$ $(i = 1, 2, \ldots, m)$ are twice continuously differentiable on an open set containing $X$;
(3) the matrix sequence $\{B_k\}$ is bounded.

For convenience, we define the local infeasibility point of (1.1).

Definition 1. If $x^*$ is a local minimizer of (1.4) but not a solution of (1.1), we call $x^*$ a local infeasibility point of (1.1).

We now analyze the properties of Algorithm 1 under the above assumptions. When the algorithm terminates finitely, a solution of (1.1) or a local infeasibility point is obtained. It is apparent that the following result is true.

Theorem 1. If the algorithm terminates at Step 1, then a solution to (1.1) is found. If the algorithm terminates at Step 3, then a local infeasibility point is obtained.

Proof. The first part is obvious. When the algorithm terminates at Step 3, the final iterate is not a solution of (1.1), because it does not satisfy the condition of Step 1, but it satisfies the KKT conditions. Therefore it is a local minimizer of (1.4), and hence a local infeasibility point.

Of course, in our algorithm there is a chance of terminating at Step 3 because of the choice of $S_1$ and $S_2$; with an alternative choice of $S_1$ and $S_2$ this possibility is small.

When the algorithm does not terminate finitely, we investigate the sequence of iterates. We assume that the solutions of (2.1) and (2.2) satisfy certain descent conditions.

Assumption 2
(1) $A_S^T$ has full column rank, $Z_k$ is bounded for all $k$, and $A_S^+$ is uniformly bounded.
(2) The solutions of (2.1) and (2.2) satisfy Cauchy descent conditions, namely

$$ \|c_S\| - \|A_S^T h_k + c_S\| \ge \tau_1 \|c_S\| \min\{\|c_S\|, \xi\Delta_k\}, \qquad (3.1) $$

$$ W_k(h_k) - W_k(h_k + Z_k v_k) \ge \tau_2 \|Z_k^T g_k\| \min\{\tau_3 \|Z_k^T g_k\|, \Delta_k\}, \qquad (3.2) $$

where $\tau_1$, $\tau_2$, $\tau_3$ are positive constants independent of $k$.
(3) $\|h_k\| \le \tau_4 \|c_S\|$, where $\tau_4$ is a positive constant independent of $k$.

Condition (2) is a weak condition because the Cauchy point satisfies it. Further, many algorithms satisfy (3) under (1) (see [3]). Hence, the above assumptions are reasonable. We now estimate the predicted reduction.

Lemma 1. The sequence $r_k$ and the predicted reduction satisfy

$$ r_k \ge r_{k-1} \ge 1, $$
$$ \mathrm{pred}(s_k, r_k) \ge \frac{r_k}{2}\big(\|c_S\| - \|c_S + A_S^T s_k\|\big), \qquad (3.3) $$
$$ W_k(0) - W_k(h_k) \ge -\tau_5 \min\{\|c_S\|, \xi\Delta_k\}, \qquad (3.4) $$

where $\tau_5$ is a positive constant independent of $k$.

Proof. The first inequality and (3.3) follow directly from Algorithm 1. We now show (3.4). From Assumption 2 we have

$$ W_k(0) - W_k(h_k) = -g_k^T h_k - \tfrac{1}{2} h_k^T B_k h_k = -\big(g_k + \tfrac{1}{2} B_k h_k\big)^T h_k \ge -\|g_k + \tfrac{1}{2} B_k h_k\|\,\|h_k\| \ge -\tau_6 \|h_k\| \ge -\tau_6 \tau_4 \min\{\|c_S\|, \xi\Delta_k\} = -\tau_5 \min\{\|c_S\|, \xi\Delta_k\}, $$

where $\tau_6$ is a positive constant independent of $k$. This gives the result.

It is apparent that $r_k$ is a monotonically increasing sequence. Next we show that $r_k$ is bounded.

Lemma 2. Let $\{r_k\}$ be the sequence generated by our algorithm. Then $r_k = r$ for all sufficiently large $k$.

Proof. If $\mathrm{pred}(s_k, r_{k-1}) \ge \frac{r_{k-1}}{2}\big(\|c_S\| - \|c_S + A_S^T s_k\|\big)$, we have $r_k = r_{k-1}$. Otherwise, by definition (2.5) we have

$$ \frac{r_{k-1}}{2}\big(\|c_S\| - \|c_S + A_S^T s_k\|\big) \le \tfrac{1}{2} s_k^T B_k s_k + g_k^T s_k = W_k(s_k) - W_k(0) \le W_k(h_k) - W_k(0) \le \tau_5 \min\{\|c_S\|, \xi\Delta_k\}. \qquad (3.5) $$

Inequalities (3.5) and (3.1) show that $r_{k-1} \le 4\tau_5/\tau_1$. Thus $r_k$ is bounded above, and consequently $r_k = r$ for all sufficiently large $k$.

The convergence result is then as follows.

Theorem 2. Under Assumptions 1 and 2, assume that $\|\cdot\|$ is the 2-norm and $r \ge \|c_{S^k}(x_{k+1})\| + \|c_{S^{k+1}}(x_{k+1})\|$. Then

$$ \liminf_{k \to \infty} \big[\|c_S\| + \|Z_k^T g_k\|\big] = 0. \qquad (3.6) $$

Namely, the iterate sequence has an accumulation point which is a solution of (1.1) or a local infeasibility point.

Proof. If the result were false, there would exist $\varepsilon_0 > 0$ such that

$$ \|c_S\| + \|Z_k^T g_k\| > \varepsilon_0 \qquad (3.7) $$

for all $k$. We first prove the following statements:
(1) $m_k(x_k)$ is bounded for all $k$.
(2) There exists $k_0$ such that $m_k(x_{k+1}) \ge m_{k+1}(x_{k+1})$ and $r_k = r$ for all $k \ge k_0$.
(3) $\mathrm{pred}(s_k, r_k) \ge \tau_8 \varepsilon_0 \min\{\varepsilon_0, \Delta_k\}$ for all $k$.

(1) is obvious because of Assumption 1 and Lemma 2. We show (2). By Lemma 2 there exists $k_0$ such that $r_k = r$ for $k \ge k_0$. Moreover, writing $S^k$ for the set $S_2$ at iteration $k$,

$$ m_k(x_{k+1}) = \sum_{j=1}^m c_j(x_{k+1})^2 + r\|c_{S^k}(x_{k+1})\| - \sum_{j \in S^k} c_j(x_{k+1})^2, $$

while

$$ m_{k+1}(x_{k+1}) = \sum_{j=1}^m c_j(x_{k+1})^2 + r\|c_{S^{k+1}}(x_{k+1})\| - \sum_{j \in S^{k+1}} c_j(x_{k+1})^2. $$

According to the definition of $S_2$ we have $\|c_{S^{k+1}}(x_{k+1})\| \le \|c_{S^k}(x_{k+1})\|$. Thus,

$$ m_k(x_{k+1}) - m_{k+1}(x_{k+1}) = r\big(\|c_{S^k}(x_{k+1})\| - \|c_{S^{k+1}}(x_{k+1})\|\big) - \big(\|c_{S^k}(x_{k+1})\|^2 - \|c_{S^{k+1}}(x_{k+1})\|^2\big) = \big(r - \|c_{S^k}(x_{k+1})\| - \|c_{S^{k+1}}(x_{k+1})\|\big)\big(\|c_{S^k}(x_{k+1})\| - \|c_{S^{k+1}}(x_{k+1})\|\big) \ge 0. $$

Hence (2) is true, which means that only finitely many iterates violate $m_k(x_{k+1}) \ge m_{k+1}(x_{k+1})$.

From (3.1) and (3.3) we have

$$ \mathrm{pred}(s_k, r) \ge \frac{r}{4}\,\tau_1 \min\{\|c_S\|, \xi\Delta_k\}. \qquad (3.8) $$

On the other hand,

$$ \mathrm{pred}(s_k, r) = W(0) - W(s_k) + r\big(\|c_S(x_k)\| - \|c_S(x_k) + A_S^T s_k\|\big) = W(0) - W(h_k) + W(h_k) - W(s_k) + r\big(\|c_S(x_k)\| - \|c_S(x_k) + A_S^T s_k\|\big) \ge -\tau_5 \min\{\|c_S\|, \xi\Delta_k\} + \tau_2 \|Z_k^T g_k\| \min\{\tau_3 \|Z_k^T g_k\|, \Delta_k\}. \qquad (3.9) $$

Multiplying (3.8) and (3.9) by $8\tau_5/(8\tau_5 + r\tau_1)$ and $r\tau_1/(8\tau_5 + r\tau_1)$, respectively, and adding the resulting inequalities, we obtain

$$ \mathrm{pred}(s_k, r) \ge \frac{r\tau_1\tau_5}{8\tau_5 + r\tau_1} \min\{\|c_S\|, \xi\Delta_k\} + \frac{r\tau_1\tau_2}{8\tau_5 + r\tau_1} \|Z_k^T g_k\| \min\{\tau_3 \|Z_k^T g_k\|, \Delta_k\}. $$

Thus (3) follows from this inequality together with (3.7). We can now finish the proof of the theorem. For convenience denote $I = \{k \mid k \ge k_0 \text{ and } \rho = \mathrm{ared}(s_k, r_k)/\mathrm{pred}(s_k, r_k) \ge \eta\}$. Because of (1) and (2) above we have

$$ \infty > \sum_{k=1}^{\infty} \big(m_k(x_k) - m_k(x_{k+1})\big) \ge \sum_{k \in I} \big(m_k(x_k) - m_k(x_{k+1})\big) \ge \sum_{k \in I} \eta\,\mathrm{pred}(s_k, r_k) \ge \sum_{k \in I} \eta \tau_8 \varepsilon_0 \min\{\varepsilon_0, \Delta_k\}. \qquad (3.10) $$

Thus $\lim_{k \to \infty} \Delta_k = 0$. On the other hand, when $\Delta_k$ is small enough,

$$ \mathrm{ared}(s_k, r_k) = \mathrm{pred}(s_k, r_k) + O(\Delta_k^2), $$

so $\rho = \mathrm{ared}(s_k, r_k)/\mathrm{pred}(s_k, r_k) \ge \eta$ and hence $\Delta_{k+1} \ge \Delta_k$, which contradicts $\lim_{k \to \infty} \Delta_k = 0$. Therefore (3.7) cannot hold and the result is established.

In fact, $r \ge \|c_{S^k}(x_{k+1})\| + \|c_{S^{k+1}}(x_{k+1})\|$ is a moderate condition which is easy to satisfy.

We have established the global convergence of Algorithm 1. We do not prescribe a particular kind of step; we only require the Cauchy descent conditions (3.1) and (3.2). In some cases we obtain a solution of (1.1), but in other cases (1.1) has no solution, globally or locally. We now give a condition under which the KKT points of our problem are exactly the solutions of (1.1).

Theorem 3. If (1.1) has a solution and the gradients of the $c_i(x)$ are linearly independent, then a KKT point is exactly a global minimizer.

Proof. If $x^*$ is a KKT point of (1.4), then

$$ \sum_{i \in S_1} 2 c_i(x^*) \nabla_j c_i(x^*) + \sum_{i \in S_2} \lambda_i \nabla_j c_i(x^*) = 0 $$

for $j = 1, 2, \ldots, n$. Because the gradients $\nabla c_i(x^*)$ are linearly independent, $c_i(x^*) = 0$ for $i \in S_1$. Meanwhile, $c_{S_2}(x^*) = 0$ is obvious. Hence the result holds.

It is very difficult to analyze the case in which the gradients $\nabla c_i$ are dependent, but in some special situations our algorithm can avoid the bad outcome. For example, if $c_i(x) \ne 0$, $c_j(x) \ne 0$ and $\nabla c_i$, $\nabla c_j$ are dependent, the subproblem based on (1.2) is inconsistent; but if $c_i$ and $c_j$ belong to $S_1$ and $S_2$, respectively, we can still find a new point and avoid this bad situation.

Now we analyze the local properties of Algorithm 1. As in other analyses of local properties, we require the exact solutions of (2.1) and (2.2). Firstly, we make the following assumptions.

Assumption 3
(1) $\{x_k\} \to x^*$, where $x^*$ is a KKT point, namely $c_S(x^*) = 0$ and there exists a multiplier $\lambda^*$ such that

$$ g(x^*) - A(x^*)\lambda^* = 0. \qquad (3.11) $$

(2) $A(x^*)$ has full column rank.
(3) For any $d_k = Z_k v_k \in \mathbb{R}^n$, $\lambda_1 d_k^T d_k \le d_k^T B_k d_k \le \lambda_2 d_k^T d_k$, where $0 < \lambda_1 \le \lambda_2$ are constants, and

$$ \lim_{k \to \infty} \frac{\|Z_k^T (B_k - W^*) Z_k Z_k^T s_k\|}{\|s_k\|} = 0, \qquad (3.12) $$

where $(Z^*)^T W^* Z^*$ is positive definite and $W^* = \nabla^2 f(x^*) - \sum_{i \in S_2} (\lambda^*)_i \nabla^2 c_i(x^*)$, with $f(x) = \sum_{i \in S_1} c_i(x)^2$, is the Hessian of the Lagrangian function.

From Assumption 3, we assume that $\|s_k\| \to 0$ for our algorithm. Similarly to the results of Powell and Yuan [8], we have

Theorem 4. There exists an integer $k_0$ such that

$$ \Delta_k \ge \hat{\Delta} > 0 \qquad (3.13) $$

for all $k > k_0$.

Proof. When $\|s_k\|$ is small enough, a Taylor expansion gives

$$ \mathrm{pred}(s_k, r_k) = \mathrm{ared}(s_k, r_k) + O(\|s_k\|^2). \qquad (3.14) $$

When $\|Z_k v_k\| = O(\|h_k\|)$, then $\mathrm{pred}(s_k, r_k) = O(\|h_k\|) = O(\|s_k\|)$, so $\mathrm{ared}(s_k, r_k)/\mathrm{pred}(s_k, r_k) \ge \eta$ when $\|s_k\|$ is small enough. When $\|h_k\| = o(\|Z_k v_k\|)$, then $\|s_k\| = \|Z_k v_k + h_k\| \ge \lambda_3 \|Z_k v_k\|$, where $\lambda_3 > 0$ is a constant from Assumption 3, and

$$ \mathrm{pred}(s_k, r_k) = O\big(W(h_k) - W(h_k + Z_k v_k)\big) = O(\|Z_k v_k\|^2), $$

so again $\mathrm{ared}(s_k, r_k)/\mathrm{pred}(s_k, r_k) \ge \eta$ when $\|s_k\|$ is small enough. Thus, in both cases, $\Delta$ is not reduced once $\|s_k\|$ is small enough, which gives the result.

Theorem 5. Under Assumptions 1–3, assume that $S_1^k$ is unchanged for $k > k_0$. If (3.12) holds, then

$$ \lim_{k \to \infty} \frac{\|x_{k+1} - x^*\|}{\|x_{k-1} - x^*\|} = 0. \qquad (3.15) $$

If

$$ \lim_{k \to \infty} \frac{\|Z_k^T (B_k - W^*) s_k\|}{\|s_k\|} = 0, \qquad (3.16) $$

then

$$ \lim_{k \to \infty} \frac{\|x_{k+1} - x^*\|}{\|x_k - x^*\|} = 0. \qquad (3.17) $$

This is a direct consequence of the theory of null space algorithms. Thus the local superlinear convergence results are obtained; they are closely related to the update of the Hessian matrix. The assumption that $S_1$ and $S_2$ remain unchanged is reasonable to a certain degree because of the way $S_1$ and $S_2$ are chosen. Meanwhile, the local convergence rate in the null space is lower than that in the vertical space.

4. Concluding remarks and numerical results

We have given a null space method for solving (1.1): for the system of nonlinear equations, two different methods are used within one algorithm. Our analysis is based on the partition (2.8) and (2.9), and we keep the number of constraints constant. An interesting question is how to choose $t$; certainly, we can judge from the initial point and the initial residual values. In the extreme cases, if the number of constraints is $m$, our algorithm is Newton's method for (1.1), and if the number of constraints is $0$, it is the least squares approach.

In fact, the convergence result remains true whenever

$$ m_k(x_{k+1}) \ge m_{k+1}(x_{k+1}) \qquad (4.1) $$

holds for $r_k = r$. If we change the number of constraints in a way that preserves (4.1), the same results are achieved. For example,

$$ S_1 = \{i_{t_k+1}, \ldots, i_m\}, \qquad (4.2) $$
$$ S_2 = \{i_1, i_2, \ldots, i_{t_k}\}, \qquad (4.3) $$

with $m \ge t_k \ge t_{k+1} \ge 1$. We can also use other norms, such as the 1-norm and the $\infty$-norm. Then the problem becomes

$$ \text{minimize } \|c_{S_1}(x)\| \quad \text{subject to } c_j(x) = 0, \; j \in S_2. \qquad (4.4) $$

If $S_1$ and $S_2$ are based on (2.8) and (2.9), the same results as those in Section 3 can be obtained when $r \ge \|c_{S^k}(x_{k+1})\| + \|c_{S^{k+1}}(x_{k+1})\|$. For $B_k$, we can use a one-sided or a two-sided reduced Hessian update. This is an important issue, but we do not discuss it in detail here. Our algorithm is very flexible because we only require Assumptions 1 and 2 to be satisfied; we do not restrict it to specific algorithmic steps, and no specific method is prescribed for the choice and update of the initial trust region radius.

We now give two examples for our algorithm. The first example comes from [7]; the least squares approach converges to a non-stationary point on it. We take $t = 1$.

Example 1. Consider the problem of finding a solution of the nonlinear system

$$ F(x, y) = \begin{pmatrix} x \\ 10x/(x + 0.1) + 2y^2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \qquad (4.5) $$

The unique solution is $(x^*, y^*) = (0, 0)$. It has been proved in [7] that, starting from $(x_0, y_0) = (3, 1)$, the iterates generated by the least squares algorithm converge to $z = (1.8016, 0.0000)$, which is not a stationary point. Using our algorithm, we obtain a sequence of points converging to $(x^*, y^*)$. The tolerance $\varepsilon$ is $1.0\mathrm{e}{-6}$, the number of iterations is 10, and the final residual is $\|F\| = 5.5134\mathrm{e}{-7}$.

The following example appears in [1]; Newton's method converges to a non-stationary point on it. We take $t = 1$.

Example 2. Consider the problem of finding a solution of the nonlinear system

$$ F(x, y) = \begin{pmatrix} x^2 + y^2 \\ (x - 1)y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \qquad (4.6) $$

The unique solution is $(x^*, y^*) = (0, 0)$. Define the line $C = \{(1, y) : y \in \mathbb{R}\}$. If the starting point $(x_0, y_0) \in C$, then the Newton iterates generated by (1.2) are confined to $C$ (see [1]). Meanwhile, if we choose $S_1$ based on (2.8) and (2.9), the same situation as in [1] occurs. But if we choose $S_1$ and $S_2$ by (2.10) and (2.11) initially and then switch to (2.8) and (2.9), the solution is obtained. The number of function evaluations is 8 and the number of gradient evaluations is 6 when the initial point is $(1.0, 2.0)$. If the initial point is $(1.0, 0.0)$, the methods for (1.1) based on (1.2) fail, but with the above technique the solution is obtained with NF = 4 and NG = 2.
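For reference, the two test systems can be written down directly. The sketch below (an illustration added here, not the author's test code) defines the residuals and Jacobians of (4.5) and (4.6) and checks that (0, 0) is a root of (4.5) while the point z = (1.8016, 0.0000) reported in [7] is not.

    import numpy as np

    def F1(p):
        """Powell's example (4.5): F(x, y) = (x, 10x/(x + 0.1) + 2 y^2)."""
        x, y = p
        return np.array([x, 10.0 * x / (x + 0.1) + 2.0 * y ** 2])

    def J1(p):
        x, y = p
        return np.array([[1.0, 0.0],
                         [1.0 / (x + 0.1) ** 2, 4.0 * y]])

    def F2(p):
        """Example (4.6) from [1]: F(x, y) = (x^2 + y^2, (x - 1) y)."""
        x, y = p
        return np.array([x ** 2 + y ** 2, (x - 1.0) * y])

    def J2(p):
        x, y = p
        return np.array([[2.0 * x, 2.0 * y],
                         [y, x - 1.0]])

    print(np.linalg.norm(F1([0.0, 0.0])))      # 0.0: (0, 0) solves (4.5)
    print(np.linalg.norm(F1([1.8016, 0.0])))   # clearly nonzero: z from [7] is not a root
    print(np.linalg.norm(F2([1.0, 2.0])))      # residual at the starting point (1.0, 2.0) on the line C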

Certainly, if only finitely many steps are based on (2.10) and (2.11), the convergence results still hold by the same arguments as above. It would, however, be interesting to investigate our algorithm when $S_1$ and $S_2$ are given by (2.10) and (2.11) throughout.

Acknowledgement

The author thanks Prof. Yuan Ya-xiang for his beneficial advice, which improved this paper greatly.

References

[1] R.H. Byrd, M. Marazzi, J. Nocedal, On the convergence of Newton iterations to non-stationary points, Report OTC 2001/01, Optimization Technology Center, Northwestern University, Evanston, IL, 2001.
[2] J.E. Dennis Jr., M. El-Alem, M.C. Maciel, A global convergence theory for general trust-region-based algorithms for equality constrained optimization, SIAM J. Optim. 7 (1997).
[3] J.E. Dennis Jr., M. Heinkenschloss, L.N. Vicente, Trust-region interior-point SQP algorithms for a class of nonlinear programming problems, SIAM J. Control Optim. 36 (1998).
[4] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[5] R. Fletcher, Practical Methods of Optimization, Vol. 2: Constrained Optimization, John Wiley and Sons, New York and Toronto, 1981.
[6] M. Lalee, J. Nocedal, T.D. Plantenga, On the implementation of an algorithm for large-scale equality constrained optimization, SIAM J. Optim. 8 (3) (1998).
[7] M.J.D. Powell, A hybrid method for nonlinear equations, in: P. Rabinowitz (Ed.), Numerical Methods for Nonlinear Algebraic Equations, Gordon and Breach, London, 1970.
[8] M.J.D. Powell, Y. Yuan, A recursive quadratic programming algorithm for equality constrained optimization, Math. Prog. 35 (1986).
[9] A. Vardi, A trust region algorithm for equality constrained minimization: convergence and implementation, SIAM J. Numer. Anal. 22 (3) (1985).
[10] Y. Yuan, An only 2-step Q-superlinear convergence example for some algorithms that use reduced Hessian approximations, Math. Prog. 32 (1985).
[11] Y. Yuan, A null space algorithm for constrained optimization, in: Z.C. Shi et al. (Eds.), Advances in Scientific Computing, Science Press, Beijing, 2001.
