
Finite Difference Smoothing Solution of Nonsmooth Constrained Optimal Control Problems

Xiaojun Chen
Department of Mathematical System Science, Hirosaki University, Hirosaki, Japan. This work was supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture of Japan.

22 April 2004

Abstract. The finite difference method and smoothing approximation for a nonsmooth constrained optimal control problem are considered. Convergence of solutions of the discretized smoothing optimal control problems is proved. Error estimates for the finite difference smoothing solution are given. Numerical examples are used to test a smoothing SQP method for solving the nonsmooth constrained optimal control problem.

Key words. Optimal control, nondifferentiability, finite difference method, smoothing approximation

AMS subject classifications. 49K20, 35J25

1 Introduction

Recently significant progress has been made in the study of the elliptic optimal control problem
\[
\begin{aligned}
\text{minimize}\quad & \frac12\int_\Omega (y-z_d)^2\,dx+\frac{\alpha}{2}\int_\Omega (u-u_d)^2\,dx\\
\text{subject to}\quad & -\Delta y=f(x,y,u)\ \ \text{in }\Omega,\qquad y=0\ \ \text{on }\Gamma,\qquad u\in\mathcal U,
\end{aligned}
\tag{1.1}
\]
where $z_d, u_d\in L^2(\Omega)$, $f\in C(\bar\Omega\times R^2)$, $\alpha>0$ is a constant, $\Omega$ is an open, bounded convex subset of $R^N$ ($N\le 3$) with smooth boundary $\Gamma$, and
\[
\mathcal U=\{u\in L^2(\Omega)\mid u(x)\le q(x)\ \text{a.e. in }\Omega\},\qquad q\in L^\infty(\Omega).
\]
If $f$ is linear with respect to the second and third variables, (1.1) is equivalent to its first order optimality system. Based on this equivalence, the primal-dual active set strategy [2] can solve problem (1.1) efficiently. Moreover, Hintermüller, Ito and

Kunisch [14] proved that the optimality system is slantly differentiable [7] and that the primal-dual active set strategy is a specific semismooth Newton method for the first order optimality system. The nondifferentiable term in the optimality system arises from the constraint $u\in\mathcal U$. Without this constraint, the optimality system is a linear system. For that case, Borzì, Kunisch and Kwak [3] gave convergence properties of the finite difference multigrid solution of the optimality system. For the case where $f$ is nonlinear with respect to the second and third variables, the first order and second order optimality systems have been studied in order to obtain error estimates for discretization approximations and convergence theory for sequential quadratic programming algorithms; see for instance [4] and the references therein. Most papers assume that $f$ is of class $C^2$ with respect to the second and third variables. However, this assumption does not hold for the following optimal control problem
\[
\begin{aligned}
\text{minimize}\quad & \frac12\int_\Omega (y-z_d)^2\,dx+\frac{\alpha}{2}\int_\Omega (u-u_d)^2\,dx\\
\text{subject to}\quad & -\Delta y+\gamma\max(0,y)=u+g\ \ \text{in }\Omega,\qquad y=0\ \ \text{on }\Gamma,\qquad u\in\mathcal U,
\end{aligned}
\tag{1.2}
\]
where $\gamma>0$ is a constant and $g\in C(\bar\Omega)$. Nonsmooth elliptic equations of this type arise in the equilibrium analysis of confined MHD (magnetohydrodynamics) plasmas [6, 7, 8], in thin stretched membranes partially covered with water [5], and in reaction-diffusion problems [1].

The discretized nonsmooth constrained optimal control problem derived from a finite difference approximation, or from a finite element approximation of (1.2) with mass lumping, has the form
\[
\begin{aligned}
\text{minimize}\quad & \frac12 (Y-Z_d)^T H(Y-Z_d)+\frac{\alpha}{2}(U-U_d)^T M(U-U_d)\\
\text{subject to}\quad & AY+D\max(0,Y)=NU+c,\qquad U\le b.
\end{aligned}
\tag{1.3}
\]
Here $Z_d, c\in R^n$, $U_d, b\in R^m$, $H\in R^{n\times n}$, $M\in R^{m\times m}$, $A\in R^{n\times n}$, $D\in R^{n\times n}$, $N\in R^{n\times m}$, and $\max(0,\cdot)$ is understood componentwise. Moreover, $H$, $M$, $A$, $D$ are symmetric positive definite matrices, and $D=\mathrm{diag}(d_1,\dots,d_n)$ is a diagonal matrix. It should be noted that for the discretized nonsmooth constrained optimal control problem, a solution of the first order optimality system is not necessarily a solution of the optimal control problem. Also, a solution of the optimal control problem is not necessarily a solution of the first order optimality system; see Example 2.1 in the

next section. In [5], a sufficient condition for the two problems to be equivalent is given.

In this paper, we consider the following smoothing approximation of (1.3):
\[
\begin{aligned}
\text{minimize}\quad & \frac12 (Y-Z_d)^T H(Y-Z_d)+\frac{\alpha}{2}(U-U_d)^T M(U-U_d)\\
\text{subject to}\quad & AY+\Phi_\mu(Y)=NU+c,\qquad U\le b,
\end{aligned}
\tag{1.4}
\]
where $\Phi_\mu: R^n\to R^n$ is defined by a smoothing function $\varphi_\mu: R\to R$ as
\[
\Phi_\mu(Y)=D\bigl(\varphi_\mu(Y_1),\varphi_\mu(Y_2),\dots,\varphi_\mu(Y_n)\bigr)^T.
\]
Here $\mu\ge 0$ is called a smoothing parameter. For $\mu>0$, $\varphi_\mu$ is continuously differentiable and its derivative satisfies $0\le\varphi_\mu'(t)\le 1$. Moreover, it holds that
\[
|\varphi_\mu(t)-\max(0,t)|\le\kappa\mu
\]
with a constant $\kappa>0$, for all $t$. Many smoothing functions have these properties. In this paper, we simply choose [12]
\[
\varphi_\mu(t)=\frac12\bigl(t+\sqrt{t^2+\mu^2}\bigr).
\]
It is not difficult to verify that for a fixed $\mu>0$, $\varphi_\mu$ is continuously differentiable and
\[
\varphi_\mu'(t)=\frac12\Bigl(1+\frac{t}{\sqrt{t^2+\mu^2}}\Bigr)>0.
\]
Moreover, for any $t$ and $\mu>0$,
\[
|\varphi_\mu(t)-\max(0,t)|\le\frac{\mu}{2},
\]
and
\[
\varphi^o(t):=\lim_{\mu\downarrow 0}\varphi_\mu'(t)=
\begin{cases}
1 & \text{if } t>0,\\
\tfrac12 & \text{if } t=0,\\
0 & \text{if } t<0.
\end{cases}
\]
It was shown in [8] that $\varphi^o(t)$ is an element of the subgradient [9] of $\max(0,\cdot)$. From these properties of $\varphi_\mu$, the smoothing approximation $\Phi_\mu$ is continuously differentiable for $\mu>0$ and satisfies
\[
\|\Phi_\mu(Y)-D\max(0,Y)\|\le\frac{\mu}{2}\sqrt n\,\|D\|
\tag{1.5}
\]
for $\mu\ge 0$.
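For readers who want to experiment numerically, the following MATLAB sketch (not part of the original paper; all data are made up for illustration) evaluates the smoothing function $\varphi_\mu$, its derivative, the map $\Phi_\mu$, and checks the bounds $|\varphi_\mu(t)-\max(0,t)|\le\mu/2$ and (1.5) on sample data.

```matlab
% Illustrative sketch: the smoothing function phi_mu, its derivative, Phi_mu,
% and the bounds |phi_mu(t) - max(0,t)| <= mu/2 and (1.5). Data are made up.
mu  = 1e-2;
t   = linspace(-1, 1, 201)';
phi  = 0.5*(t + sqrt(t.^2 + mu^2));          % phi_mu(t)
dphi = 0.5*(1 + t./sqrt(t.^2 + mu^2));       % phi_mu'(t), lies in (0,1)
fprintf('max |phi_mu - max(0,t)| = %.3e (bound mu/2 = %.3e)\n', ...
        max(abs(phi - max(0, t))), mu/2);

% Phi_mu(Y) = D*(phi_mu(Y_1), ..., phi_mu(Y_n))^T for a sample D and Y
D = diag([1 2 3]);
Y = [-0.3; 0.0; 0.7];
PhiY = D * (0.5*(Y + sqrt(Y.^2 + mu^2)));
gap  = norm(PhiY - D*max(0, Y));             % left-hand side of (1.5)
fprintf('||Phi_mu(Y) - D max(0,Y)|| = %.3e <= %.3e\n', ...
        gap, mu/2*sqrt(length(Y))*norm(D));
```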

Here $\|\cdot\|$ denotes the Euclidean norm. In particular, for $\mu=0$, $\Phi_0(Y)=D\max(0,Y)$. Moreover, the matrix
\[
\Phi^o(Y):=D\,\mathrm{diag}\bigl(\varphi^o(Y_1),\varphi^o(Y_2),\dots,\varphi^o(Y_n)\bigr)
\]
is an element of the Clarke generalized Jacobian [9] of $D\max(0,Y)$.

In this paper we investigate the convergence of the discretized smoothing problem (1.4) derived from the five-point difference method and the smoothing approximation. In Section 2, we describe the finite difference discretization of the optimal control problem. We prove that a sequence of optimal solutions of the discretized smoothing problem (1.4) converges to a solution of the discretized nonsmooth optimal control problem (1.3) as $\mu\to 0$. Moreover, under certain conditions, we prove that the distance between the solution sets of (1.3) and (1.4) is $O(\mu)$. We also show that the difference between the optimal objective value of (1.3) and the optimal objective value of the continuous problem (1.2) is $O(h^\sigma)$, where $0<\sigma<1$. In Section 3 we use numerical examples to show that the nonsmooth optimal control problem can be solved efficiently by a finite difference smoothing SQP method.

2 Convergence of the smoothing discretized problem

In this section, we investigate convergence of the smoothing discretized optimal control problem (1.4) derived from the smoothing approximation and the five-point finite difference method with $\Omega=(0,1)\times(0,1)$. To simplify the discussion, we use the same mesh size for the discretization of $y$ and $u$. Let $\nu$ be a positive integer. Set $h=1/(\nu+1)$. Denote the set of grid points by
\[
\Omega_h=\{(ih,jh)\mid i,j=1,\dots,\nu\}.
\]
The Dirichlet problem in the constraints of (1.2) is approximated by the five-point finite difference approximation with uniform mesh size $h$. For grid functions $W$ and $V$ defined on $\Omega_h$, we denote the discrete $L^2_h$ scalar product by
\[
(W,V)_{L^2_h}=h^2\sum_{i,j=1}^{\nu}W(ih,jh)\,V(ih,jh).
\]
Let $P$ be the restriction operator from $L^2(\Omega)$ to $L^2_h(\Omega_h)$. Let $Z_d=Pz_d$, $U_d=Pu_d$ and $b=Pq$. Then we obtain the discretized optimal control problem (1.3). Denote the objective functions of the continuous and the discretized problems by
\[
J(y,u)=\frac12\int_\Omega (y-z_d)^2\,dx+\frac{\alpha}{2}\int_\Omega (u-u_d)^2\,dx
\]
and
\[
J_h(Y,U)=\frac12 (Y-Z_d)^T H(Y-Z_d)+\frac{\alpha}{2}(U-U_d)^T M(U-U_d),
\]
respectively.
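As a concrete illustration of this discretization (a minimal sketch with invented names, not code from the paper), the five-point matrix on the unit square, the restriction of a given function to the grid, and the discrete scalar product can be assembled in MATLAB as follows.

```matlab
% Minimal sketch of the five-point discretization on Omega = (0,1)x(0,1):
% mesh size h = 1/(nu+1), interior grid Omega_h, matrix A, restriction P z,
% and the discrete L^2_h scalar product. All names are illustrative.
nu = 20;  h = 1/(nu + 1);  n = nu^2;
e  = ones(nu, 1);
T  = spdiags([-e 2*e -e], -1:1, nu, nu);       % 1-D second difference
I  = speye(nu);
A  = (kron(I, T) + kron(T, I)) / h^2;          % five-point Laplacian (an M-matrix)
[x1, x2] = meshgrid(h*(1:nu), h*(1:nu));       % interior grid points
zd = @(x1, x2) sin(pi*x1).*sin(pi*x2);         % a sample z_d, for illustration only
Zd = zd(x1(:), x2(:));                         % Z_d = P z_d
l2h = @(W, V) h^2 * (W.'*V);                   % (W,V)_{L^2_h}
```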

Let $S$, $S_h$ and $S_h^\mu$ denote the solution sets of (1.2), (1.3) and (1.4), respectively.

2.1 Problems (1.4) and (1.3)

For a fixed $h$, we investigate the limiting behaviour of optimal solutions of (1.4) as the smoothing parameter $\mu\to 0$.

Theorem 2.1 For any mesh size $h$ and smoothing parameter $\mu\ge 0$, the solution sets $S_h$ and $S_h^\mu$ are nonempty and bounded. Moreover, there exists a constant $\beta>0$ such that
\[
S_h^\mu\subseteq\{(Y,U)\mid J_h(Y,U)\le\beta\}
\tag{2.1}
\]
for all $\mu\in[0,1]$.

Proof: First we observe that for fixed $U\le b$ and $\mu\ge 0$, the system of equations $AY+\Phi_\mu(Y)=NU+c$ is equivalent to the strongly convex unconstrained minimization problem
\[
\min_Y\ \frac12 Y^TAY+\sum_{i=1}^n d_i\int_0^{Y_i}\varphi_\mu(t)\,dt-Y^T(NU+c).
\]
By the strong convexity, this problem has a unique solution $Y$. Hence the feasible sets of (1.3) and (1.4) are nonempty. Moreover, the objective function $J_h$ of (1.3) and (1.4) is strongly convex, which implies that the solution sets of (1.3) and (1.4) are nonempty and bounded.

Now we prove (2.1). In (1.3), the constraint $AY+D\max(0,Y)=NU+c$ can be written as
\[
(A+DE(Y))Y=NU+c,
\]
where $E(Y)$ is a diagonal matrix whose diagonal elements are
\[
E_{ii}(Y)=
\begin{cases}
1 & \text{if } Y_i>0,\\
0 & \text{if } Y_i\le 0.
\end{cases}
\]
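The equivalence used at the start of this proof also suggests a practical way to evaluate the smoothed state for a fixed control: since $A+\Phi_\mu'(Y)$ is an M-matrix plus a nonnegative diagonal matrix, the smoothed state equation can be solved, for instance, by Newton's method. The sketch below is only an illustration of this observation, not the author's algorithm; it reuses $A$ and $n$ from the previous sketch and invents $D$, $N$, $U$, $c$.

```matlab
% Illustrative sketch: solve A*Y + Phi_mu(Y) = N*U + c for fixed U by Newton's
% method. A and n are taken from the previous sketch; D, N, U, c are made up.
mu = 1e-6;
D  = speye(n);   N = speye(n);               % illustrative choices only
U  = ones(n, 1); c = zeros(n, 1);
phi  = @(Y) 0.5*(Y + sqrt(Y.^2 + mu^2));     % phi_mu, applied componentwise
dphi = @(Y) 0.5*(1 + Y./sqrt(Y.^2 + mu^2));  % phi_mu'
Y = zeros(n, 1);  rhs = N*U + c;
for iter = 1:50
    F = A*Y + D*phi(Y) - rhs;                % residual of the smoothed state equation
    if norm(F) <= 1e-12, break; end
    J = A + D*spdiags(dphi(Y), 0, n, n);     % Jacobian: M-matrix + nonnegative diagonal
    Y = Y - J\F;
end
```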

Since $A$ is an M-matrix, it follows from the structure of $DE(Y)$ that $A+DE(Y)$ is also an M-matrix and
\[
\|Y\|=\|(A+DE(Y))^{-1}(NU+c)\|\le\|A^{-1}\|\bigl(\|N\|\|U\|+\|c\|\bigr)
\tag{2.2}
\]
(see Theorem 2.4 in [16]). Moreover, for $\mu>0$, let $Y_\mu$ satisfy
\[
AY_\mu+\Phi_\mu(Y_\mu)=NU+c.
\]
By the mean value theorem, we find
\[
\begin{aligned}
0&=A(Y_\mu-Y)+\Phi_\mu(Y_\mu)-D\max(0,Y)\\
&=A(Y_\mu-Y)+\Phi_\mu(Y_\mu)-\Phi_\mu(Y)+\Phi_\mu(Y)-D\max(0,Y)\\
&=\Bigl(A+\tfrac12\tilde D\Bigr)(Y_\mu-Y)+\frac{\mu}{2}\hat D,
\end{aligned}
\]
where
\[
\tilde D=D\,\mathrm{diag}\Bigl(1+\frac{\tilde Y_1}{\sqrt{\tilde Y_1^2+\mu^2}},\dots,1+\frac{\tilde Y_n}{\sqrt{\tilde Y_n^2+\mu^2}}\Bigr)
\quad\text{and}\quad
\hat D=D\Bigl(\frac{\hat\mu}{\sqrt{Y_1^2+\hat\mu^2}},\dots,\frac{\hat\mu}{\sqrt{Y_n^2+\hat\mu^2}}\Bigr)^T.
\]
Here $\hat\mu\in(0,\mu]$ and $\tilde Y_i$ lies between $(Y_\mu)_i$ and $Y_i$. Using that all diagonal elements of $\tilde D$ are positive and that
\[
0<\frac{\hat\mu}{\sqrt{Y_i^2+\hat\mu^2}}\le 1,\qquad i=1,\dots,n,
\]
we obtain
\[
\|Y_\mu-Y\|\le\frac{\mu}{2}\Bigl\|\Bigl(A+\tfrac12\tilde D\Bigr)^{-1}\hat D\Bigr\|\le\frac{\mu}{2}\|A^{-1}\|\,\|D\|.
\tag{2.3}
\]
Therefore, using (2.2), we find that for $\mu\in(0,1]$,
\[
\|Y_\mu\|\le\frac{\mu}{2}\|A^{-1}\|\,\|D\|+\|Y\|\le\|A^{-1}\|\Bigl(\frac12\|D\|+\|N\|\|U\|+\|c\|\Bigr).
\]
Let $(Y^*,U^*)\in S_h$ and $(Y_\mu^*,U_\mu^*)\in S_h^\mu$. Let $Y_\mu$ be the solution of
\[
AY_\mu+\Phi_\mu(Y_\mu)=NU^*+c,
\]

that is, $(Y_\mu,U^*)$ is a feasible point of (1.4). By the argument above, we have
\[
\begin{aligned}
J_h(Y_\mu^*,U_\mu^*)&\le J_h(Y_\mu,U^*)\\
&\le\frac12\|H\|\,\|Y_\mu-Z_d\|^2+\frac{\alpha}{2}\|M\|\,\|U^*-U_d\|^2\\
&\le\frac12\|H\|\bigl(\|Y_\mu\|+\|Z_d\|\bigr)^2+\frac{\alpha}{2}\|M\|\bigl(\|U^*\|+\|U_d\|\bigr)^2\\
&\le\frac12\|H\|\Bigl(\|A^{-1}\|\Bigl(\frac12\|D\|+\|N\|\|U^*\|+\|c\|\Bigr)+\|Z_d\|\Bigr)^2+\frac{\alpha}{2}\|M\|\bigl(\|U^*\|+\|U_d\|\bigr)^2.
\end{aligned}
\]
In the first part of the proof we showed that the solution set $S_h$ is bounded, that is, $\|U^*\|$ is bounded by a positive constant. Hence there exists a constant $\beta>0$ such that
\[
\frac12\|H\|\Bigl(\|A^{-1}\|\Bigl(\frac12\|D\|+\|N\|\|U^*\|+\|c\|\Bigr)+\|Z_d\|\Bigr)^2+\frac{\alpha}{2}\|M\|\bigl(\|U^*\|+\|U_d\|\bigr)^2\le\beta.
\]
This completes the proof.

Theorem 2.2 Letting $\mu\to 0$, any accumulation point of a sequence of optimal solutions of (1.4) is an optimal solution of (1.3), that is,
\[
\Bigl\{\lim_{\mu\downarrow 0}(Y_\mu,U_\mu)\ \Big|\ (Y_\mu,U_\mu)\in S_h^\mu\Bigr\}\subseteq S_h.
\]

Proof: Let $(Y_\mu^*,U_\mu^*)\in S_h^\mu$, $(Y^*,U^*)\in S_h$ and let $(Y_\mu,U^*)$ satisfy
\[
AY_\mu+\Phi_\mu(Y_\mu)=NU^*+c.
\]
Then the following inequality holds:
\[
J_h(Y_\mu^*,U_\mu^*)\le J_h(Y_\mu,U^*).
\tag{2.4}
\]
Using the Taylor expansion, we find
\[
J_h(Y_\mu,U^*)=J_h(Y^*,U^*)+(Y_\mu-Y^*)^TH(Y^*-Z_d)+\frac12(Y_\mu-Y^*)^TH(Y_\mu-Y^*).
\]
Following the argument that led to (2.3), we have
\[
\|Y_\mu-Y^*\|\le\frac{\mu}{2}\|A^{-1}\|\,\|D\|.
\]
Since the solution set of (1.3) is bounded, there is a constant $\hat\beta>0$ such that for $\mu\le 2/(\|A^{-1}\|\,\|D\|)$,
\[
(Y_\mu-Y^*)^TH(Y^*-Z_d)+\frac12(Y_\mu-Y^*)^TH(Y_\mu-Y^*)\le\hat\beta\bigl(\|Y_\mu-Y^*\|+\|Y_\mu-Y^*\|^2\bigr)\le\hat\beta\,\mu\,\|A^{-1}\|\,\|D\|.
\]

Combining this with (2.4), the Taylor expansion gives
\[
J_h(Y_\mu^*,U_\mu^*)\le J_h(Y^*,U^*)+\hat\beta\,\mu\,\|A^{-1}\|\,\|D\|.
\tag{2.5}
\]
Moreover, from Theorem 2.1, there is a bounded closed set $L$ such that
\[
S_h^\mu\subseteq L\quad\text{for all }\mu\in[0,1].
\]
Hence, without loss of generality, we may assume that $(Y_\mu^*,U_\mu^*)\to(\bar Y,\bar U)\in L$ as $\mu\to 0$. Now we show that $(\bar Y,\bar U)$ is a feasible point of (1.3). Obviously $\bar U\le b$, as $U_\mu^*\le b$ for all $\mu>0$. The other constraint also holds, since
\[
\begin{aligned}
\|A\bar Y+D\max(0,\bar Y)-N\bar U-c\|
&=\lim_{\mu\downarrow 0}\|AY_\mu^*+D\max(0,\bar Y)-N\bar U-c\|\\
&=\lim_{\mu\downarrow 0}\|NU_\mu^*-\Phi_\mu(Y_\mu^*)+D\max(0,\bar Y)-N\bar U\|\\
&\le\lim_{\mu\downarrow 0}\|D\max(0,\bar Y)-\Phi_\mu(Y_\mu^*)\|\\
&\le\lim_{\mu\downarrow 0}\bigl(\|D\|\,\|\max(0,\bar Y)-\max(0,Y_\mu^*)\|+\|D\max(0,Y_\mu^*)-\Phi_\mu(Y_\mu^*)\|\bigr)\\
&\le\|D\|\Bigl(\lim_{\mu\downarrow 0}\|V_\mu\|\,\|\bar Y-Y_\mu^*\|+\lim_{\mu\downarrow 0}\frac{\mu}{2}\sqrt n\Bigr)=0.
\end{aligned}
\]
Here $V_\mu$ is a diagonal matrix whose diagonal elements are
\[
(V_\mu)_{ii}=
\begin{cases}
0 & \text{if } (\bar Y-Y_\mu^*)_i=0,\\[4pt]
\dfrac{\max(0,\bar Y)_i-\max(0,Y_\mu^*)_i}{(\bar Y-Y_\mu^*)_i} & \text{otherwise}.
\end{cases}
\]
Obviously, we have $0\le (V_\mu)_{ii}\le 1$. Now letting $\mu\to 0$ in (2.5), we get
\[
J_h(\bar Y,\bar U)\le J_h(Y^*,U^*).
\]
Hence $(\bar Y,\bar U)$ is a solution of (1.3).

To estimate the distance between the two solution sets $S_h^\mu$ and $S_h$, we have to consider the first order optimality system of (1.3). We say that $(Y,U)$ satisfies the first order conditions of (1.3), or that $(Y,U)$ is a KKT (Karush-Kuhn-Tucker) point of (1.3), if together with some $(s,t)\in R^n\times R^m$ it satisfies
\[
\begin{pmatrix}
H(Y-Z_d)+As+DE(Y)s\\
M(U-U_d)-N^Ts+t\\
AY+D\max(0,Y)-NU-c\\
\min(t,\,b-U)
\end{pmatrix}=0.
\tag{2.6}
\]

The vectors $s\in R^n$ and $t\in R^m$ are referred to as Lagrange multipliers. It was shown in [5] that for any $U$, the system of nonsmooth equations
\[
AY+D\max(0,Y)-NU-c=0
\]
has a unique solution, and that it defines a solution function $Y(U)$. Moreover, (2.6) is equivalent to
\[
\begin{pmatrix}
\bigl((A+DE(Y(U)))^{-1}N\bigr)^TH(Y(U)-Z_d)+M(U-U_d)+t\\
\min(t,\,b-U)
\end{pmatrix}=0.
\tag{2.7}
\]
However, for the discretized nonsmooth constrained optimal control problem (1.3), a KKT point of (1.3) is not necessarily a solution of the optimal control problem (1.3). Also, a solution of the optimal control problem (1.3) is not necessarily a KKT point of (1.3).

Example 2.1 Let $n=2$, $m=1$, $M=1$, $\alpha=1$, $b=1$, $H=D=I$, $c=0$,
\[
A=\begin{pmatrix}2&-1\\-1&2\end{pmatrix}
\quad\text{and}\quad
N=\begin{pmatrix}3\\-1\end{pmatrix}.
\]

1. For $U_d=1$ and $Z_d=(0,-3)^T$, $(\tilde Y,\tilde U)=(0,0,0)^T$ is a KKT point of (1.3), but $(\tilde Y,\tilde U)$ is not a solution of (1.3).

2. For $U_d=0$ and $Z_d=(0,1)^T$, $(Y^*,U^*)=(0,0,0)^T$ is a solution of (1.3), but $(Y^*,U^*)$ is not a KKT point of (1.3).

Let $A_{K(Y)}$ be the submatrix of $A$ whose entries lie in the rows of $A$ indexed by
\[
K(Y)=\{i\mid Y_i=0,\ i=1,2,\dots,n\}.
\]

Lemma 2.1 [5] Suppose that $(Y^*,U^*)$ is a local optimal solution of (1.3), and that either $K(Y^*)=\emptyset$ or $((A+DE(Y^*))^{-1}N)_{K(Y^*)}=0$. Then $(Y^*,U^*)$ is a KKT point of (1.3).

Theorem 2.3 Let $(Y^*,U^*)\in S_h$ and $(Y_\mu^*,U_\mu^*)\in S_h^\mu$. Under the assumptions of Lemma 2.1, we have
\[
\|Y_\mu^*-Y^*\|+\|U_\mu^*-U^*\|\le O(\mu).
\]

Proof: Let us set
\[
W=(A+DE(Y^*))^{-1}N.
\]

By Theorem 2.1 in [5], the assumptions imply that in a neighborhood of $U^*$ the solution function $Y(\cdot)$ can be expressed as
\[
Y(U)=WU.
\]
Moreover, $Y(\cdot)$ is differentiable at $U^*$ and $Y'(U^*)=W$. In such a neighborhood, we define the function
\[
F(U,t)=\begin{pmatrix}W^TH(Y(U)-Z_d)+M(U-U_d)+t\\ \min(t,\,b-U)\end{pmatrix}.
\]
From Lemma 2.1, there is $t^*\in R^m$ such that $F(U^*,t^*)=0$. The Clarke generalized Jacobian $\partial F(U^*,t^*)$ [9] of $F$ at $(U^*,t^*)$ is the set of matrices of the form
\[
\begin{pmatrix}W^THW+M & I\\ T & I+T\end{pmatrix},
\]
where $T$ is a diagonal matrix whose diagonal elements are
\[
T_{ii}=
\begin{cases}
-1 & \text{if } (b-U^*)_i<t_i^*,\\
0 & \text{if } (b-U^*)_i>t_i^*,\\
\lambda_i & \text{if } (b-U^*)_i=t_i^*,\ \lambda_i\in[-1,0].
\end{cases}
\]
It is easy to see that all matrices in $\partial F(U^*,t^*)$ are nonsingular. By Proposition 3.1 in [7], there are a neighborhood $\mathcal N$ of $(U^*,t^*)$ and a constant $\tilde\beta>0$ such that for any $(U,t)\in\mathcal N$ and any $V\in\partial F(U,t)$, $V$ is nonsingular and $\|V^{-1}\|\le\tilde\beta$.

Now we consider the function $F_\mu$ defined by the first order condition of the smoothing problem (1.4):
\[
F_\mu(U,t)=\begin{pmatrix}\bigl((A+\Phi_\mu'(Y_\mu(U)))^{-1}N\bigr)^TH(Y_\mu(U)-Z_d)+M(U-U_d)+t\\ \min(t,\,b-U)\end{pmatrix}.
\]
Here $Y_\mu(U)$ is the unique solution of the system of smoothing equations
\[
AY+\Phi_\mu(Y)-NU-c=0.
\]
Since (1.4) is a smooth problem, $(Y_\mu^*,U_\mu^*)\in S_h^\mu$ implies that there is $t_\mu\in R^m$ such that $F_\mu(U_\mu^*,t_\mu)=0$.

Applying the mean value theorem for Lipschitz continuous functions [9], we have
\[
F_\mu(U_\mu^*,t_\mu)-F_\mu(U^*,t^*)\in\mathrm{co}\,\partial F_\mu\bigl([U_\mu^*,U^*],[t_\mu,t^*]\bigr)\begin{pmatrix}U_\mu^*-U^*\\ t_\mu-t^*\end{pmatrix},
\]
where $\mathrm{co}\,\partial F_\mu([U_\mu^*,U^*],[t_\mu,t^*])$ denotes the convex hull of all matrices $V\in\partial F_\mu(Z)$ for $Z$ in the line segment between $(U_\mu^*,t_\mu)$ and $(U^*,t^*)$. Therefore, we can claim that
\[
\left\|\begin{pmatrix}U_\mu^*-U^*\\ t_\mu-t^*\end{pmatrix}\right\|\le 2\tilde\beta\,\|F_\mu(U_\mu^*,t_\mu)-F_\mu(U^*,t^*)\|.
\tag{2.8}
\]
Note that
\[
\lim_{\mu\downarrow 0}\Phi_\mu'(Y^*)=\Phi^o(Y^*)\quad\text{and}\quad\Phi^o(Y^*)Y^*=D\max(0,Y^*).
\]
By Lemma 2.2 in [5],
\[
(A+\Phi^o(Y^*))^{-1}N=W.
\]
From the nonsingularity of the matrices in $\partial F(U^*,t^*)$, $(Y^*,U^*)$ is the unique solution of (1.3). From Theorem 2.2, $Y_\mu^*\to Y^*$ as $\mu\to 0$. Hence there are constants $\nu_1>0$, $\nu_2>0$ and $\mu_0>0$ such that for all $\mu\in(0,\mu_0]$ we have $\|H(Y^*-Z_d)\|\le\nu_1$ and, for $i\notin K(Y^*)$,
\[
|\varphi^o(Y_i^*)-\varphi_\mu'(Y_i^*)|=\frac{\sqrt{(Y_i^*)^2+\mu^2}-|Y_i^*|}{2\sqrt{(Y_i^*)^2+\mu^2}}\le\nu_2\mu^2.
\tag{2.9}
\]
Therefore, from $F(U^*,t^*)=0$ and $F_\mu(U_\mu^*,t_\mu)=0$, we find
\[
\begin{aligned}
\|F_\mu(U_\mu^*,t_\mu)-F_\mu(U^*,t^*)\|&=\|F(U^*,t^*)-F_\mu(U^*,t^*)\|\\
&=\bigl\|\bigl(\bigl((A+\Phi^o(Y^*))^{-1}-(A+\Phi_\mu'(Y^*))^{-1}\bigr)N\bigr)^TH(Y^*-Z_d)\bigr\|\\
&\le\nu_1\bigl\|\bigl((A+\Phi^o(Y^*))^{-1}-(A+\Phi_\mu'(Y^*))^{-1}\bigr)N\bigr\|\\
&=\nu_1\bigl\|(A+\Phi_\mu'(Y^*))^{-1}\bigl(\Phi_\mu'(Y^*)-\Phi^o(Y^*)\bigr)(A+\Phi^o(Y^*))^{-1}N\bigr\|\\
&\le\nu_1\|A^{-1}\|\,\bigl\|\bigl(\Phi_\mu'(Y^*)-\Phi^o(Y^*)\bigr)(A+\Phi^o(Y^*))^{-1}N\bigr\|\\
&\le\nu_1\nu_2\,\mu^2\sqrt n\,\|A^{-1}\|^2\,\|D\|\,\|N\|.
\end{aligned}
\]
The last inequality uses $\|(A+\Phi^o(Y^*))^{-1}\|\le\|A^{-1}\|$, (2.9), and
\[
\bigl((A+\Phi^o(Y^*))^{-1}N\bigr)_{K(Y^*)}=0.
\]

This, together with (2.8), gives
\[
\|U_\mu^*-U^*\|\le O(\mu).
\]
Furthermore, from the convergence of $Y_\mu^*$ and the assumptions, we have that, for $\mu$ sufficiently small,
\[
\|Y_\mu^*-Y^*\|=\|W(U_\mu^*-U^*)\|\le O(\mu).
\]
This completes the proof.

2.2 Problems (1.3) and (1.2)

Note that the $L^2_h(\Omega_h)$-scalar product $(Y,Y)_{L^2_h}$ associated with $Y=Py$ can be considered as a Riemann sum for the multidimensional integral $\int_\Omega y^2\,dx$. By the error bound (5.5.5) in [10], we have
\[
\Bigl|\int_\Omega y^2\,dx-(Y,Y)_{L^2_h}\Bigr|\le\frac{2V(y^2)\sqrt n}{\nu^2},
\quad\text{where}\quad
V(y^2)=\max_{x,z\in\bar\Omega,\ x\ne z}\frac{|y^2(x)-y^2(z)|}{\|x-z\|}.
\]
If $y$ is Lipschitz continuous in $\bar\Omega$ with Lipschitz constant $K$, then there is $\beta_0$ such that $\max_{x\in\bar\Omega}|y(x)|\le\beta_0$ and
\[
|y^2(x)-y^2(z)|=|y(x)+y(z)|\,|y(x)-y(z)|\le 2\beta_0K\|x-z\|.
\]
Hence the Lipschitz continuity of $y$ yields an error bound for the Riemann sum:
\[
\Bigl|\int_\Omega y^2\,dx-(Y,Y)_{L^2_h}\Bigr|\le\frac{4\beta_0K\sqrt n}{\nu^2}=O(h).
\tag{2.10}
\]
For a given function $u$, error bounds for the five-point finite difference method applied to the nonsmooth Dirichlet problem
\[
-\Delta y+\gamma\max(0,y)=u+g\ \ \text{in }\Omega,\qquad y=0\ \ \text{on }\Gamma
\tag{2.11}
\]
can be found in [6].

Lemma 2.2 [6] Let $y\in C^2(\bar\Omega)$ be a solution of (2.11), and let $Y$ be the finite difference solution of (2.11). Then we have
\[
A(Py)+D\max(0,Py)=N(Pu)+c+O(h^\sigma)\quad\text{and}\quad\|Py-Y\|\le O(h^\sigma).
\]
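As a quick, purely illustrative check of the $O(h)$ Riemann-sum estimate (2.10) (a sketch with a made-up smooth $y$, not an experiment from the paper), one can compare $(Y,Y)_{L^2_h}$ with the exact integral for decreasing $h$:

```matlab
% Illustrative check of (2.10): the discrete L^2_h product (Y,Y)_{L^2_h}
% approximates the integral of y^2 over (0,1)^2 with error O(h).
y   = @(x1, x2) x1.*(1 - x1).*x2.*(1 - x2);             % a sample Lipschitz y
Iex = integral2(@(x1, x2) y(x1, x2).^2, 0, 1, 0, 1);    % reference value
for nu = [10 20 40 80]
    h = 1/(nu + 1);
    [x1, x2] = meshgrid(h*(1:nu));
    Y = y(x1(:), x2(:));                                % Y = P y
    fprintf('h = %.4f   error = %.2e\n', h, abs(h^2*(Y.'*Y) - Iex));
end
```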

Here $\sigma$ stands for the exponent of Hölder continuity in Lemma 2.2, and $0<\sigma<1$.

Theorem 2.4 Suppose that (1.2) has a Lipschitz continuous solution $(y^*,u^*)$ with $y^*\in C^2(\bar\Omega)$. Let $(Y^*,U^*)$ be a solution of (1.3). Then we have
\[
J_h(Y^*,U^*)\le J_h(Py^*,Pu^*)+O(h^\sigma).
\tag{2.12}
\]
Moreover, if there exists $\bar u\in C(\bar\Omega)$ which, together with some $\bar y\in C^2(\bar\Omega)$, satisfies the constraints of (1.2) and
\[
\|P\bar u-U^*\|\le O(h^\sigma),
\]
then we have
\[
J_h(Y^*,U^*)\ge J_h(Py^*,Pu^*)-O(h^\sigma).
\tag{2.13}
\]

Proof: By Lemma 2.2, the truncation error of the finite difference method yields
\[
A(Py^*)+D\max(0,Py^*)=N(Pu^*)+c+O(h^\sigma).
\tag{2.14}
\]
We enlarge the feasible set of (1.3) and consider the relaxed problem
\[
\begin{aligned}
\text{minimize}\quad & \frac12 (Y-Z_d)^T H(Y-Z_d)+\frac{\alpha}{2}(U-U_d)^T M(U-U_d)\\
\text{subject to}\quad & AY+D\max(0,Y)=NU+c,\qquad U\le b+\beta_1h^\sigma e,
\end{aligned}
\tag{2.15}
\]
where $e=(1,\dots,1)^T\in R^m$ and $\beta_1$ is a positive constant such that $(Py^*,Pu^*)$ is a feasible point of (2.15). Let $(\tilde Y,\tilde U)$ be a solution of (2.15). Then it holds that
\[
J_h(\tilde Y,\tilde U)\le J_h(Py^*,Pu^*).
\tag{2.16}
\]
Moreover, since the feasible set of (1.3) is contained in that of (2.15), we have
\[
J_h(\tilde Y,\tilde U)\le J_h(Y^*,U^*).
\]
Take $\hat U=\min(b,\tilde U)$, together with $\hat Y$ satisfying
\[
A\hat Y+D\max(0,\hat Y)=N\hat U+c.
\]
Then $(\hat Y,\hat U)$ is a feasible point of (1.3). Moreover, from $\tilde U\le b+\beta_1h^\sigma e$, we have
\[
\|\hat U-\tilde U\|\le O(h^\sigma)\quad\text{and}\quad\|\hat Y-\tilde Y\|\le\|A^{-1}\|\,\|N\|\,\|\hat U-\tilde U\|\le O(h^\sigma).
\]

Therefore, we find
\[
J_h(Y^*,U^*)\le J_h(\hat Y,\hat U)\le J_h(\tilde Y,\tilde U)+O(h^\sigma).
\]
This, together with (2.16), implies
\[
J_h(Y^*,U^*)\le J_h(Py^*,Pu^*)+O(h^\sigma).
\]
To prove (2.13), we let $\bar Y$ be the finite difference solution of the Dirichlet problem
\[
-\Delta y+\gamma\max(0,y)=\bar u+g\ \ \text{in }\Omega,\qquad y=0\ \ \text{on }\Gamma.
\tag{2.17}
\]
By Lemma 2.2, we have
\[
\|P\bar y-\bar Y\|\le O(h^\sigma).
\tag{2.18}
\]
Moreover, from
\[
A\bar Y+D\max(0,\bar Y)=N(P\bar u)+c
\quad\text{and}\quad
AY^*+D\max(0,Y^*)=NU^*+c,
\]
we find
\[
\|\bar Y-Y^*\|=\|(A+V)^{-1}N(P\bar u-U^*)\|\le\|A^{-1}\|\,\|N\|\,\|P\bar u-U^*\|\le O(h^\sigma),
\]
where $V$ is a nonnegative diagonal matrix (see the proof of Theorem 2.2). This, together with (2.18), implies that
\[
\|P\bar y-Y^*\|\le O(h^\sigma).
\]
Therefore, we obtain
\[
J_h(P\bar y,P\bar u)\le J_h(Y^*,U^*)+O(h^\sigma).
\tag{2.19}
\]
From the assumption that $y^*$, $u^*$, $\bar y$ and $\bar u$ are Lipschitz continuous functions, we can estimate the errors of the integrals in $J$ and get
\[
J_h(Py^*,Pu^*)-O(h)\le J(y^*,u^*)
\tag{2.20}
\]
and
\[
J(\bar y,\bar u)\le J_h(P\bar y,P\bar u)+O(h).
\tag{2.21}
\]
Finally, using the optimality of $(y^*,u^*)$, that is, $J(y^*,u^*)\le J(\bar y,\bar u)$, we obtain (2.13) from (2.19), (2.20) and (2.21).

From Theorem 2.3 and Theorem 2.4, we obtain the following relation between a solution $(y^*,u^*)$ of the nonsmooth optimal control problem (1.2) and a solution $(Y_\mu^*,U_\mu^*)$ of the finite difference smoothing approximation (1.4):
\[
|J_h(Y_\mu^*,U_\mu^*)-J_h(Py^*,Pu^*)|\le O(h^\sigma)+O(\mu).
\]
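Before turning to the numerical examples, note that the reduced first order system (2.7) yields a computable optimality residual for (1.3); the same quantity reappears as $L(Y^k,U^k)$ in Section 3. The MATLAB sketch below (an illustrative helper, not code from the paper) evaluates it for a feasible pair $(Y,U)$, assuming the problem data are available as variables with the obvious names.

```matlab
% Illustrative helper: residual of (2.7) at a feasible pair (Y,U). Eliminating
% t = -g from the first block, (Y,U) is a KKT point iff min(-g, b-U) = 0.
function L = kkt_residual(Y, U, A, D, H, M, N, Zd, Ud, b)
    nloc = length(Y);
    E = spdiags(double(Y > 0), 0, nloc, nloc);   % E(Y) as in the proof of Theorem 2.1
    W = (A + D*E) \ N;                           % W = (A + D E(Y))^{-1} N
    g = W.' * (H*(Y - Zd)) + M*(U - Ud);         % first block of (2.7) without t
    L = norm(min(-g, b - U));                    % zero exactly at KKT points
end
```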

3 Numerical Examples

The convergence analysis and error estimates in Section 2 suggest that the discretized smoothing constrained optimal control problem (1.4) is a good approximation of the nonsmooth optimal control problem (1.2). In this section, we propose a smoothing SQP (sequential quadratic programming) method for solving (1.4) and report numerical results. The examples are generated by adding the nonsmooth term $\max(0,y)$ to examples in [2]. Several tests for different values of $\alpha$ were performed. The tests were carried out on an IBM workstation using Matlab.

Smoothing SQP method (SSQP)

Choose parameters $\mu>0$, $\epsilon>0$ and a feasible point $(Y^0,U^0)$ of (1.3). For $k\ge 0$, we solve the quadratic program
\[
\begin{aligned}
\text{minimize}\quad & \frac12 (Y-Z_d)^T H(Y-Z_d)+\frac{\alpha}{2}(U-U_d)^T M(U-U_d)\\
\text{subject to}\quad & AY+\Phi_\mu(Y^k)+\Phi_\mu'(Y^k)(Y-Y^k)=NU+c,\qquad U\le b,
\end{aligned}
\]
and let the optimal solution be $(Y^{k+1},U^{k+1})$. We stop the iteration when
\[
|J_h(Y^{k+1},U^{k+1})-J_h(Y^k,U^k)|\le\epsilon.
\]

The SSQP method is a standard SQP method for solving the smoothing optimization problem (1.4). A convergence analysis can be found in [11]. Furthermore, the quadratic program at each step can be solved by an optimization toolbox, for example, quadprog in MATLAB.

In the numerical tests, we chose $\Omega=(0,1)\times(0,1)$, $n=m=\nu^2$, $\mu=10^{-6}$, $\epsilon=10^{-8}$, $g\equiv 0$, and $(Y^0,U^0)=(0,\dots,0)^T$. In Tables 1-3, $k$ is the number of iterations,
\[
L(Y^k,U^k)=\bigl\|\min\bigl(-\bigl((A+DE(Y^k))^{-1}N\bigr)^TH(Y^k-Z_d)-M(U^k-U_d),\ b-U^k\bigr)\bigr\|,
\]
\[
|J_h^k-J_h^{k-1}|=|J_h(Y^k,U^k)-J_h(Y^{k-1},U^{k-1})|
\quad\text{and}\quad
r=\|AY^k+D\max(0,Y^k)-NU^k-c\|.
\]

Example 3.1 Let $q(x)\equiv 0$ and
\[
z_d(x)=\frac16\exp(2x_1)\sin(2\pi x_1)\sin(2\pi x_2).
\]
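For concreteness, the quadratic program in each SSQP step described above can be set up with quadprog as in the following sketch. This is a minimal illustration, not the author's implementation; it assumes the problem data $A$, $D$, $H$, $M$, $N$, $Z_d$, $U_d$, $b$, $c$, the parameters $\alpha$ and $\mu$, and the current iterate $(Y^k,U^k)$ are available in the workspace, and that the Optimization Toolbox function quadprog can be called.

```matlab
% One SSQP step as a sketch: minimize J_h subject to the linearized smoothed
% state equation and U <= b, using quadprog on the stacked variable x = [Y; U].
phi  = @(Y) 0.5*(Y + sqrt(Y.^2 + mu^2));            % phi_mu
dphi = @(Y) 0.5*(1 + Y./sqrt(Y.^2 + mu^2));         % phi_mu'
n = length(Yk);  m = length(Uk);
Hqp = blkdiag(H, alpha*M);                          % Hessian of J_h
fqp = [-H*Zd; -alpha*M*Ud];                         % linear term (constant part dropped)
Aeq = [A + D*spdiags(dphi(Yk), 0, n, n), -N];       % (A + Phi_mu'(Y^k))Y - NU = ...
beq = c - D*phi(Yk) + D*(dphi(Yk).*Yk);
lb  = -inf(n + m, 1);
ub  = [ inf(n, 1); b];                              % only U <= b is imposed
x   = quadprog(Hqp, fqp, [], [], Aeq, beq, lb, ub);
Yk1 = x(1:n);  Uk1 = x(n+1:end);                    % the new iterate (Y^{k+1}, U^{k+1})
```

The stopping test $|J_h(Y^{k+1},U^{k+1})-J_h(Y^k,U^k)|\le\epsilon$ and the residuals $L$ and $r$ reported in the tables can then be evaluated directly from $(Y^{k+1},U^{k+1})$.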

Table 1: Example 3.1(a), $u_d\equiv 0$, $\alpha=10^{-2}$.
Columns: $k$, $L(Y^k,U^k)$, $J_h(Y^k,U^k)$, $|J_h^k-J_h^{k-1}|$, $r$. [Numerical entries omitted.]

Table 2: Example 3.1(b), $u_d\equiv 0$, $\alpha=10^{-6}$.
Columns: $k$, $L(Y^k,U^k)$, $J_h(Y^k,U^k)$, $|J_h^k-J_h^{k-1}|$, $r$. [Numerical entries omitted.]

Table 3: Example 3.2, $\alpha=10^{-6}$.
Columns: $k$, $L(Y^k,U^k)$, $J_h(Y^k,U^k)$, $|J_h^k-J_h^{k-1}|$, $r$. [Numerical entries omitted.]

Example 3.2 Let $q(x)\equiv 0$, $u_d\equiv 0$, $\alpha=10^{-6}$ and
\[
z_d(x)=
\begin{cases}
200\,x_1x_2\,(x_1-\tfrac12)^2(1-x_2) & \text{if } 0<x_1\le\tfrac12,\\[4pt]
200\,x_2\,(x_1-1)(x_1-\tfrac12)^2(1-x_2) & \text{if } \tfrac12<x_1\le 1.
\end{cases}
\]

Acknowledgements. The author is very grateful to Prof. Tetsuro Yamamoto of Waseda University for his helpful comments on the finite difference method.

References

[1] A.K. Aziz, A.B. Stephens and Manil Suri, Numerical methods for reaction-diffusion problems with non-differentiable kinetics, Numer. Math., 51 (1988).

[2] M. Bergounioux, K. Ito and K. Kunisch, Primal-dual strategy for constrained optimal control problems, SIAM J. Control Optim., 37 (1999).

[3] A. Borzì, K. Kunisch and D.Y. Kwak, Accuracy and convergence properties of the finite difference multigrid solution of an optimal control optimality system, SIAM J. Control Optim., 41 (2003).

[4] E. Casas and M. Mateos, Second order optimality conditions for semilinear elliptic control problems with finitely many state constraints, SIAM J. Control Optim., 40 (2002).

[5] X. Chen, First order conditions for discretized nonsmooth constrained optimal control problems, SIAM J. Control Optim., 42 (2004).

[6] X. Chen, N. Matsunaga and T. Yamamoto, Smoothing Newton methods for nonsmooth Dirichlet problems, in: Reformulation - Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, M. Fukushima and L. Qi, eds., Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.

[7] X. Chen, Z. Nashed and L. Qi, Smoothing methods and semismooth methods for nondifferentiable operator equations, SIAM J. Numer. Anal., 38 (2000).

[8] X. Chen, L. Qi and D. Sun, Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities, Math. Comp., 67 (1998).

[9] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.

[10] P.J. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, 1975.

[11] R. Fletcher, Practical Methods of Optimization, Second Edition, John Wiley & Sons, 1987.

[12] C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM J. Matrix Anal. Appl., 17 (1996).

[13] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia, 1992.

[14] M. Hintermüller, K. Ito and K. Kunisch, The primal-dual active set strategy as a semismooth Newton method, SIAM J. Optim., 13 (2003).

[15] F. Kikuchi, K. Nakazato and T. Ushijima, Finite element approximation of a nonlinear eigenvalue problem related to MHD equilibria, Japan J. Appl. Math., 1 (1984).

[16] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[17] L. Qi and J. Sun, A nonsmooth version of Newton's method, Math. Programming, 58 (1993).

[18] J. Rappaz, Approximation of a nondifferentiable nonlinear problem related to MHD equilibria, Numer. Math., 45 (1984).

Appendix: Proof of Example 2.1

The solution function $Y(\cdot)$ can be given explicitly as
\[
Y(U)=
\begin{cases}
(1,\,0)^T\,U & \text{if } U\ge 0,\\[4pt]
(\tfrac53,\,\tfrac13)^T\,U & \text{if } U<0.
\end{cases}
\]

1. Since $\tilde Y=Y(\tilde U)=(0,0)^T$ and $E(Y(\tilde U))$ is a zero matrix, we have
\[
(A^{-1}N)^TH(\tilde Y-Z_d)+M(\tilde U-U_d)+t=\frac13\,(5\ \ 1)\begin{pmatrix}0\\3\end{pmatrix}-1+t=0
\]

with $t=0$, and $\min(t,\,b-\tilde U)=0$. Hence $(\tilde Y,\tilde U)$ is a KKT point of (1.3). However,
\[
J_h(\tilde Y,\tilde U)=\frac12\Bigl\|\begin{pmatrix}0\\3\end{pmatrix}\Bigr\|^2+\frac12=5
\]
and
\[
J_h(\bar Y,\bar U)=\frac12\Bigl\|\begin{pmatrix}\tfrac12\\3\end{pmatrix}\Bigr\|^2+\frac12\Bigl(\frac12-1\Bigr)^2=\frac{19}{4}<J_h(\tilde Y,\tilde U),
\]
where $\bar U=\tfrac12$ and $\bar Y=(\tfrac12,\,0)^T$. Hence $(\tilde Y,\tilde U)$ is not a solution of (1.3).

2. For $U\ge 0$,
\[
J_h(Y(U),U)=\frac12(U^2+1)+\frac12U^2.
\]
For $U<0$,
\[
J_h(Y(U),U)=\frac12\Bigl(\frac{25}{9}U^2+\Bigl(\frac U3-1\Bigr)^2\Bigr)+\frac12U^2.
\]
Both expressions are minimized over the feasible set $U\le 1$ at $U=0$; hence $(Y^*,U^*)=(0,0,0)^T$ is the solution of (1.3). However,
\[
(A^{-1}N)^TH(Y^*-Z_d)+M(U^*-U_d)+t=\frac13\,(5\ \ 1)\begin{pmatrix}0\\-1\end{pmatrix}+t=-\frac13+t=0
\]
implies $t=\frac13$, and
\[
\min(t,\,b-U^*)=\min\Bigl(\frac13,\,1\Bigr)=\frac13\ne 0,
\]
that is, $(Y^*,U^*)$ is not a KKT point.


More information

An interior point type QP-free algorithm with superlinear convergence for inequality constrained optimization

An interior point type QP-free algorithm with superlinear convergence for inequality constrained optimization Applied Mathematical Modelling 31 (2007) 1201 1212 www.elsevier.com/locate/apm An interior point type QP-free algorithm with superlinear convergence for inequality constrained optimization Zhibin Zhu *

More information

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts and Stephan Matthai Mathematics Research Report No. MRR 003{96, Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL

More information

Part 1: Overview of Ordinary Dierential Equations 1 Chapter 1 Basic Concepts and Problems 1.1 Problems Leading to Ordinary Dierential Equations Many scientic and engineering problems are modeled by systems

More information

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based

More information

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point

More information

Max-Planck-Institut fur Mathematik in den Naturwissenschaften Leipzig H 2 -matrix approximation of integral operators by interpolation by Wolfgang Hackbusch and Steen Borm Preprint no.: 04 200 H 2 -Matrix

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information