NONLINEAR OPTIMIZATION WITH CONVEX CONSTRAINTS. The Goldstein-Levitin-Polyak algorithm
1 - (23) NLP

We consider an algorithm for solving the optimization problem under convex constraints. Although the convexity of the constraints is treated in its generality, in practice convex constraints imply linear inequalities, especially upper and lower bounds on the decision variables. Apart from convexity, the main assumption on the constraints is that they are once differentiable. The method belongs to the powerful class of algorithms developed by Goldstein, Levitin and Polyak. It requires that the constraints be satisfied at every iteration. This is an inconvenience in general. Yet, for linear constraints and for special constraint structures, it is simple to maintain feasibility. Two important advantages that seem to be offered in return for satisfying the constraints are the strong results concerning unit stepsize achievement and superlinear convergence. With regard to the latter, it is established that the necessary and sufficient condition for a Q-superlinear rate of convergence is the two-sided projected Hessian condition which, in other algorithms, can only ensure lesser Q-superlinear rates.

1. THE PROBLEM

Consider

    min { f(x) | h(x) ≤ 0 },    (1.1)

where f: ℝⁿ → ℝ¹ and the elements of h: ℝⁿ → ℝᵐ are differentiable functions. The feasible region

    X = { x ∈ ℝⁿ | h(x) ≤ 0 }    (1.2)

is assumed to be convex, and f(x) is assumed to be bounded below on X. The convexity of the constraints allows the development of a special class of algorithm. In particular, we use the fact that if two points belong to the convex set described by the constraints, then any point on the line segment joining the two points is also in the convex set. The point chosen by the algorithm on the line segment is the point that corresponds to an improved value of the objective function. The method we describe for solving (1.1) under this convexity assumption is a generalisation of the Goldstein-Levitin-Polyak (GLP) algorithm (Goldstein, 1964; Levitin and Polyak, 1966).
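The segment property of the convex feasible region (1.2) invoked above can be checked numerically. The bound constraints encoded by h below are an illustrative instance, not taken from the notes.

```python
import numpy as np

# Illustrative instance of (1.2): h(x) <= 0 encodes the box -1 <= x_i <= 1,
# via h_i(x) = x_i - 1 and h_{n+i}(x) = -x_i - 1 (all affine, hence convex).
def h(x):
    return np.concatenate([x - 1.0, -x - 1.0])

def feasible(x, tol=1e-12):
    return np.all(h(x) <= tol)

rng = np.random.default_rng(0)
a = rng.uniform(-1.0, 1.0, size=3)   # a feasible point
b = rng.uniform(-1.0, 1.0, size=3)   # another feasible point

# Convexity of the feasible region: every point on the segment [a, b] is feasible.
for t in np.linspace(0.0, 1.0, 11):
    assert feasible((1 - t) * a + t * b)
print("segment between feasible points stays feasible")
```

The algorithm exploits exactly this: a convex combination of the current iterate and the subproblem solution is always a feasible candidate.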
The basic GLP algorithm consists of the iterative scheme

    x_{k+1} = P_X( x_k − τ_k d_k ),

where d_k is a descent direction, such as the steepest descent direction −∇f(x_k), τ_k is the stepsize and P_X(x_k − τ_k d_k) is the projection of x_k − τ_k d_k onto the feasible region X.
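For box constraints the projection P_X is a componentwise clip, so the basic iteration above can be sketched as follows. The quadratic objective, the bounds and the fixed stepsize are illustrative assumptions, not choices made in the notes.

```python
import numpy as np

def project_box(x, lo, hi):
    # P_X for X = {x : lo <= x <= hi}: componentwise clipping.
    return np.clip(x, lo, hi)

def glp_basic(grad_f, x0, lo, hi, tau=0.1, iters=500):
    # x_{k+1} = P_X( x_k - tau * grad f(x_k) ), steepest-descent direction.
    x = x0.astype(float)
    for _ in range(iters):
        x = project_box(x - tau * grad_f(x), lo, hi)
    return x

# Example: f(x) = 0.5 ||x - c||^2 with c partly outside the box; the
# constrained minimiser is the projection of c onto the box.
c = np.array([2.0, -3.0, 0.25])
lo, hi = -np.ones(3), np.ones(3)
x_star = glp_basic(lambda x: x - c, np.zeros(3), lo, hi)
print(x_star)   # close to [1, -1, 0.25]
```

Note that every iterate is feasible by construction, which is the feature of the GLP class emphasised above.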
2. THE ALGORITHM

Let the quadratic approximation be defined by

    q_k(x) = ⟨∇f(x_k), x − x_k⟩ + ½ ⟨x − x_k, H_k (x − x_k)⟩    (2.1)

where H_k is a symmetric positive definite approximation to the Hessian ∇²f of f(x) at x_k. The assumption that H_k is positive definite is not restrictive, since it does not impair the convergence properties of the algorithm even when the true Hessian is not strictly positive definite. It will be shown in Section 4 that the convergence of the stepsizes, discussed below, to unity and the Q-superlinear convergence rate depend on the accuracy of the projection of H_k onto the tangent manifold of the constraints.

The algorithm is based on the projection of the unconstrained step

    x̄_k = x_k − α_k H_k⁻¹ ∇f(x_k),  k = 0, 1, 2, ...    (2.2)

with α_k ∈ [1, β̄], β̄ ∈ [1, 2), onto the feasible region X using the subproblem

    min { ½ ‖x − x̄_k‖²_{H_k} | x ∈ X }    (2.3a)

where ‖v‖²_H = ⟨v, H v⟩. We note that for the choice α_k = 1, (2.3a) reduces to the solution of

    min { q_k(x) | x ∈ X }.    (2.3b)

The lectures are mainly concerned with the choice α_k = 1, and hence (2.3b), but the results below are stated for the general case α_k ∈ [1, β̄]. If x̂_k is the value of x that solves (2.3a or b), then x_{k+1} is computed using

    x_{k+1} = x_k + τ_k (x̂_k − x_k)    (2.4)

with τ_k ∈ (0, 1] given by the smallest value of j = 0, 1, 2, 3, ..., τ_k = (½)^j, satisfying

    f(x_{k+1}) − f(x_k) ≤ τ_k ρ q_k(x̂_k),  ρ ∈ (0, 1),    (2.5)

with ρ an arbitrary number in the above range. An alternative to (2.5) is the Armijo-type stepsize strategy

    f(x_{k+1}) − f(x_k) ≤ τ_k ρ₂ ⟨∇f(x_k), x̂_k − x_k⟩,  ρ₂ ∈ (0, ½),    (2.6)

with ρ₂ an arbitrary number in the above range.

The matrix H_k is approximated using Powell's (1978b) modification to a quasi-Newton formula due to Broyden (1969; 1970), Fletcher (1970), Goldfarb (1970) and Shanno
(1970) for approximating the Hessian of a Lagrangian. This modified BFGS (Broyden-Fletcher-Goldfarb-Shanno) formula is given by

    H_{k+1} = H_k − ( H_k δ_k δ_kᵀ H_k ) / ( δ_kᵀ H_k δ_k ) + ( η_k η_kᵀ ) / ( δ_kᵀ η_k ),    (2.8a)

where

    γ_k = ∇f(x_{k+1}) − ∇f(x_k);  δ_k = x_{k+1} − x_k,    (2.8b)

    η_k = θ_k γ_k + (1 − θ_k) H_k δ_k,    (2.8c)

    θ_k = 1   if δ_kᵀ γ_k ≥ 0.2 δ_kᵀ H_k δ_k;
    θ_k = ( 0.8 δ_kᵀ H_k δ_k ) / ( δ_kᵀ H_k δ_k − δ_kᵀ γ_k )   otherwise.    (2.8d)

For linearly constrained X and positive definite H_k, (2.3) is a positive definite quadratic programming problem with a unique solution x̂_k. In the initial stages of the algorithm, α_k provides the option of choosing large steps. As discussed in Sections 3 and 4 below, α_k does not affect the convergence properties of the algorithm, provided it is chosen in the range [1, β̄], with the sequence {α_k} converging to unity. It is shown below that τ_k → 1. Also, if τ_k = 1 for α_k > 1, then reducing α_k to unity makes it easier to maintain unit τ_k.

EXERCISE: Write in pseudocode the two versions of the GLP algorithm discussed in this section. Do a library search for suitable termination rules for the algorithm and incorporate them in the pseudocode.

3. CONVERGENCE

EXERCISE: Assuming X is defined by a system of linear inequalities, establish the descent property of the direction generated by the algorithm. (The descent property discussed below is for general convex feasible sets, which clearly include systems of linear inequalities. There is a direct way of establishing descent for linear inequalities.)

In this section the stepsize strategies (2.5) and (2.6) are justified for symmetric positive definite H_k and {α_k} satisfying the restrictions discussed in Section 2. It is shown that the algorithm is globally convergent. The main theorem of this section is Theorem 3.1 below, which justifies (2.5)-(2.6) and establishes the existence of τ_k ∈ (0, 1] satisfying the stepsize rules. Most subsequent results depend on Theorem 3.1.

Lemma 3.1
Let α_k ∈ [1, β̄], β̄ ∈ [1, 2), H_k be a bounded symmetric positive definite matrix and x̄_k be given by (2.2). Then x̂_k solving (2.3) satisfies
    ‖x̂_k − x_k‖²_{H_k} ≤ −α_k ⟨∇f(x_k), x̂_k − x_k⟩.    (3.1)

REMARK: If you have already done the exercise at the beginning of this section, you know this result holds for linear inequality constraints (for α_k = 1, and hence (2.3b)). Hence, you can ignore the proof.

Proof. Since (2.3) is the projection of x̄_k onto the convex region X, we have the inequality

    ⟨x̄_k − x̂_k, H_k (x − x̂_k)⟩ ≤ 0    (3.2)

for any x ∈ X (see Rustem, 1998; Lemma 3.1.2). Thus, using (2.2), we have

    ‖x̂_k − x_k‖²_{H_k} = ⟨x̂_k − x_k, H_k (x̂_k − x̄_k)⟩ − α_k ⟨∇f(x_k), x̂_k − x_k⟩
                        ≤ −α_k ⟨∇f(x_k), x̂_k − x_k⟩.

The result follows as the first term on the right may be bounded using (3.2), for x = x_k. ∎

Lemma 3.2
Let τ_k ∈ [0, 1], H_k be symmetric positive definite and x_{k+1} be given by (2.4). Then

    q_k(x_{k+1}) ≤ τ_k q_k(x̂_k).    (3.3)

Proof.
    q_k(x_{k+1}) = τ_k ⟨∇f(x_k), x̂_k − x_k⟩ + ½ τ_k² ⟨x̂_k − x_k, H_k (x̂_k − x_k)⟩ ≤ τ_k q_k(x̂_k),

since τ_k² ≤ τ_k. ∎

Lemma 3.3
Let α_k ∈ [1, β̄], β̄ ∈ [1, 2), H_k be bounded symmetric positive definite and x̄_k be given by (2.2). Then x̂_k solving (2.3) satisfies

    q_k(x̂_k) ≤ −( 1/α_k − ½ ) ‖x̂_k − x_k‖²_{H_k} ≤ 0    (3.4)

and

    ‖x̂_k − x_k‖²_{H_k} ≤ ( 2 α_k / (2 − α_k) ) ( −q_k(x̂_k) ).    (3.5)

Proof. Lemma 3.1, with the definition (2.1), yields (3.4) immediately. Also, (3.5) holds since 1/α_k − ½ = (2 − α_k)/(2 α_k). ∎

Theorem 3.1
Let
(i) X ⊆ ℝⁿ be a convex set and f(x) ∈ C²,
(ii) H_k in (2.1)-(2.3) satisfy

    m ‖η‖² ≤ ⟨η, H_k η⟩ ≤ M ‖η‖²,  ∀k, ∀η ∈ ℝⁿ, η ≠ 0,  0 < m ≤ M < ∞,

(iii) {α_k} be any sequence converging to unity, with α_k ∈ [1, β̄], β̄ ∈ [1, 2).
Then there exists a τ_k ∈ (0, 1] that satisfies (2.5) or (2.6), and hence the sequence {x_k} computed by (2.4) generates a monotonically decreasing sequence {f(x_k)}.

IF YOU UNDERSTAND THE DESCENT PROPERTY OF LEMMA 3.1, THE PROOF IS EASY TO ESTABLISH.

Remark. The second order Taylor series expansion of a function f: ℝⁿ → ℝ¹, f ∈ C², for any x, d ∈ ℝⁿ, θ ∈ [0, 1], is given by

    f(x + θ d) = f(x) + θ ⟨∇f(x), d⟩ + θ² ∫₀¹ (1 − t) ⟨d, ∇²f(x + t θ d) d⟩ dt.

Proof. Using the second order Taylor series expansion,

    f(x_{k+1}) − f(x_k) = τ_k ⟨∇f(x_k), x̂_k − x_k⟩ + ½ τ_k² ⟨x̂_k − x_k, H_k (x̂_k − x_k)⟩
        + τ_k² ∫₀¹ (1 − t) ⟨x̂_k − x_k, [∇²f(x_k(t)) − H_k] (x̂_k − x_k)⟩ dt    (3.7)
    ≤ τ_k q_k(x̂_k) + τ_k² Λ_k ‖x̂_k − x_k‖²    (3.8)

where x_k(t) = x_k + t τ_k (x̂_k − x_k), ∇²f(·) is the Hessian of f at (·), Λ_k = ∫₀¹ (1 − t) ‖∇²f(x_k(t)) − H_k‖ dt, and (3.3) is applied to obtain (3.8). From (3.5) and (3.8), we have

    f(x_{k+1}) − f(x_k) ≤ τ_k q_k(x̂_k) [ 1 − τ_k (2 α_k Λ_k) / ((2 − α_k) m) ].    (3.9)

Since ρ < 1, there is a τ ∈ (0, 1] such that

    τ ≤ (1 − ρ)(2 − α_k) m / (2 α_k Λ_k).    (3.10)

By Lemma 3.3, q_k(x̂_k) ≤ 0. Thus, (2.5) holds for this τ. Suppose τ* is the largest τ ∈ (0, 1] satisfying inequality (2.5). All τ ≤ τ* also satisfy this condition, and the selected τ_k ∈ [τ*/2, τ*]. It follows that {f(x_k)} is monotonically decreasing.

To show that {f(x_k)} is monotonically decreasing for (2.6), we use (3.1) with (3.7) to yield

    f(x_{k+1}) − f(x_k) ≤ τ_k ⟨∇f(x_k), x̂_k − x_k⟩ [ 1 − τ_k α_k ( ½ + Λ_k/m ) ].    (3.11)
Since ρ₂ ≤ ½, there is a τ ∈ (0, 1] such that

    ρ₂ ≤ 1 − τ α_k ( ½ + Λ_k/m ) < 1.    (3.12)

By Lemma 3.1, ⟨∇f(x_k), x̂_k − x_k⟩ ≤ 0. Thus, (2.6) holds for this τ. Suppose τ* is the largest τ ∈ (0, 1] satisfying inequality (2.6). All τ ≤ τ* also satisfy this condition, and the selected τ_k ∈ [τ*/2, τ*]. It follows that {f(x_k)} is monotonically decreasing. ∎

THEOREM 3.2 BELOW IS AN EXAMPLE OF THE USE OF THE BOLZANO-WEIERSTRASS THEOREM. YOU ARE NOT RESPONSIBLE FOR ANY PROOFS FROM THIS POINT ON IN THIS SET OF NOTES.

Theorem 3.2
Let
(i) the assumptions of Theorem 3.1 be satisfied, and
(ii) the set X₀ = { x ∈ X | f(x) ≤ f(x₀) } be bounded.

Then, we have

    lim_{k→∞} q_k(x̂_k) = 0    (3.13)
and
    lim_{k→∞} ⟨∇f(x_k), x̂_k − x_k⟩ = 0.    (3.14)

Proof. Given ρ ∈ (0, 1), by (3.9), the choice

    τ = min { 1, (1 − ρ)(2 − β̄) m / (2 β̄ Λ̄) }

always satisfies the stepsize strategy (2.5). Clearly, τ_k, chosen as τ_k = (½)^j as discussed above, is in the range τ_k ∈ [τ/2, τ] and thereby also satisfies (2.5). As f(x) ∈ C² and X₀ is compact, there is a scalar Λ̄ < ∞ such that Λ_k ≤ Λ̄, ∀k. Similarly, given ρ₂ ∈ (0, ½), (2.6) is satisfied by

    τ = min { 1, (1 − ρ₂) / ( β̄ ( ½ + Λ̄/m ) ) }.

As 0 < m, Λ̄ < ∞ and β̄ < 2, we have established that there is an ε > 0 such that the stepsizes determined by (2.5) or (2.6) satisfy τ_k ≥ ε, ∀k. The boundedness of f on X₀ and q_k(x̂_k) ≤ 0 imply, in the case of (2.5), that

    ρ Σ_k τ_k | q_k(x̂_k) | ≤ Σ_k [ f(x_k) − f(x_{k+1}) ] < ∞.

Since τ_k ≥ ε, this yields (3.13). (3.14) is established similarly by using (2.6) and ⟨∇f(x_k), x̂_k − x_k⟩ ≤ 0. ∎

Lemma 3.4
If (3.13) or (3.14) is satisfied, then

    lim_{k→∞} ‖x̂_k − x_k‖ = 0.    (3.15)

Proof. The result follows from Lemmas 3.3 and 3.1 for (3.13) and (3.14), respectively. ∎

Theorem 3.3
Let
(i) the assumptions of Theorem 3.1 be satisfied,
(ii) the assumptions of Theorem 3.2 be satisfied,
(iii) h(x) be once differentiable,
(iv) the active constraint gradients at x* be linearly independent (alternatively, instead of the linear independence condition, it can be assumed that the multipliers associated with (2.3) at x* are bounded (Fiacco and McCormick, 1968)).

Then,
(a) the algorithm in Section 2, with stepsize strategy (2.5) or (2.6), generates a sequence {x_k} that converges to x*, and
(b) if, furthermore, strict complementarity holds at x*, the solution of subproblem (2.3), for large k, predicts the active inequality constraints at x*.

Proof. By Theorem 3.1, the algorithm ensures the decrease of f(x) at each iteration, thereby ensuring x_k ∈ X₀, for X₀ compact. Hence, {x_k} has a limit point, x*. Without loss of generality, we can take {x_k} → x*. To show that x* also satisfies the necessary conditions for optimality for problem (1.1), we consider the optimality of subproblem (2.3). Let μ_{k+1} and ∇h_k denote respectively the multipliers of the subproblem and the matrix whose columns are the constraint gradients at x̂_k. Using (2.2) and (2.3), the optimality conditions for (2.3) are given by

    α_k ∇f(x_k) + H_k (x̂_k − x_k) + ∇h_k μ_{k+1} = 0,    (3.16a)
    h(x̂_k) ≤ 0;  ⟨h(x̂_k), μ_{k+1}⟩ = 0;  μ_{k+1} ≥ 0.    (3.16b)

Using (3.15), we have lim_{k→∞} x̂_k = lim_{k→∞} x_k = x* and

    ∇f(x*) + ∇h(x*) μ* = 0.

To show (b) we note that, in view of the last two conditions in (3.16b), for k sufficiently large and with strict complementarity holding, none of the inactive constraints, i.e.

    h^j(x*) < 0;  μ*^j = 0,    (3.17)

are predicted to be active at x̂_k: h^j(x̂_k) < 0, μ^j_{k+1} = 0. ∎

4. UNIT STEPSIZES AND SUPERLINEAR CONVERGENCE RATES

We consider the convergence of τ_k to unity. Coupled with α_k → 1, this leads to the Q-superlinear convergence rate of the algorithm. It is shown that both the convergence of τ_k to 1 and superlinear convergence depend on the Hessian approximation H_k. The algorithm uses strictly positive definite H_k even when the Hessian at the solution is not strictly positive definite.
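A strictly positive definite H_k is commonly maintained with the Powell-damped BFGS formula (2.8). The sketch below implements the 0.2/0.8 damping rule; the quadratic used to generate (δ_k, γ_k) pairs is an illustrative assumption, not an example from the notes.

```python
import numpy as np

def damped_bfgs_update(H, dx, gamma):
    # Powell-damped BFGS update of H, following (2.8a)-(2.8d):
    # gamma is damped towards H @ dx so that dx' eta >= 0.2 dx' H dx,
    # which keeps the updated matrix positive definite.
    Hdx = H @ dx
    dHd = dx @ Hdx
    if dx @ gamma >= 0.2 * dHd:
        theta = 1.0
    else:
        theta = 0.8 * dHd / (dHd - dx @ gamma)
    eta = theta * gamma + (1.0 - theta) * Hdx
    return H - np.outer(Hdx, Hdx) / dHd + np.outer(eta, eta) / (dx @ eta)

# Data from an illustrative quadratic f(x) = 0.5 x'Ax, so gamma = A dx exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
H = np.eye(2)
rng = np.random.default_rng(1)
for _ in range(50):
    dx = rng.standard_normal(2)
    H = damped_bfgs_update(H, dx, A @ dx)

print(np.linalg.eigvalsh(H))   # eigenvalues remain strictly positive
```

When theta = 1 this is the plain BFGS update and the secant condition H_{k+1} δ_k = γ_k holds; the damping branch trades the secant condition for guaranteed positive definiteness.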
First, we establish the subspace in which H_k needs to approximate the Hessian of f(x). This is used in Theorem 4.1 to establish the attainment of unit stepsizes and in Theorem 4.2 to establish the convergence rate.
A consequence of Theorem 3.3 is that, for k sufficiently large, with strict complementarity holding, μ_{k+1} predicts the constraints active at x*. Thus, for large k, the constraints satisfied as strict inequalities at x* do not affect the computation of x̂_k. At that stage, it would make no difference if the constraints satisfied as equalities at x* were treated as equality constraints. Hence, for large k, h^j(x*) = 0 implies h^j(x̂_k) = 0 and h^j(x*) < 0 implies h^j(x̂_k) < 0. Let ĥ denote the vector of constraints satisfied as equalities at x*. Thus, we have ĥ(x̂_k) = 0, for large k. In order to characterize the subspace for approximating the Hessian of f(x), we need to invoke a special case of the mean-value theorem that holds for ĥ (Ortega and Rheinboldt, 1970).

Remark: Mean-Value Theorems
We note that mean value theorems usually apply to mappings f: ℝⁿ → ℝ¹ and do not in general hold for mappings f: ℝⁿ → ℝᵐ, m > 1. We construct the result by treating each element of the vector ĥ individually. Provided each element of the vector ĥ is differentiable on an open convex set, then for any two points x_{k+1}, x_k there exist t₁, t₂, ..., t_j, ... ∈ (0, 1) such that

    W(x_{k+1}, x_k) = [ ∇ĥ¹(x_k + t₁ (x_{k+1} − x_k)), ∇ĥ²(x_k + t₂ (x_{k+1} − x_k)), ... ],    (4.1)

where ∇ĥ^j denotes the gradient of the j-th element of ĥ, and

    ĥ(x_{k+1}) − ĥ(x_k) = Wᵀ(x_{k+1}, x_k) (x_{k+1} − x_k).    (4.2)

Let the matrix Ŵ have the same rank as W(x_{k+1}, x_k) and let a linearly independent subset of the columns of W(x_{k+1}, x_k) form the columns of Ŵ. Thus Ŵ is of full rank and we can form the operator P_k = I − Ŵ (Ŵᵀ Ŵ)⁻¹ Ŵᵀ, which satisfies

    P_k P_k = P_k  and  P_k (x_{k+1} − x_k) = x_{k+1} − x_k.    (4.3)

As {x_k} → x*,

    W(x_{k+1}, x_k) → [ ∇ĥ¹(x*), ∇ĥ²(x*), ... ]    (4.4)

and P_k becomes the operator projecting vectors in ℝⁿ onto the subspace tangent to the active constraints at x*.

Theorem 4.1
Let the assumptions of Theorem 3.3 be satisfied. Then there is a κ > 0 such that if either
(a)
    ‖(∇²f(x_k) − H_k)(x_{k+1} − x_k)‖ / ‖x_{k+1} − x_k‖ ≤ κ,    (4.5)
or
(b) for large k,
    ‖P_k (∇²f(x_k) − H_k) P_k (x_{k+1} − x_k)‖ / ‖x_{k+1} − x_k‖ ≤ κ,    (4.6)
we have {τ_k} → 1.

Remark. Inequality (4.5) requires H_k to be close to ∇²f. This may be satisfied for ∇²f and H_k strictly positive definite, or for some positive definite H_k and ∇²f not positive definite. If ∇²f is not positive definite, and there is no H_k satisfying (4.5), we need to consider (4.6). The latter demands only that the projections of ∇²f and H_k on the constraints be close, and that the projection of ∇²f be positive definite.

Proof. For stepsize (2.5), we can write (3.7) as

    f(x_{k+1}) − f(x_k) ≤ τ_k q_k(x̂_k)
        + τ_k² ∫₀¹ (1 − t) ⟨x̂_k − x_k, [∇²f(x_k(t)) − ∇²f(x_k) + ∇²f(x_k) − H_k](x̂_k − x_k)⟩ dt    (4.7)
    ≤ τ_k q_k(x̂_k) + τ_k² Λ_k ‖x̂_k − x_k‖² + τ_k² ‖x̂_k − x_k‖ ‖(∇²f(x_k) − H_k)(x̂_k − x_k)‖    (4.8)
    ≤ τ_k q_k(x̂_k) [ 1 − τ_k (2 α_k) / ((2 − α_k) m) ( Λ_k + ‖(∇²f(x_k) − H_k)(x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) ]    (4.9)

where now Λ_k = ∫₀¹ (1 − t) ‖∇²f(x_k(t)) − ∇²f(x_k)‖ dt and (4.9) is obtained by invoking Lemma 3.3. The scalar ρ ∈ (0, 1) in (2.5) requires τ_k to satisfy

    ρ ≤ 1 − τ_k (2 α_k) / ((2 − α_k) m) ( Λ_k + ‖(∇²f(x_k) − H_k)(x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) ≤ 1.    (4.10)

As in (3.9)-(3.10), there exists a τ ∈ (0, 1] satisfying (4.10) and hence (2.5). If κ in (4.5) is such that

    (2 β̄) / ((2 − β̄) m) ( Λ_k + κ ) ≤ 1 − ρ    (4.11)

(in view of {x_k} → x*, Λ_k → 0, this defines the number κ), then (4.10) holds with τ_k = 1, and therefore, because q_k(x̂_k) ≤ 0, (2.5) is satisfied with τ_k = 1. If (4.5) cannot be achieved because ∇²f is not positive definite, then the projection of ∇²f − H_k can be used, for large k, by invoking (4.4) in (4.8) to yield

    f(x_{k+1}) − f(x_k) ≤ τ_k q_k(x̂_k) [ 1 − τ_k (2 α_k) / ((2 − α_k) m) ( Λ_k + ‖P_k (∇²f(x_k) − H_k) P_k (x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) ].
(4.12)

Using (4.6) and the same arguments as before, we establish that (2.5) is satisfied with τ_k = 1.

For stepsize (2.6), (3.7) with Lemma 3.1 yields

    f(x_{k+1}) − f(x_k) ≤ τ_k ⟨∇f(x_k), x̂_k − x_k⟩ [ 1 − τ_k α_k ( ½ + ( Λ_k + ‖(∇²f(x_k) − H_k)(x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) / m ) ].    (4.13)

The scalar ρ₂ ∈ (0, ½) in (2.6) requires τ_k to satisfy

    ρ₂ ≤ 1 − τ_k α_k ( ½ + ( Λ_k + ‖(∇²f(x_k) − H_k)(x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) / m ) ≤ 1.    (4.14)

As in (3.11)-(3.12), there exists a τ ∈ (0, 1] satisfying (2.6). If κ in (4.5) is such that

    β̄ ( ½ + ( Λ_k + κ ) / m ) ≤ 1 − ρ₂    (4.15)

(since {x_k} → x*, Λ_k → 0, this defines κ), then (4.14) holds with τ_k = 1, and therefore, because ⟨∇f(x_k), x̂_k − x_k⟩ ≤ 0, (2.6) is satisfied with τ_k = 1. If (4.5) cannot be achieved because ∇²f is not positive definite, then the projection of ∇²f − H_k can be used, for large k, by invoking (4.4) to yield

    f(x_{k+1}) − f(x_k) ≤ τ_k ⟨∇f(x_k), x̂_k − x_k⟩ [ 1 − τ_k α_k ( ½ + ( Λ_k + ‖P_k (∇²f(x_k) − H_k) P_k (x̂_k − x_k)‖ / ‖x̂_k − x_k‖ ) / m ) ].    (4.16)

Using (4.6) and the same arguments as before, we establish that (2.6) is satisfied with τ_k = 1. ∎

Remark. The above proof illustrates that it is easier to attain τ_k = 1 for smaller values of β̄. {τ_k} will accelerate towards unity as {α_k} → 1. Furthermore, if τ_k = 1 while α_k > 1, then reducing α_k to unity in subsequent iterations will increase (2 − α_k) to its largest value, 2 − α_k = 1. Consider, for example, the effect of doing this in (4.10) or (4.14). As (2 − α_k) gets larger, the admissible bound on the right of these inequalities also gets larger. The same also applies to the bound on κ. Thus, conditions (4.5) and (4.6) become easier to satisfy and this makes it easier to maintain τ_k = 1.
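Conditions (4.5)-(4.6) involve the operator P_k = I − Ŵ(ŴᵀŴ)⁻¹Ŵᵀ built from the active constraint gradients. A minimal sketch, with an illustrative full-column-rank Ŵ, checks the two properties in (4.3) that the argument uses.

```python
import numpy as np

def tangent_projector(W):
    # P = I - W (W'W)^{-1} W': orthogonal projector onto the null space of W',
    # i.e. onto the subspace tangent to the active constraints whose
    # gradients form the columns of W (assumed full column rank).
    n = W.shape[0]
    return np.eye(n) - W @ np.linalg.solve(W.T @ W, W.T)

# Illustrative active-constraint gradients (two constraints in R^4).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 2.0]])
P = tangent_projector(W)

assert np.allclose(P @ P, P)          # idempotent: P P = P
assert np.allclose(P @ W, 0.0)        # annihilates the constraint gradients
v = np.array([1.0, -1.0, 0.5, 2.0])
assert np.allclose(W.T @ (P @ v), 0)  # P v lies in the tangent subspace
print("projector checks passed")
```

A step x_{k+1} − x_k along which the active constraints stay equal to zero lies in this null space, which is why P_k leaves it unchanged, as in (4.3).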
The left sides of bounds (4.5) and (4.6) are expected to approach zero if superlinear convergence is to be achieved. Theorem 4.2 below establishes these superlinear convergence conditions. In Chapter 8, this discussion is revisited and further generalised for quasi-Newton algorithms, following the earlier results of Dennis and More (1977), Han (1976) and Powell (1978a), among others.

Lemma 4.1
Let {x_k} → x* and ‖x_{k+1} − x*‖ ≤ e ‖x_k − x*‖, for some e ∈ [0, ∞). Then

    ‖x_{k+1} − x_k‖ ≤ ē ‖x_k − x*‖,    (4.17)

where ē = 1 + e.

Proof. The proof is immediate from the triangle inequality

    ‖x_{k+1} − x_k‖ ≤ ‖x_{k+1} − x*‖ + ‖x_k − x*‖ ≤ (1 + e) ‖x_k − x*‖. ∎

Remark. The hypothesis ‖x_{k+1} − x*‖ ≤ e ‖x_k − x*‖ is always satisfied whenever (4.17) is invoked below.

Definition. Let the sequence {x_k} → x*. If

    lim_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖ = 0,    (4.18)

then {x_k} is convergent at a Q-superlinear rate.

Lemma 4.2
Let {x_k} → x*. Then {x_k} is Q-superlinearly convergent, i.e.

    ‖x_{k+1} − x*‖ ≤ r_k^{(1)} ‖x_k − x*‖;  lim_{k→∞} r_k^{(1)} = 0,

iff ‖x_{k+1} − x_k‖ ≤ r_k ‖x_k − x_{k−1}‖ with lim_{k→∞} r_k = 0.

Proof. We have

    ‖x_k − x*‖ ≤ Σ_{j=k}^{∞} ‖x_{j+1} − x_j‖
              ≤ ( Σ_{t=1}^{∞} σ̄^t ) ‖x_k − x_{k−1}‖
              = ( σ̄ / (1 − σ̄) ) ‖x_k − x_{k−1}‖
              ≤ ( σ̄ / (1 − σ̄) ) { ‖x_k − x*‖ + ‖x_{k−1} − x*‖ }
for some σ̄ ∈ [0, 1) and k sufficiently large, where σ̄ is such that r_j ≤ σ̄, ∀j ≥ K₀, for some integer K₀. As {r_k} → 0, σ̄ (and hence σ̄/(1 − σ̄)) can be made arbitrarily small for k ≥ K₀. Rearranging the above expression yields the required result.

Suppose, conversely, that ‖x_{k+1} − x*‖ ≤ r_k^{(1)} ‖x_k − x*‖, lim_{k→∞} r_k^{(1)} = 0. The desired result is obtained using Lemma 4.1 and

    ‖x_{k+1} − x_k‖ ≤ (1 + r_k^{(1)}) r_{k−1}^{(1)} ‖x_{k−1} − x*‖ ≤ [ (1 + r_k^{(1)}) r_{k−1}^{(1)} / (1 − r_{k−1}^{(1)}) ] ‖x_k − x_{k−1}‖,

since ‖x_k − x_{k−1}‖ ≥ (1 − r_{k−1}^{(1)}) ‖x_{k−1} − x*‖. ∎

Theorem 4.2
Let
(i) the assumptions of Theorem 3.3 be satisfied,
(ii) k be large, such that, by Theorem 4.1, α_k = τ_k = 1.
Then, the sequence {x_k} generated by the GLP algorithm in Section 2 converges at a Q-superlinear rate iff

    lim_{k→∞} ‖P_k (∇²f(x_k) − H_k) P_k (x_{k+1} − x_k)‖ / ‖x_{k+1} − x_k‖ = 0.    (4.19)

Proof. The first order expansion of ∇f(x_k) can be written using (4.4) as

    ∇f(x_k) = ∇f(x_{k−1}) + H_{k−1} (x_k − x_{k−1}) + ∫₀¹ [∇²f(x_{k−1}(t)) − H_{k−1}] P_{k−1} (x_k − x_{k−1}) dt.    (4.20)

The gradient of the objective function in (2.3) is given by the first two terms on the right in (4.20). Thus, for x_{k+1} ∈ X, the inequality

    ⟨∇f(x_{k−1}) + H_{k−1} (x_k − x_{k−1}), x_{k+1} − x_k⟩ = ⟨∇q_{k−1}(x_k), x_{k+1} − x_k⟩ ≥ 0    (4.21)

follows from the optimality of x_k in (2.3), i.e. for

    min { ½ ‖x − x_{k−1} + H_{k−1}⁻¹ ∇f(x_{k−1})‖²_{H_{k−1}} | x ∈ X }.

Also, the inequality

    q_k(x_{k+1}) ≥ ⟨∇f(x_k), x_{k+1} − x_k⟩    (4.22)

follows from the positive definiteness of H_k. For large k, using (2.4) with α_k = τ_k = 1, (4.21)-(4.22) and Lemma 3.3, we have

    (m/2) ‖x_{k+1} − x_k‖² ≤ −q_k(x_{k+1})
        ≤ −⟨∇f(x_k), x_{k+1} − x_k⟩
        ≤ ⟨∇q_{k−1}(x_k) − ∇f(x_k), x_{k+1} − x_k⟩
        = −⟨x_{k+1} − x_k, P_{k−1} (∇²f(x_{k−1}) − H_{k−1}) P_{k−1} (x_k − x_{k−1})⟩
            − ∫₀¹ ⟨x_{k+1} − x_k, (∇²f(x_{k−1}(t)) − ∇²f(x_{k−1})) (x_k − x_{k−1})⟩ dt
        ≤ ‖P_{k−1} (∇²f(x_{k−1}) − H_{k−1}) P_{k−1} (x_k − x_{k−1})‖ ‖x_{k+1} − x_k‖ + ν_{k−1} ‖x_k − x_{k−1}‖ ‖x_{k+1} − x_k‖    (4.23)

where ν_{k−1} = ∫₀¹ ‖∇²f(x_{k−1}(t)) − ∇²f(x_{k−1})‖ dt and, as {x_k} → x*, {ν_k} → 0. Hence, (4.23) yields

    ‖x_{k+1} − x_k‖ ≤ r_k ‖x_k − x_{k−1}‖,

where

    r_k = (2/m) [ ‖P_{k−1} (∇²f(x_{k−1}) − H_{k−1}) P_{k−1} (x_k − x_{k−1})‖ / ‖x_k − x_{k−1}‖ + ν_{k−1} ].

If (4.19) is satisfied, lim_{k→∞} r_k = 0 and Lemma 4.2 yields the desired result. Suppose, conversely, that {x_k} converges Q-superlinearly and hence, by Lemma 4.2, ‖x_{k+1} − x_k‖ ≤ r_k ‖x_k − x_{k−1}‖ with lim_{k→∞} r_k = 0. By (4.23), we have (4.19). ∎

REFERENCES

Allwright, J.C. (1980). A Feasible Direction Algorithm for Convex Optimization: Global Convergence Rates, J Optim Theory Appl.

Apostol, T.M. (1981). Mathematical Analysis, Second Edition, Addison-Wesley, Reading, Massachusetts.

Bertsekas, D.P. (1976). On the Goldstein-Levitin-Polyak Gradient Projection Method, IEEE Trans on Automatic Control, AC-21.
Bertsekas, D.P. (1982). Projected Newton Methods for Optimization Problems with Simple Constraints, SIAM J Control Optim, 20.

Broyden, C.G. (1969). A New Method of Solving Nonlinear Simultaneous Equations, Computer J., 12.

Broyden, C.G. (1970). The Convergence of a Class of Double-Rank Minimisation Algorithms 2. The New Algorithm, J. Inst. Maths. Applics., 6.

Demyanov, V.F. and A.M. Rubinov (1970). Approximate Methods in Optimization Problems, American Elsevier, New York.

Dennis, J.E. and J.J. More (1977). Quasi-Newton Methods, Motivation and Theory, SIAM Review, 19.

Dunn, J.C. (1979). Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals, SIAM J Control Optim, 17.

Dunn, J.C. (1980). Newton's Method and the Goldstein Step-length Rule for Constrained Minimization Problems, SIAM J Control Optim, 18.

Dunn, J.C. (1981). Global and Asymptotic Convergence Rate Estimates for a Class of Projected Gradient Processes, SIAM J Control Optim, 19.

Fiacco, A.V. and G.P. McCormick (1968). Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Wiley, New York.

Fletcher, R. (1970). A New Approach to Variable Metric Algorithms, Computer J., 13.

Gafni, E. and D. Bertsekas (1984). Two-Metric Projection Methods for Constrained Optimization, SIAM J. Control and Optimization, 22.

Goldfarb, D. (1970). A Family of Variable Metric Algorithms Derived by Variational Means, Mathematics of Computation, 24.

Goldstein, A.A. (1964). Convex Programming in Hilbert Space, Bulletin AMS, 70.

Han, S-P. (1976). Superlinearly Convergent Variable Metric Algorithms for General Nonlinear Programming Problems, Math Programming, 11.

Levitin, E.S. and B.T. Polyak (1966). Constrained Minimization Methods, USSR Comp Math and Math Phys, 6, 1-50.

McCormick, G.P. and R.A. Tapia (1972). The Gradient Projection Method Under Mild Differentiability Conditions, SIAM J Control Optim, 10, 93-98.
Ortega, J.M. and W.C. Rheinboldt (1970). Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, London and New York.

Polak, E. (1971). Computational Methods in Optimization, Academic Press, London and New York.

Powell, M.J.D. (1978a). The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations, in: J.B. Rosen, O.L. Mangasarian, K. Ritter (eds), Nonlinear Programming, Academic Press, New York, 27-63.

Powell, M.J.D. (1978b). A Fast Algorithm for Nonlinearly Constrained Optimization Calculations, in: G.A. Watson (ed), Numerical Analysis Proceedings, Dundee 1977, Springer-Verlag, Berlin.

Pshenichny, B.N. and Y.M. Danilin (1978). Numerical Methods in Extremal Problems, Mir Publishers, Moscow.

Rustem, B. (1984). A Class of Superlinearly Convergent Projection Algorithms with Relaxed Stepsizes, Appl Math Optim, 12.

Shanno, D.F. (1970). Conditioning of Quasi-Newton Methods for Function Minimization, Mathematics of Computation, 24.
More informationAdditive results for the generalized Drazin inverse in a Banach algebra
Additive results for the generalized Drazin inverse in a Banach algebra Dragana S. Cvetković-Ilić Dragan S. Djordjević and Yimin Wei* Abstract In this aer we investigate additive roerties of the generalized
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Alied Mathematics htt://jiam.vu.edu.au/ Volume 3, Issue 5, Article 8, 22 REVERSE CONVOLUTION INEQUALITIES AND APPLICATIONS TO INVERSE HEAT SOURCE PROBLEMS SABUROU SAITOH,
More informationSpectral gradient projection method for solving nonlinear monotone equations
Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department
More informationLocation of solutions for quasi-linear elliptic equations with general gradient dependence
Electronic Journal of Qualitative Theory of Differential Equations 217, No. 87, 1 1; htts://doi.org/1.14232/ejqtde.217.1.87 www.math.u-szeged.hu/ejqtde/ Location of solutions for quasi-linear ellitic equations
More informationHEAT AND LAPLACE TYPE EQUATIONS WITH COMPLEX SPATIAL VARIABLES IN WEIGHTED BERGMAN SPACES
Electronic Journal of ifferential Equations, Vol. 207 (207), No. 236,. 8. ISSN: 072-669. URL: htt://ejde.math.txstate.edu or htt://ejde.math.unt.edu HEAT AN LAPLACE TYPE EQUATIONS WITH COMPLEX SPATIAL
More informationLECTURE 7 NOTES. x n. d x if. E [g(x n )] E [g(x)]
LECTURE 7 NOTES 1. Convergence of random variables. Before delving into the large samle roerties of the MLE, we review some concets from large samle theory. 1. Convergence in robability: x n x if, for
More informationApplied Mathematics and Computation
Alied Mathematics and Comutation 217 (2010) 1887 1895 Contents lists available at ScienceDirect Alied Mathematics and Comutation journal homeage: www.elsevier.com/locate/amc Derivative free two-oint methods
More informationε i (E j )=δj i = 0, if i j, form a basis for V, called the dual basis to (E i ). Therefore, dim V =dim V.
Covectors Definition. Let V be a finite-dimensional vector sace. A covector on V is real-valued linear functional on V, that is, a linear ma ω : V R. The sace of all covectors on V is itself a real vector
More informationNOTES. Hyperplane Sections of the n-dimensional Cube
NOTES Edited by Sergei Tabachnikov Hyerlane Sections of the n-dimensional Cube Rolfdieter Frank and Harald Riede Abstract. We deduce an elementary formula for the volume of arbitrary hyerlane sections
More informationSECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS
SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss
More information5 Quasi-Newton Methods
Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min
More information#A45 INTEGERS 12 (2012) SUPERCONGRUENCES FOR A TRUNCATED HYPERGEOMETRIC SERIES
#A45 INTEGERS 2 (202) SUPERCONGRUENCES FOR A TRUNCATED HYPERGEOMETRIC SERIES Roberto Tauraso Diartimento di Matematica, Università di Roma Tor Vergata, Italy tauraso@mat.uniroma2.it Received: /7/, Acceted:
More informationTowards understanding the Lorenz curve using the Uniform distribution. Chris J. Stephens. Newcastle City Council, Newcastle upon Tyne, UK
Towards understanding the Lorenz curve using the Uniform distribution Chris J. Stehens Newcastle City Council, Newcastle uon Tyne, UK (For the Gini-Lorenz Conference, University of Siena, Italy, May 2005)
More informationJournal of Mathematical Analysis and Applications
J. Math. Anal. Al. 44 (3) 3 38 Contents lists available at SciVerse ScienceDirect Journal of Mathematical Analysis and Alications journal homeage: www.elsevier.com/locate/jmaa Maximal surface area of a
More informationMultiplicity of weak solutions for a class of nonuniformly elliptic equations of p-laplacian type
Nonlinear Analysis 7 29 536 546 www.elsevier.com/locate/na Multilicity of weak solutions for a class of nonuniformly ellitic equations of -Lalacian tye Hoang Quoc Toan, Quô c-anh Ngô Deartment of Mathematics,
More informationStochastic integration II: the Itô integral
13 Stochastic integration II: the Itô integral We have seen in Lecture 6 how to integrate functions Φ : (, ) L (H, E) with resect to an H-cylindrical Brownian motion W H. In this lecture we address the
More information2 K. ENTACHER 2 Generalized Haar function systems In the following we x an arbitrary integer base b 2. For the notations and denitions of generalized
BIT 38 :2 (998), 283{292. QUASI-MONTE CARLO METHODS FOR NUMERICAL INTEGRATION OF MULTIVARIATE HAAR SERIES II KARL ENTACHER y Deartment of Mathematics, University of Salzburg, Hellbrunnerstr. 34 A-52 Salzburg,
More informationApproximation of the Euclidean Distance by Chamfer Distances
Acta Cybernetica 0 (0 399 47. Aroximation of the Euclidean Distance by Chamfer Distances András Hajdu, Lajos Hajdu, and Robert Tijdeman Abstract Chamfer distances lay an imortant role in the theory of
More informationIteration with Stepsize Parameter and Condition Numbers for a Nonlinear Matrix Equation
Electronic Journal of Linear Algebra Volume 34 Volume 34 2018) Article 16 2018 Iteration with Stesize Parameter and Condition Numbers for a Nonlinear Matrix Equation Syed M Raza Shah Naqvi Pusan National
More informationON THE SET a x + b g x (mod p) 1 Introduction
PORTUGALIAE MATHEMATICA Vol 59 Fasc 00 Nova Série ON THE SET a x + b g x (mod ) Cristian Cobeli, Marian Vâjâitu and Alexandru Zaharescu Abstract: Given nonzero integers a, b we rove an asymtotic result
More informationCOMPARISON OF VARIOUS OPTIMIZATION TECHNIQUES FOR DESIGN FIR DIGITAL FILTERS
NCCI 1 -National Conference on Comutational Instrumentation CSIO Chandigarh, INDIA, 19- March 1 COMPARISON OF VARIOUS OPIMIZAION ECHNIQUES FOR DESIGN FIR DIGIAL FILERS Amanjeet Panghal 1, Nitin Mittal,Devender
More informationNotes on duality in second order and -order cone otimization E. D. Andersen Λ, C. Roos y, and T. Terlaky z Aril 6, 000 Abstract Recently, the so-calle
McMaster University Advanced Otimization Laboratory Title: Notes on duality in second order and -order cone otimization Author: Erling D. Andersen, Cornelis Roos and Tamás Terlaky AdvOl-Reort No. 000/8
More informationStep lengths in BFGS method for monotone gradients
Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly
More informationEstimation of the large covariance matrix with two-step monotone missing data
Estimation of the large covariance matrix with two-ste monotone missing data Masashi Hyodo, Nobumichi Shutoh 2, Takashi Seo, and Tatjana Pavlenko 3 Deartment of Mathematical Information Science, Tokyo
More informationMATH 6210: SOLUTIONS TO PROBLEM SET #3
MATH 6210: SOLUTIONS TO PROBLEM SET #3 Rudin, Chater 4, Problem #3. The sace L (T) is searable since the trigonometric olynomials with comlex coefficients whose real and imaginary arts are rational form
More informationImproved Damped Quasi-Newton Methods for Unconstrained Optimization
Improved Damped Quasi-Newton Methods for Unconstrained Optimization Mehiddin Al-Baali and Lucio Grandinetti August 2015 Abstract Recently, Al-Baali (2014) has extended the damped-technique in the modified
More informationOn the Square-free Numbers in Shifted Primes Zerui Tan The High School Attached to The Hunan Normal University November 29, 204 Abstract For a fixed o
On the Square-free Numbers in Shifted Primes Zerui Tan The High School Attached to The Hunan Normal University, China Advisor : Yongxing Cheng November 29, 204 Page - 504 On the Square-free Numbers in
More informationON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS
#A13 INTEGERS 14 (014) ON THE LEAST SIGNIFICANT ADIC DIGITS OF CERTAIN LUCAS NUMBERS Tamás Lengyel Deartment of Mathematics, Occidental College, Los Angeles, California lengyel@oxy.edu Received: 6/13/13,
More informationSystem Reliability Estimation and Confidence Regions from Subsystem and Full System Tests
009 American Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June 0-, 009 FrB4. System Reliability Estimation and Confidence Regions from Subsystem and Full System Tests James C. Sall Abstract
More informationA Social Welfare Optimal Sequential Allocation Procedure
A Social Welfare Otimal Sequential Allocation Procedure Thomas Kalinowsi Universität Rostoc, Germany Nina Narodytsa and Toby Walsh NICTA and UNSW, Australia May 2, 201 Abstract We consider a simle sequential
More informationEXISTENCE AND UNIQUENESS OF SOLUTIONS FOR NONLOCAL p-laplacian PROBLEMS
Electronic Journal of ifferential Equations, Vol. 2016 (2016), No. 274,. 1 9. ISSN: 1072-6691. URL: htt://ejde.math.txstate.edu or htt://ejde.math.unt.edu EXISTENCE AN UNIQUENESS OF SOLUTIONS FOR NONLOCAL
More informationLecture 10: Hypercontractivity
CS 880: Advanced Comlexity Theory /15/008 Lecture 10: Hyercontractivity Instructor: Dieter van Melkebeek Scribe: Baris Aydinlioglu This is a technical lecture throughout which we rove the hyercontractivity
More informationThe analysis and representation of random signals
The analysis and reresentation of random signals Bruno TOÉSNI Bruno.Torresani@cmi.univ-mrs.fr B. Torrésani LTP Université de Provence.1/30 Outline 1. andom signals Introduction The Karhunen-Loève Basis
More informationA new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints
Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality
More informationAN EIGENVALUE STUDY ON THE SUFFICIENT DESCENT PROPERTY OF A MODIFIED POLAK-RIBIÈRE-POLYAK CONJUGATE GRADIENT METHOD S.
Bull. Iranian Math. Soc. Vol. 40 (2014), No. 1, pp. 235 242 Online ISSN: 1735-8515 AN EIGENVALUE STUDY ON THE SUFFICIENT DESCENT PROPERTY OF A MODIFIED POLAK-RIBIÈRE-POLYAK CONJUGATE GRADIENT METHOD S.
More informationPETER J. GRABNER AND ARNOLD KNOPFMACHER
ARITHMETIC AND METRIC PROPERTIES OF -ADIC ENGEL SERIES EXPANSIONS PETER J. GRABNER AND ARNOLD KNOPFMACHER Abstract. We derive a characterization of rational numbers in terms of their unique -adic Engel
More informationStatistics 580 Optimization Methods
Statistics 580 Optimization Methods Introduction Let fx be a given real-valued function on R p. The general optimization problem is to find an x ɛ R p at which fx attain a maximum or a minimum. It is of
More informationNumerical Linear Algebra
Numerical Linear Algebra Numerous alications in statistics, articularly in the fitting of linear models. Notation and conventions: Elements of a matrix A are denoted by a ij, where i indexes the rows and
More informationA numerical implementation of a predictor-corrector algorithm for sufcient linear complementarity problem
A numerical imlementation of a redictor-corrector algorithm for sufcient linear comlementarity roblem BENTERKI DJAMEL University Ferhat Abbas of Setif-1 Faculty of science Laboratory of fundamental and
More informationSOME TRACE INEQUALITIES FOR OPERATORS IN HILBERT SPACES
Kragujevac Journal of Mathematics Volume 411) 017), Pages 33 55. SOME TRACE INEQUALITIES FOR OPERATORS IN HILBERT SPACES SILVESTRU SEVER DRAGOMIR 1, Abstract. Some new trace ineualities for oerators in
More informationUniformly best wavenumber approximations by spatial central difference operators: An initial investigation
Uniformly best wavenumber aroximations by satial central difference oerators: An initial investigation Vitor Linders and Jan Nordström Abstract A characterisation theorem for best uniform wavenumber aroximations
More informationSECTION 5: FIBRATIONS AND HOMOTOPY FIBERS
SECTION 5: FIBRATIONS AND HOMOTOPY FIBERS In this section we will introduce two imortant classes of mas of saces, namely the Hurewicz fibrations and the more general Serre fibrations, which are both obtained
More informationarxiv:math/ v4 [math.gn] 25 Nov 2006
arxiv:math/0607751v4 [math.gn] 25 Nov 2006 On the uniqueness of the coincidence index on orientable differentiable manifolds P. Christoher Staecker October 12, 2006 Abstract The fixed oint index of toological
More information#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS
#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS Ramy F. Taki ElDin Physics and Engineering Mathematics Deartment, Faculty of Engineering, Ain Shams University, Cairo, Egyt
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationPositive decomposition of transfer functions with multiple poles
Positive decomosition of transfer functions with multile oles Béla Nagy 1, Máté Matolcsi 2, and Márta Szilvási 1 Deartment of Analysis, Technical University of Budaest (BME), H-1111, Budaest, Egry J. u.
More informationEstimating function analysis for a class of Tweedie regression models
Title Estimating function analysis for a class of Tweedie regression models Author Wagner Hugo Bonat Deartamento de Estatística - DEST, Laboratório de Estatística e Geoinformação - LEG, Universidade Federal
More informationSearch Directions for Unconstrained Optimization
8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All
More informationUniform Law on the Unit Sphere of a Banach Space
Uniform Law on the Unit Shere of a Banach Sace by Bernard Beauzamy Société de Calcul Mathématique SA Faubourg Saint Honoré 75008 Paris France Setember 008 Abstract We investigate the construction of a
More informationPOINTS ON CONICS MODULO p
POINTS ON CONICS MODULO TEAM 2: JONGMIN BAEK, ANAND DEOPURKAR, AND KATHERINE REDFIELD Abstract. We comute the number of integer oints on conics modulo, where is an odd rime. We extend our results to conics
More informationExtremal Polynomials with Varying Measures
International Mathematical Forum, 2, 2007, no. 39, 1927-1934 Extremal Polynomials with Varying Measures Rabah Khaldi Deartment of Mathematics, Annaba University B.P. 12, 23000 Annaba, Algeria rkhadi@yahoo.fr
More information216 S. Chandrasearan and I.C.F. Isen Our results dier from those of Sun [14] in two asects: we assume that comuted eigenvalues or singular values are
Numer. Math. 68: 215{223 (1994) Numerische Mathemati c Sringer-Verlag 1994 Electronic Edition Bacward errors for eigenvalue and singular value decomositions? S. Chandrasearan??, I.C.F. Isen??? Deartment
More informationLecture 3 January 16
Stats 3b: Theory of Statistics Winter 28 Lecture 3 January 6 Lecturer: Yu Bai/John Duchi Scribe: Shuangning Li, Theodor Misiakiewicz Warning: these notes may contain factual errors Reading: VDV Chater
More informationRotations in Curved Trajectories for Unconstrained Minimization
Rotations in Curved rajectories for Unconstrained Minimization Alberto J Jimenez Mathematics Deartment, California Polytechnic University, San Luis Obiso, CA, USA 9407 Abstract Curved rajectories Algorithm
More informationMethods for Unconstrained Optimization Numerical Optimization Lectures 1-2
Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods
More informationA Note on Massless Quantum Free Scalar Fields. with Negative Energy Density
Adv. Studies Theor. Phys., Vol. 7, 13, no. 1, 549 554 HIKARI Ltd, www.m-hikari.com A Note on Massless Quantum Free Scalar Fields with Negative Energy Density M. A. Grado-Caffaro and M. Grado-Caffaro Scientific
More informationChapter 7: Special Distributions
This chater first resents some imortant distributions, and then develos the largesamle distribution theory which is crucial in estimation and statistical inference Discrete distributions The Bernoulli
More informationOn Nonlinear Polynomial Selection and Geometric Progression (mod N) for Number Field Sieve
On onlinear Polynomial Selection and Geometric Progression (mod ) for umber Field Sieve amhun Koo, Gooc Hwa Jo, and Soonhak Kwon Email: komaton@skku.edu, achimheasal@nate.com, shkwon@skku.edu Det. of Mathematics,
More informationRadial Basis Function Networks: Algorithms
Radial Basis Function Networks: Algorithms Introduction to Neural Networks : Lecture 13 John A. Bullinaria, 2004 1. The RBF Maing 2. The RBF Network Architecture 3. Comutational Power of RBF Networks 4.
More information