The iterative convex minorant algorithm for nonparametric estimation


The iterative convex minorant algorithm for nonparametric estimation

Report

Geurt Jongbloed

Technische Universiteit Delft / Delft University of Technology
Faculteit der Technische Wiskunde en Informatica / Faculty of Technical Mathematics and Informatics

Copyright © 1995 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands. Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 32, 2628 BL Delft. A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site, in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl.

The iterative convex minorant algorithm for nonparametric estimation

By Geurt Jongbloed
Department of Mathematics, Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands

Abstract. The problem of minimizing a smooth convex function over a basic cone in ℝ^n is frequently encountered in nonparametric statistics. For that type of problem we suggest an algorithm and show that this algorithm converges to the solution of the minimization problem.

1 Introduction

Groeneboom & Wellner (1992) introduce the iterative convex minorant (ICM) algorithm to compute nonparametric maximum likelihood estimators (NPMLEs) for distribution functions in some statistical inverse problems. Using the specific structure of the interval censoring case II problem, Aragon & Eberly (1992) show the ICM algorithm to be locally convergent under the assumption that the points of jump of the NPMLE are known in advance. Determining these points of jump is, however, the main part of the problem. In this paper we describe the ICM algorithm in its general form, show that in general it does not converge, and propose a modified version that does converge under mild conditions.

The ICM algorithm is tailored for minimizing a smooth convex function over one of the cones C or C+ in ℝ^n, which are defined by

    C = { x ∈ ℝ^n : x_1 ≤ x_2 ≤ … ≤ x_n }  and  C+ = { x ∈ C : x_1 ≥ 0 }.    (1)

Although this problem might seem rather specific, it is in fact very general. Convex optimization problems over more general finitely generated closed convex cones K in ℝ^n arising in statistics can often be rewritten in terms of one of the cones C or C+. Examples of estimation problems where the algorithm can be applied to compute the NPMLE are interval censoring case II, deconvolution, and Wicksell's problem (see also Jongbloed (1995)). Another example where it can be applied, maximum likelihood estimation of a convex decreasing density, is given in section 5.
Also, least squares estimators for a convex regression function can be computed by means of the ICM algorithm.

AMS 1991 subject classifications. Primary 65U05; secondary 62G05.
Key words and phrases. Global convergence, inverse problems, isotonic regression.
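The elementary subproblem underlying all of the examples above, minimizing a quadratic with diagonal weights over the cone C, is the classical (weighted) isotonic regression problem, solvable by the pool-adjacent-violators algorithm. The following Python sketch (function names are ours, not from the report) illustrates it; the projection onto C+ simply clips negative components to zero, as noted in section 2 below.

```python
def isotonic_projection(y, w):
    """Minimize 0.5 * sum_i w_i * (x_i - y_i)**2 over the cone
    C = {x : x_1 <= ... <= x_n} by pool-adjacent-violators (PAVA).
    The result equals the vector of left derivatives of the greatest
    convex minorant of the cumulative sum diagram built from (w, w*y)."""
    blocks = []  # each block: [weighted mean, total weight, point count]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # pool adjacent blocks while monotonicity x_1 <= ... <= x_n fails
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m1, w1, n1 = blocks.pop()
            m0, w0, n0 = blocks.pop()
            blocks.append([(w0 * m0 + w1 * m1) / (w0 + w1), w0 + w1, n0 + n1])
    x = []
    for m, _, n in blocks:
        x.extend([m] * n)
    return x


def isotonic_projection_nonneg(y, w):
    """Same problem over C+ = {x in C : x_1 >= 0}: clip negative parts."""
    return [max(v, 0.0) for v in isotonic_projection(y, w)]
```

For instance, isotonic_projection([0, -2], [1, 1]) pools the two violating components into their weighted mean and returns [-1.0, -1.0].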

For some of these examples other algorithms have also been proposed. For censoring problems, the expectation maximization (EM) algorithm (see e.g. Dempster et al. (1977), and Wu (1983) for a convergence proof of this algorithm) is frequently used. The experience with this algorithm is that it converges rather slowly to the solution of the optimization problem. Recently, for censoring problems, a combination of the EM and ICM algorithms was proposed in Zhan & Wellner (1995); simulation results indicate that this hybrid algorithm behaves very well for the double censoring model. General optimization techniques such as interior point methods have also been applied to some statistical estimation problems; see e.g. Terlaky & Vial (1995).

Some known results from optimization theory and the theory of isotonic regression are reviewed in section 2. In section 3 we show that in general the ICM algorithm does not converge to the solution of the minimization problem, and we give a modified ICM algorithm in pseudo code. For this modified algorithm we prove a global convergence result in section 4. Finally, in section 5, we compute the maximum likelihood estimator of a convex and decreasing density on [0, ∞) using the modified ICM algorithm. Additionally, a useful lemma is proved which states that the ICM algorithm can also be used to maximize loglikelihood-type functions over the intersection of a closed convex cone and a hyperplane in ℝ^n.

2 Review of some known results

Let K be a cone in ℝ^n and let φ satisfy

Condition 1: φ : ℝ^n → (−∞, ∞] is
(1) convex and attains its minimum over K at a unique point x̂,
(2) continuous,
(3) continuously differentiable on the set { x ∈ ℝ^n : φ(x) < ∞ }.

Writing ∇φ for the vector of partial derivatives,

    ∇φ(x) = ( ∂φ/∂x_1 (x), …, ∂φ/∂x_n (x) )^T,

and (· , ·) for the usual inner product in ℝ^n, it is known from e.g. Robertson et al.
(1988), section 6.2, that

    x̂ = argmin_{x ∈ K} φ(x)

if and only if x̂ ∈ K satisfies

    ∀x ∈ K : (x, ∇φ(x̂)) ≥ 0  and  (x̂, ∇φ(x̂)) = 0.    (2)

Taking for K one of the cones C or C+ as defined in (1), and for φ the quadratic function

    q(x) = 1/2 (x − y)^T W (x − y)

for some fixed y ∈ ℝ^n and positive definite diagonal matrix W = diag(w_i), the optimality conditions in (2) have a nice geometric interpretation. Indeed, for K = C, x̂_i is the left derivative of the greatest convex minorant of the cumulative sum diagram consisting of the points P_0 = (0, 0) and

    P_j = ( Σ_{l=1}^j w_l , Σ_{l=1}^j w_l y_l )  for 1 ≤ j ≤ n,

evaluated at the point P_i. If K = C+, the only difference is that the negative components of x̂ should be changed to zero. This geometric interpretation of the optimality conditions, when the object function is a quadratic form with a diagonal matrix of second order derivatives and the cone is C or C+, is the backbone of the theory of isotonic regression as it can be found in Barlow et al. (1972) and Robertson et al. (1988).

Let x^(0) ∈ C be fixed and let k = 0. The idea behind the ICM algorithm is to approximate the convex function φ locally near x^(k) by a quadratic form of the type

    q(x; x^(k)) = 1/2 ( x − x^(k) + W(x^(k))^{−1} ∇φ(x^(k)) )^T W(x^(k)) ( x − x^(k) + W(x^(k))^{−1} ∇φ(x^(k)) ),

where W(x^(k)) is a positive definite diagonal matrix depending on x^(k). The next iterate x^(k+1) is then defined as the minimizer of q(· ; x^(k)) over C. Incrementing k by one and repeating the procedure gives the iterative algorithm. Since x^(k+1) can be determined by taking the left derivative of the greatest convex minorant of the cumulative sum diagram consisting of the points P_0 = (0, 0) and

    P_j = ( Σ_{l=1}^j w_l^(k) , Σ_{l=1}^j [ w_l^(k) x_l^(k) − ∂φ/∂x_l (x^(k)) ] )  for 1 ≤ j ≤ n,

where w_l^(k) denotes the l-th diagonal entry of W(x^(k)), the name iterative convex minorant algorithm is justified.

3 Description of the algorithm

In this section K denotes one of the cones C and C+ rather than a general cone in ℝ^n, and φ satisfies Condition 1. An iterative optimization algorithm to approximate

    x̂ = argmin_{y ∈ K} φ(y)

is properly specified by an initial point x^(0) ∈ K, an algorithmic map A, and a termination criterion. An algorithmic map is a mapping x ↦
A(x) defined on K and taking values in the class of nonempty subsets of K. The algorithm can then be formulated as: k := 0; while the stopping criterion is not satisfied, x^(k+1) ∈ A(x^(k)) and k := k + 1. Once a continuous mapping x ↦ W(x) from K to the class of positive definite diagonal matrices (equipped with its usual matrix norm) is specified, the algorithmic map associated with the ICM algorithm is given by

    B(x) = argmin_{y ∈ K} 1/2 ( y − x + W(x)^{−1} ∇φ(x) )^T W(x) ( y − x + W(x)^{−1} ∇φ(x) ),

where we adopt the convention to leave out the curly brackets when the set returned by an algorithmic map is a singleton. Note that, by the continuity of x ↦ W(x) and Condition 1, the mapping B is continuous at each point x where φ(x) < ∞.

Taking φ(x) = x_2 − x_1 + x_1^2 + x_2^2, K = C, x^(0) = (1, 1)^T and W ≡ I, the identity matrix, it follows that an algorithm based on B does in general not converge. (Indeed, x^(k) = (1, 1)^T for k even and x^(k) = (−1, −1)^T for k odd in this example.) Moreover, it may happen that the value of φ at some iterate is infinite, so that the algorithm is not even well defined. However, and this we will use when we define the modified ICM algorithm, the algorithmic map B generates a direction of descent for φ at each x ∈ K \ {x̂} such that φ(x) < ∞. This result is stated in Lemma 1.

Lemma 1  Let φ satisfy Condition 1 and let x ∈ K \ {x̂} satisfy φ(x) < ∞. Then

    φ( x + λ(B(x) − x) ) < φ(x)

for all λ > 0 sufficiently small.

Proof: Fix x ∈ K \ {x̂} with φ(x) < ∞ and define the function ψ on [0, 1] as follows:

    ψ(λ) = φ( x + λ(B(x) − x) ).

It suffices to show that the right derivative of ψ at zero,

    ψ′(0) = (B(x) − x)^T ∇φ(x),

is strictly negative. From the definition of B(x) and the fact that x ∈ K, it follows by (2) that

    (B(x), W(x)(B(x) − x) + ∇φ(x)) = 0    (3)

and

    (x, W(x)(B(x) − x) + ∇φ(x)) ≥ 0.    (4)

Subtracting (4) from (3) we see that

    (B(x) − x, W(x)(B(x) − x)) + ψ′(0) ≤ 0.    (5)

Note that the assumption x ≠ x̂ implies that x ≠ B(x). Therefore, since W(x) is positive definite, the first term at the left hand side of (5) is strictly positive, so that ψ′(0) < 0, as was to be shown. □

Using Lemma 1 we can construct an algorithm that converges to x̂. The idea behind this modified iterative convex minorant algorithm is to select a point x^(k+1) from the segment

    seg( x^(k), B(x^(k)) ) = { x^(k) + λ(B(x^(k)) − x^(k)) : λ ∈ [0, 1] }

such that the value of φ decreases sufficiently when moving from x^(k) to x^(k+1). One way to formalize this idea is to define the algorithmic map C by

    C(x) = { B(x) }  if φ(B(x)) < φ(x) + (1 − ε) ∇φ(x)^T (B(x) − x),
    C(x) = { y ∈ seg(x, B(x)) : (1 − ε) ∇φ(x)^T (y − x) ≤ φ(y) − φ(x) ≤ ε ∇φ(x)^T (y − x) }  otherwise,    (6)

where ε ∈ (0, 1/2) is fixed. See Figure 1 for the idea behind the definition of C.

[Figure 1: The three possible forms of the set returned by the algorithmic map C, in the parametrization ψ(λ) = φ(x + λ(B(x) − x)).]

To completely specify the algorithm we should fix an initial point for the algorithm, a rule to determine x^(k+1) from C(x^(k)), and a termination criterion. As an initial point we take any x^(0) ∈ K with φ(x^(0)) < ∞. As a rule to choose x^(k+1) from C(x^(k)) we propose to choose x^(k+1) = B(x^(k)) whenever it belongs to C(x^(k)), and otherwise perform a binary search for an element of C(x^(k)) in the segment seg(x^(k), B(x^(k))). See the pseudo code below for an exact description of this binary search, which can easily be seen to terminate after a finite number of steps. Finally, we base our stopping criterion on (2), where we use that for C the inequality part of (2) is equivalent to

    Σ_{i=j}^n ∂φ/∂x_i (x̂) ≥ 0  for 1 ≤ j ≤ n,  with equality for j = 1.

Below we give a formal description of the algorithm obtained in this way (K = C).

Modified iterative convex minorant algorithm

Input: η > 0: accuracy parameter;
       ε ∈ (0, 1/2): line search parameter;
       x^(0) ∈ K: initial point satisfying φ(x^(0)) < ∞;

begin
  x := x^(0);
  while | Σ_{i=1}^n ∂φ/∂x_i (x) | > η  or  min_{1≤j≤n} Σ_{i=j}^n ∂φ/∂x_i (x) < −η  do
  begin
    ỹ := argmin_{y ∈ K} ( y − x + W(x)^{−1} ∇φ(x) )^T W(x) ( y − x + W(x)^{−1} ∇φ(x) );
    if φ(ỹ) < φ(x) + (1 − ε) ∇φ(x)^T (ỹ − x) then x := ỹ
    else
    begin
      λ := 1; s := 1/2; z := ỹ;
      while φ(z) < φ(x) + (1 − ε) ∇φ(x)^T (z − x)   (I)
         or φ(z) > φ(x) + ε ∇φ(x)^T (z − x)   (II)  do
      begin
        if (I) then λ := λ + s;
        if (II) then λ := λ − s;
        z := x + λ(ỹ − x);
        s := s/2;
      end;
      x := z;
    end;
  end;
end;

If the algorithm is used to minimize φ over C+, then C should be replaced by C+ throughout the algorithm and the second condition in the first while statement should be removed. In the next section we prove that under mild conditions the modified ICM algorithm generates a sequence x^(k) such that x^(k) → x̂ for k → ∞.

4 Convergence of the modified ICM algorithm

To prove that the modified ICM algorithm converges to the point x̂ we will use a general convergence theorem (cf. Bazaraa et al. (1993) or Zangwill (1969); curiously enough, this theorem is also used in Wu (1983) to prove global convergence of the EM algorithm). This theorem assures convergence of the algorithm based on an algorithmic map A under three conditions. The first is that the sequence of iterates generated by the algorithm is contained in a compact subset K̄ of K. The second is that there exists a descent function, which is a continuous function φ on K such that φ(y) < φ(x) for all y ∈ A(x), whenever x ≠ x̂. The third condition is that the algorithmic map A is closed. This means that if (x_k) and (y_k) are sequences in K satisfying x_k → x, y_k ∈ A(x_k) and y_k → y, then necessarily y ∈ A(x).

Theorem 1  Let the function φ : ℝ^n → (−∞, ∞] satisfy Condition 1 and let x^(0) ∈ K satisfy φ(x^(0)) < ∞. Let the mapping x ↦ W(x) take values in the set of positive definite (n × n)

diagonal matrices such that x ↦ W(x) is continuous on the set

    K̄ = { x ∈ K : φ(x) ≤ φ(x^(0)) }.    (7)

Then an algorithm generated by the mapping C, as defined in (6), converges to x̂.

Proof: From Lemma 1 it follows that the mapping C is well defined and has φ as a descent function: for all x ≠ x̂ and for all y ∈ C(x), φ(y) < φ(x). From this observation it follows that

    { x^(k) : k ≥ 0 } ⊂ K̄,

where K̄ is as defined in (7). From Condition 1 (1) and (2) and the fact that φ(x^(0)) < ∞, it follows that K̄ is compact. Therefore, in view of the remarks made above, closedness of C at each x ∈ K̄ \ {x̂} would imply global convergence of the algorithm.

Fix x ∈ K̄ \ {x̂} and a sequence (x_k) in K̄ such that x_k → x. Let y_k ∈ C(x_k) with y_k → y for some y ∈ K. To prove closedness of C we have to prove that y ∈ C(x). First note that continuity of the mapping x ↦ W(x) on K̄ and Condition 1 (3) yield that

    B(x_k) → B(x)  and  ∇φ(x_k) → ∇φ(x)    (8)

as k → ∞. From this it follows that necessarily y ∈ seg(x, B(x)). Now consider the two different situations that can occur. The first situation is that

    φ(B(x_k)) < φ(x_k) + (1 − ε) ∇φ(x_k)^T (B(x_k) − x_k)

for infinitely many values of k. Letting k tend to infinity along a subsequence (k_j) where this inequality holds, we get from (8) that

    φ(B(x)) ≤ φ(x) + (1 − ε) ∇φ(x)^T (B(x) − x),

so that B(x) ∈ C(x). Moreover, along the same subsequence it follows from the definition of C that y_{k_j} = B(x_{k_j}). Therefore, for j → ∞, y_{k_j} → B(x) by the continuity of B. This shows that y = B(x) ∈ C(x), as was to be proved.

The other possibility is that for all k sufficiently large

    φ(B(x_k)) > φ(x_k) + (1 − ε) ∇φ(x_k)^T (B(x_k) − x_k).

Letting k → ∞ and using (8), it then follows that

    φ(B(x)) ≥ φ(x) + (1 − ε) ∇φ(x)^T (B(x) − x).

Therefore, according to the definition of C and the fact that y ∈ seg(x, B(x)), y ∈ C(x) whenever

    φ(y) − φ(x) ∈ [ (1 − ε) ∇φ(x)^T (y − x), ε ∇φ(x)^T (y − x) ].

This, however, immediately follows from the fact that for all k sufficiently large

    φ(y_k) − φ(x_k) ∈ [ (1 − ε) ∇φ(x_k)^T (y_k − x_k), ε ∇φ(x_k)^T (y_k − x_k) ],

together with x_k → x, y_k → y and ∇φ(x_k) → ∇φ(x). □

5 Example

Let z_1 < z_2 < … < z_n denote an ordered realization of a sample from a density g on [0, ∞) which is known to be convex and decreasing, and define z_{−1} = z_0 = 0. Consider the problem of estimating g from the data. This estimation problem can be found in Hampel (1987). In Groeneboom & Jongbloed (1995) a sieved nonparametric maximum likelihood estimator for g is defined. This estimator is defined as the maximizer of the function

    g ↦ (1/n) Σ_{i=0}^{n−1} log g(z_i)

over the class of convex decreasing densities g on [0, ∞) which are piecewise linear such that all the jumps in the derivative of g are concentrated at the observation points. Therefore, defining x_i = g(z_{n−i}) for 0 ≤ i ≤ n, we see that this class of densities can be identified with the intersection of the closed convex cone K,

    K = { x ∈ ℝ^n : x ≥ 0 and (x_i − x_{i−1})/(z_{n−i+1} − z_{n−i}) ≤ (x_{i+1} − x_i)/(z_{n−i} − z_{n−i−1}) for 1 ≤ i ≤ n−1 },    (9)

in ℝ^n and the affine subspace A,

    A = { x ∈ ℝ^n : (1/2) Σ_{i=1}^n x_i (z_{n−i+1} − z_{n−i−1}) = 1 },

in ℝ^n, which takes into account the fact that densities integrate to one. The problem of determining the maximum likelihood estimator is therefore equivalent to the problem of determining

    x̂ = argmin_{x ∈ K ∩ A} − Σ_{i=1}^n log x_i.

Lemma 2 shows that this problem is equivalent to minimizing a smooth strictly convex function over the whole cone K rather than over the intersection of K with the affine subspace A.

Lemma 2  Let K be a cone in ℝ^n and let the function φ be defined by

    φ(x) = − Σ_{i=1}^n log x_i.

Let c ≠ 0 be a vector in ℝ^n and A the affine subset of ℝ^n given by A = { x ∈ ℝ^n : c^T x = α } for some given α ≠ 0. Then

    argmin_{K ∩ A} φ(x) = argmin_K ( φ(x) + (n/α) c^T x ).

Proof: Include the linear restriction c^T x = α in the object function via a Lagrange multiplier λ, obtaining the function

    φ_λ(x) = φ(x) + λ(c^T x − α).

On K ∩ A this function coincides with φ. When x̂_λ minimizes φ_λ over K and c^T x̂_λ = α, then x̂_λ evidently minimizes φ over K ∩ A. From the structure of φ_λ together with the equality part of (2), it follows that

    Σ_{i=1}^n x̂_{λ,i} ( −1/x̂_{λ,i} + λ c_i ) = 0,

so that we have

    c^T x̂_λ = n/λ.

Therefore, taking λ = n/α, it is clear that

    argmin_{A ∩ K} φ(x) = argmin_K φ_{n/α}(x),

which was to be proved. □

According to this lemma,

    x̂ = argmin_{x ∈ K} Σ_{i=1}^n { − log x_i + (n/2) x_i (z_{n−i+1} − z_{n−i−1}) }.

Noting that x ∈ K if and only if

    x_i = Σ_{j=1}^i (z_{n−j+1} − z_{n−j}) y_j  for some y ∈ C+,    (10)

we see that determining x̂ is equivalent to determining

    ŷ = argmin_{y ∈ C+} ψ(y),

where

    ψ(y) = Σ_{i=1}^n { − log( Σ_{j=1}^i (z_{n−j+1} − z_{n−j}) y_j ) + (n/2) (z_{n−i+1} − z_{n−i−1}) Σ_{j=1}^i (z_{n−j+1} − z_{n−j}) y_j }
         = Σ_{i=1}^n { − log( Σ_{j=1}^i (z_{n−j+1} − z_{n−j}) y_j ) + (n/2) y_i (z_{n−i+1}^2 − z_{n−i}^2) },    (11)

the second equality following by interchanging the order of summation and telescoping, using z_{−1} = z_0 = 0.

Figure 2 shows the maximum likelihood estimate of a convex decreasing density on [0, ∞), based on a sample of size n = 1000 from the density

    g(z) = 3(1 − z)^2 1_{[0,1]}(z),

computed by the modified ICM algorithm. We used the settings η = 10^{−5}, ε = 0.1, y_i^(0) = 1/2 (1 ≤ i ≤ n), and the weights

    w(y)_i = (z_{n−i+1} − z_{n−i})^2 Σ_{j=i}^n x_j^{−2}  (1 ≤ i ≤ n),

where x depends on y as in (10). On a NeXTSTEP machine the algorithm stopped after 105 iterations.

[Figure 2: Maximum likelihood estimator of the density based on a sample of size 1000; the dashed curve is the underlying density.]

References

[1] Aragon, J. and Eberly, D. (1992). On convergence of convex minorant algorithms for distribution estimation with interval-censored data. J. Comput. Graph. Statist.

[2] Barlow, R.E., Bartholomew, D.J., Bremner, J.M. and Brunk, H.D. (1972). Statistical inference under order restrictions. Wiley, New York.

[3] Bazaraa, M.S., Sherali, H.D. and Shetty, C.M. (1993). Nonlinear programming: theory and algorithms. Wiley, New York.

[4] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. B 39, 1-38.

[5] Groeneboom, P. and Jongbloed, G. (1995). Maximum likelihood estimation of a convex decreasing density. In preparation.

[6] Groeneboom, P. and Wellner, J.A. (1992). Information bounds and nonparametric maximum likelihood estimation. Birkhäuser, Basel.

[7] Hampel, F.R. (1987). Design, modelling, and analysis of some biological data sets. In C.L. Mallows, editor, Design, data and analysis, by some friends of Cuthbert Daniel, Wiley, New York.

[8] Jongbloed, G. (1995). Three statistical inverse problems. Ph.D. thesis, Delft University of Technology, The Netherlands.

[9] Robertson, T., Wright, F.T. and Dykstra, R.L. (1988). Order restricted statistical inference. Wiley, New York.

[10] Terlaky, T. and Vial, J.-Ph. (1995). Maximum likelihood estimation of convex density functions. Technical Report 95-49, Department of Mathematics, Delft University of Technology.

[11] Wu, C.F.J. (1983). On the convergence properties of the EM algorithm. Ann. Statist.

[12] Zangwill, W.I. (1969). Nonlinear programming: a unified approach. Prentice Hall, Englewood Cliffs, New Jersey.

[13] Zhan, Y. and Wellner, J.A. (1995). Double censoring: characterization and computation of the nonparametric maximum likelihood estimator. To appear as Technical Report, Department of Statistics, University of Washington.


More information

Rough Sets, Rough Relations and Rough Functions. Zdzislaw Pawlak. Warsaw University of Technology. ul. Nowowiejska 15/19, Warsaw, Poland.

Rough Sets, Rough Relations and Rough Functions. Zdzislaw Pawlak. Warsaw University of Technology. ul. Nowowiejska 15/19, Warsaw, Poland. Rough Sets, Rough Relations and Rough Functions Zdzislaw Pawlak Institute of Computer Science Warsaw University of Technology ul. Nowowiejska 15/19, 00 665 Warsaw, Poland and Institute of Theoretical and

More information

Analysis on Graphs. Alexander Grigoryan Lecture Notes. University of Bielefeld, WS 2011/12

Analysis on Graphs. Alexander Grigoryan Lecture Notes. University of Bielefeld, WS 2011/12 Analysis on Graphs Alexander Grigoryan Lecture Notes University of Bielefeld, WS 0/ Contents The Laplace operator on graphs 5. The notion of a graph............................. 5. Cayley graphs..................................

More information

over the parameters θ. In both cases, consequently, we select the minimizing

over the parameters θ. In both cases, consequently, we select the minimizing MONOTONE REGRESSION JAN DE LEEUW Abstract. This is an entry for The Encyclopedia of Statistics in Behavioral Science, to be published by Wiley in 200. In linear regression we fit a linear function y =

More information

In: Proc. BENELEARN-98, 8th Belgian-Dutch Conference on Machine Learning, pp 9-46, 998 Linear Quadratic Regulation using Reinforcement Learning Stephan ten Hagen? and Ben Krose Department of Mathematics,

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and. Convergence of Indirect Adaptive. Andrew G.

In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and. Convergence of Indirect Adaptive. Andrew G. In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and J. Alspector, (Eds.). Morgan Kaufmann Publishers, San Fancisco, CA. 1994. Convergence of Indirect Adaptive Asynchronous

More information

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS FUNCTIONS Michael Baron Received: Abstract We introduce a wide class of asymmetric loss functions and show how to obtain asymmetric-type optimal decision

More information

Adaptive linear quadratic control using policy. iteration. Steven J. Bradtke. University of Massachusetts.

Adaptive linear quadratic control using policy. iteration. Steven J. Bradtke. University of Massachusetts. Adaptive linear quadratic control using policy iteration Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke@cs.umass.edu B. Erik Ydstie Department of Chemical

More information

REGLERTEKNIK AUTOMATIC CONTROL LINKÖPING

REGLERTEKNIK AUTOMATIC CONTROL LINKÖPING Expectation Maximization Segmentation Niclas Bergman Department of Electrical Engineering Linkoping University, S-581 83 Linkoping, Sweden WWW: http://www.control.isy.liu.se Email: niclas@isy.liu.se October

More information

Quantum logics with given centres and variable state spaces Mirko Navara 1, Pavel Ptak 2 Abstract We ask which logics with a given centre allow for en

Quantum logics with given centres and variable state spaces Mirko Navara 1, Pavel Ptak 2 Abstract We ask which logics with a given centre allow for en Quantum logics with given centres and variable state spaces Mirko Navara 1, Pavel Ptak 2 Abstract We ask which logics with a given centre allow for enlargements with an arbitrary state space. We show in

More information

Discrete (and Continuous) Optimization WI4 131

Discrete (and Continuous) Optimization WI4 131 Discrete (and Continuous) Optimization WI4 131 Kees Roos Technische Universiteit Delft Faculteit Electrotechniek, Wiskunde en Informatica Afdeling Informatie, Systemen en Algoritmiek e-mail: C.Roos@ewi.tudelft.nl

More information

Likelihood Ratio Tests and Intersection-Union Tests. Roger L. Berger. Department of Statistics, North Carolina State University

Likelihood Ratio Tests and Intersection-Union Tests. Roger L. Berger. Department of Statistics, North Carolina State University Likelihood Ratio Tests and Intersection-Union Tests by Roger L. Berger Department of Statistics, North Carolina State University Raleigh, NC 27695-8203 Institute of Statistics Mimeo Series Number 2288

More information

2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable

2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable Deciding positivity of real polynomials Eberhard Becker, Victoria Powers, and Thorsten Wormann Abstract. We describe an algorithm for deciding whether or not a real polynomial is positive semidenite. The

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

z = f (x; y) f (x ; y ) f (x; y) f (x; y )

z = f (x; y) f (x ; y ) f (x; y) f (x; y ) BEEM0 Optimization Techiniques for Economists Lecture Week 4 Dieter Balkenborg Departments of Economics University of Exeter Since the fabric of the universe is most perfect, and is the work of a most

More information

Chernoff s distribution is log-concave. But why? (And why does it matter?)

Chernoff s distribution is log-concave. But why? (And why does it matter?) Chernoff s distribution is log-concave But why? (And why does it matter?) Jon A. Wellner University of Washington, Seattle University of Michigan April 1, 2011 Woodroofe Seminar Based on joint work with:

More information

3.1 Basic properties of real numbers - continuation Inmum and supremum of a set of real numbers

3.1 Basic properties of real numbers - continuation Inmum and supremum of a set of real numbers Chapter 3 Real numbers The notion of real number was introduced in section 1.3 where the axiomatic denition of the set of all real numbers was done and some basic properties of the set of all real numbers

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7, D Berlin

Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7, D Berlin Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7, D-14195 Berlin Georg Ch. Pug Andrzej Ruszczynski Rudiger Schultz On the Glivenko-Cantelli Problem in Stochastic Programming: Mixed-Integer

More information

On Coarse Geometry and Coarse Embeddability

On Coarse Geometry and Coarse Embeddability On Coarse Geometry and Coarse Embeddability Ilmari Kangasniemi August 10, 2016 Master's Thesis University of Helsinki Faculty of Science Department of Mathematics and Statistics Supervised by Erik Elfving

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

Proposition 5. Group composition in G 1 (N) induces the structure of an abelian group on K 1 (X):

Proposition 5. Group composition in G 1 (N) induces the structure of an abelian group on K 1 (X): 2 RICHARD MELROSE 3. Lecture 3: K-groups and loop groups Wednesday, 3 September, 2008 Reconstructed, since I did not really have notes { because I was concentrating too hard on the 3 lectures on blow-up

More information

Semi-strongly asymptotically non-expansive mappings and their applications on xed point theory

Semi-strongly asymptotically non-expansive mappings and their applications on xed point theory Hacettepe Journal of Mathematics and Statistics Volume 46 (4) (2017), 613 620 Semi-strongly asymptotically non-expansive mappings and their applications on xed point theory Chris Lennard and Veysel Nezir

More information

Auerbach bases and minimal volume sufficient enlargements

Auerbach bases and minimal volume sufficient enlargements Auerbach bases and minimal volume sufficient enlargements M. I. Ostrovskii January, 2009 Abstract. Let B Y denote the unit ball of a normed linear space Y. A symmetric, bounded, closed, convex set A in

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Dynamical Systems. August 13, 2013

Dynamical Systems. August 13, 2013 Dynamical Systems Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 Dynamical Systems are systems, described by one or more equations, that evolve over time.

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Linear Discrimination Functions

Linear Discrimination Functions Laurea Magistrale in Informatica Nicola Fanizzi Dipartimento di Informatica Università degli Studi di Bari November 4, 2009 Outline Linear models Gradient descent Perceptron Minimum square error approach

More information

THE NEWTON BRACKETING METHOD FOR THE MINIMIZATION OF CONVEX FUNCTIONS SUBJECT TO AFFINE CONSTRAINTS

THE NEWTON BRACKETING METHOD FOR THE MINIMIZATION OF CONVEX FUNCTIONS SUBJECT TO AFFINE CONSTRAINTS THE NEWTON BRACKETING METHOD FOR THE MINIMIZATION OF CONVEX FUNCTIONS SUBJECT TO AFFINE CONSTRAINTS ADI BEN-ISRAEL AND YURI LEVIN Abstract. The Newton Bracketing method [9] for the minimization of convex

More information

Pointwise convergence rate for nonlinear conservation. Eitan Tadmor and Tao Tang

Pointwise convergence rate for nonlinear conservation. Eitan Tadmor and Tao Tang Pointwise convergence rate for nonlinear conservation laws Eitan Tadmor and Tao Tang Abstract. We introduce a new method to obtain pointwise error estimates for vanishing viscosity and nite dierence approximations

More information

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996 Working Paper Linear Convergence of Epsilon-Subgradient Descent Methods for a Class of Convex Functions Stephen M. Robinson WP-96-041 April 1996 IIASA International Institute for Applied Systems Analysis

More information

Optimization: Interior-Point Methods and. January,1995 USA. and Cooperative Research Centre for Robust and Adaptive Systems.

Optimization: Interior-Point Methods and. January,1995 USA. and Cooperative Research Centre for Robust and Adaptive Systems. Innite Dimensional Quadratic Optimization: Interior-Point Methods and Control Applications January,995 Leonid Faybusovich John B. Moore y Department of Mathematics University of Notre Dame Mail Distribution

More information

Introduction to Convex Analysis Microeconomics II - Tutoring Class

Introduction to Convex Analysis Microeconomics II - Tutoring Class Introduction to Convex Analysis Microeconomics II - Tutoring Class Professor: V. Filipe Martins-da-Rocha TA: Cinthia Konichi April 2010 1 Basic Concepts and Results This is a first glance on basic convex

More information

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Numerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University

Numerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University Numerical Comparisons of Path-Following Strategies for a Basic Interior-Point Method for Nonlinear Programming M. A rg a e z, R.A. T a p ia, a n d L. V e l a z q u e z CRPC-TR97777-S Revised August 1998

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

On the projection onto a finitely generated cone

On the projection onto a finitely generated cone Acta Cybernetica 00 (0000) 1 15. On the projection onto a finitely generated cone Miklós Ujvári Abstract In the paper we study the properties of the projection onto a finitely generated cone. We show for

More information

Zangwill s Global Convergence Theorem

Zangwill s Global Convergence Theorem Zangwill s Global Convergence Theorem A theory of global convergence has been given by Zangwill 1. This theory involves the notion of a set-valued mapping, or point-to-set mapping. Definition 1.1 Given

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract

A Finite Element Method for an Ill-Posed Problem. Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D Halle, Abstract A Finite Element Method for an Ill-Posed Problem W. Lucht Martin-Luther-Universitat, Fachbereich Mathematik/Informatik,Postfach 8, D-699 Halle, Germany Abstract For an ill-posed problem which has its origin

More information

Determinant maximization with linear. S. Boyd, L. Vandenberghe, S.-P. Wu. Information Systems Laboratory. Stanford University

Determinant maximization with linear. S. Boyd, L. Vandenberghe, S.-P. Wu. Information Systems Laboratory. Stanford University Determinant maximization with linear matrix inequality constraints S. Boyd, L. Vandenberghe, S.-P. Wu Information Systems Laboratory Stanford University SCCM Seminar 5 February 1996 1 MAXDET problem denition

More information

and the nite horizon cost index with the nite terminal weighting matrix F > : N?1 X J(z r ; u; w) = [z(n)? z r (N)] T F [z(n)? z r (N)] + t= [kz? z r

and the nite horizon cost index with the nite terminal weighting matrix F > : N?1 X J(z r ; u; w) = [z(n)? z r (N)] T F [z(n)? z r (N)] + t= [kz? z r Intervalwise Receding Horizon H 1 -Tracking Control for Discrete Linear Periodic Systems Ki Baek Kim, Jae-Won Lee, Young Il. Lee, and Wook Hyun Kwon School of Electrical Engineering Seoul National University,

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

Matematicas Aplicadas. c1998 Universidad de Chile A CONVERGENT TRANSFER SCHEME TO THE. Av. Ejercito de Los Andes 950, 5700 San Luis, Argentina.

Matematicas Aplicadas. c1998 Universidad de Chile A CONVERGENT TRANSFER SCHEME TO THE. Av. Ejercito de Los Andes 950, 5700 San Luis, Argentina. Rev. Mat. Apl. 19:23-35 Revista de Matematicas Aplicadas c1998 Universidad de Chile Departamento de Ingeniera Matematica A CONVERGENT TRANSFER SCHEME TO THE CORE OF A TU-GAME J.C. CESCO Instituto de Matematica

More information

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains

Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 3, 3 Systems

More information

Projected Gradient Methods for NCP 57. Complementarity Problems via Normal Maps

Projected Gradient Methods for NCP 57. Complementarity Problems via Normal Maps Projected Gradient Methods for NCP 57 Recent Advances in Nonsmooth Optimization, pp. 57-86 Eds..-Z. u, L. Qi and R.S. Womersley c1995 World Scientic Publishers Projected Gradient Methods for Nonlinear

More information

f = 2 x* x g 2 = 20 g 2 = 4

f = 2 x* x g 2 = 20 g 2 = 4 On the Behavior of the Gradient Norm in the Steepest Descent Method Jorge Nocedal y Annick Sartenaer z Ciyou Zhu x May 30, 000 Abstract It is well known that the norm of the gradient may be unreliable

More information

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution Outline Roadmap for the NPP segment: 1 Preliminaries: role of convexity 2 Existence of a solution 3 Necessary conditions for a solution: inequality constraints 4 The constraint qualification 5 The Lagrangian

More information

Introduction to Real Analysis

Introduction to Real Analysis Introduction to Real Analysis Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 1 Sets Sets are the basic objects of mathematics. In fact, they are so basic that

More information

Werner Romisch. Humboldt University Berlin. Abstract. Perturbations of convex chance constrained stochastic programs are considered the underlying

Werner Romisch. Humboldt University Berlin. Abstract. Perturbations of convex chance constrained stochastic programs are considered the underlying Stability of solutions to chance constrained stochastic programs Rene Henrion Weierstrass Institute for Applied Analysis and Stochastics D-7 Berlin, Germany and Werner Romisch Humboldt University Berlin

More information

Lecture 1. Toric Varieties: Basics

Lecture 1. Toric Varieties: Basics Lecture 1. Toric Varieties: Basics Taras Panov Lomonosov Moscow State University Summer School Current Developments in Geometry Novosibirsk, 27 August1 September 2018 Taras Panov (Moscow University) Lecture

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

University of California. Berkeley, CA fzhangjun johans lygeros Abstract

University of California. Berkeley, CA fzhangjun johans lygeros Abstract Dynamical Systems Revisited: Hybrid Systems with Zeno Executions Jun Zhang, Karl Henrik Johansson y, John Lygeros, and Shankar Sastry Department of Electrical Engineering and Computer Sciences University

More information

ON THE DIAMETER OF THE ATTRACTOR OF AN IFS Serge Dubuc Raouf Hamzaoui Abstract We investigate methods for the evaluation of the diameter of the attrac

ON THE DIAMETER OF THE ATTRACTOR OF AN IFS Serge Dubuc Raouf Hamzaoui Abstract We investigate methods for the evaluation of the diameter of the attrac ON THE DIAMETER OF THE ATTRACTOR OF AN IFS Serge Dubuc Raouf Hamzaoui Abstract We investigate methods for the evaluation of the diameter of the attractor of an IFS. We propose an upper bound for the diameter

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

INRIA Rocquencourt, Le Chesnay Cedex (France) y Dept. of Mathematics, North Carolina State University, Raleigh NC USA

INRIA Rocquencourt, Le Chesnay Cedex (France) y Dept. of Mathematics, North Carolina State University, Raleigh NC USA Nonlinear Observer Design using Implicit System Descriptions D. von Wissel, R. Nikoukhah, S. L. Campbell y and F. Delebecque INRIA Rocquencourt, 78 Le Chesnay Cedex (France) y Dept. of Mathematics, North

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

The Uniformity Principle: A New Tool for. Probabilistic Robustness Analysis. B. R. Barmish and C. M. Lagoa. further discussion.

The Uniformity Principle: A New Tool for. Probabilistic Robustness Analysis. B. R. Barmish and C. M. Lagoa. further discussion. The Uniformity Principle A New Tool for Probabilistic Robustness Analysis B. R. Barmish and C. M. Lagoa Department of Electrical and Computer Engineering University of Wisconsin-Madison, Madison, WI 53706

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Lecture 5 : Projections

Lecture 5 : Projections Lecture 5 : Projections EE227C. Lecturer: Professor Martin Wainwright. Scribe: Alvin Wan Up until now, we have seen convergence rates of unconstrained gradient descent. Now, we consider a constrained minimization

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints

Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Transaction E: Industrial Engineering Vol. 16, No. 1, pp. 1{10 c Sharif University of Technology, June 2009 Special Classes of Fuzzy Integer Programming Models with All-Dierent Constraints Abstract. K.

More information

The extreme points of symmetric norms on R^2

The extreme points of symmetric norms on R^2 Graduate Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 2008 The extreme points of symmetric norms on R^2 Anchalee Khemphet Iowa State University Follow this and additional

More information

Optimal maintenance decisions over bounded and unbounded horizons

Optimal maintenance decisions over bounded and unbounded horizons Optimal maintenance decisions over bounded and unbounded horizons Report 95-02 Jan M. van Noortwijk Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica

More information

Error Empirical error. Generalization error. Time (number of iteration)

Error Empirical error. Generalization error. Time (number of iteration) Submitted to Neural Networks. Dynamics of Batch Learning in Multilayer Networks { Overrealizability and Overtraining { Kenji Fukumizu The Institute of Physical and Chemical Research (RIKEN) E-mail: fuku@brain.riken.go.jp

More information