Nonlinear Transformations for Simplifying Optimization Problems
Journal of Global Optimization manuscript No. (will be inserted by the editor)

Elvira Antal, Tibor Csendes

Received: date / Accepted: date

Abstract A new implementation of a symbolic simplification algorithm is presented, together with extended theoretical results that support the underlying method for constrained nonlinear optimization problems. The program is capable of automatically recognizing simplification transformations and of detecting implicit redundancy in the objective function or in the constraint functions. We also discuss possibilities for improving the lower and upper bounds obtained by interval arithmetic based inclusion functions. We report computational test results obtained for standard global optimization test problems and for several other test instances, including artificially constructed, difficult-to-simplify cases, and we compare the new implementation with a Maple based one reported earlier.

Keywords nonlinear optimization, symbolic computation, interval inclusion, reformulation, rewriting, Mathematica

1 Introduction

The application of symbolic techniques for preprocessing or solving optimization problems is a promising, emerging field of mathematical programming. Symbolic preprocessing of LP problems [9] is the most classic example; transformations of this kind were implemented in the AMPL processor several years ago as an automatic presolving mechanism [6]. A more recent topic is the assistance of IP/MINLP solvers, which is the aim of the Reformulation-Optimization Software Engine of Liberti et al. [8]. As integer programming is a very problematic field, relaxing some constraints or increasing the dimension of the problem can be reasonable to achieve solvability.

E. Antal and T. Csendes
University of Szeged, Institute of Informatics
H-6720 Szeged, Árpád tér 2, Hungary
E-mail: csendes@inf.u-szeged.hu
However, as the work of Csendes and Rapcsák [5,11] shows, it is also possible to produce equivalent forms of an unconstrained nonlinear optimization problem automatically by symbolic transformations, while constructing a bijective transformation between the optima of the two problem forms. This method is also capable of eliminating redundant variables or simplifying the problem in other ways.

An interesting motivation was a parameter estimation problem [3]. The underlying Otis model of respiratory mechanics had been widely studied earlier; the model utilizes the properties of a particular electric circuit model. It was proved that the model function has a redundant parameter. The objective function has the least-squares form, but we now concentrate on the model function

    Z_m(s) = (D s^3 + C s^2 + B s + 1) / (G s^2 + F s),        (1)

where s = jω is a constant depending only on the measuring place. The following transformations provide the respective simplification steps as well. Notice that in terms of the original optimization variables R_C, I_C, C_1, C_2, R_1, and R_2, no suitable transformations exist of the kind Theorems 1 and 2 would require:

    B = R_C (C_1 + C_2) + R_1 C_1 + R_2 C_2,                   (2)
    C = I_C (C_1 + C_2) + [R_C (R_1 + R_2) + R_1 R_2] C_1 C_2, (3)
    D = I_C (R_1 + R_2) C_1 C_2,                               (4)
    F = C_1 + C_2,                                             (5)
    G = (R_1 + R_2) C_1 C_2.                                   (6)

The new form based on the new variables B to G is obviously simpler than the original one, which is formulated with one more variable. This finding surprised the experts of the related application field, pulmonology.

In this paper, the theory of rewriting nonlinear optimization problems is extended in some directions. We discuss the necessary properties of coordinate transformations for the simplification of unconstrained and constrained nonlinear optimization problems, and we present the novel idea of improving interval inclusion functions by automatic rewriting.
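As an aside, the reparametrization (2)-(6) is easy to check numerically. The following Python sketch (not from the paper; the parameter values are made up for illustration) maps the six original parameters to the five new ones and evaluates the model function (1):

```python
# Sketch: evaluating the Otis model before and after the reparametrization
# (2)-(6).  The numeric parameter values below are made-up illustrations.

def new_params(R_C, I_C, C1, C2, R1, R2):
    """Map the six original parameters to the five new ones, eqs. (2)-(6)."""
    B = R_C * (C1 + C2) + R1 * C1 + R2 * C2
    C = I_C * (C1 + C2) + (R_C * (R1 + R2) + R1 * R2) * C1 * C2
    D = I_C * (R1 + R2) * C1 * C2
    F = C1 + C2
    G = (R1 + R2) * C1 * C2
    return B, C, D, F, G

def Z_m(s, B, C, D, F, G):
    """Model function (1), evaluated at the complex frequency s = j*omega."""
    return (D * s**3 + C * s**2 + B * s + 1) / (G * s**2 + F * s)

omega = 2.0
s = 1j * omega
B, C, D, F, G = new_params(0.5, 0.1, 1.0, 2.0, 0.3, 0.4)
z = Z_m(s, B, C, D, F, G)
```

Any two original parameter vectors that map to the same (B, ..., G) give identical model values, which is exactly the redundancy the symbolic method detects.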
Our earlier Maple implementation [2] is compared to the new Mathematica version.

2 Simplification of unconstrained nonlinear optimization problems

As one of the generalization directions, we extend the theoretical statements of the papers of Csendes and Rapcsák [5,11] to include constraints in the nonlinear optimization problems to be automatically simplified:

    min f(x),  x ∈ R^n,
    s.t. c_i(x) = 0,   i = 1, ..., p_1,          (7)
         c_j(x) ≤ 0,   j = p_1 + 1, ..., p_2,

where f(x) : R^n → R and c_i(x), c_j(x) : R^n → R are smooth real functions, and i = 1, ..., p_1 and j = p_1 + 1, ..., p_2 are integers. For our purposes, these functions must be given by a formula, i.e. a symbolic expression. Expression denotes
here a well-formed, finite combination of symbols (constants, variables, operators, functions, and brackets), usually implemented in computer algebra systems as a directed acyclic graph. Any real constants or function names may be present. In many cases our technique is capable of simplifying more complex functions than just rational expressions.

We shall transform problem (7) into another one that is better in the following senses: it requires fewer arithmetic operations to evaluate, its dimension is smaller, or it is simpler to solve. We are also interested in finding relationships among variables when the functions in the constrained nonlinear optimization problem are redundant. In other words, we plan to recognize redundant model parameters.

Denote by X the set of variable symbols in f(x) and in the constraint functions c_i(x), c_j(x), i = 1, ..., p_1 and j = p_1 + 1, ..., p_2. Let Y be the set of variable symbols in the transformed functions g(y), d_i(y), d_j(y), respectively. The transformed constrained optimization problem is then

    min g(y),  y ∈ R^m,
    s.t. d_i(y) = 0,   i = 1, ..., p_1,          (8)
         d_j(y) ≤ 0,   j = p_1 + 1, ..., p_2,

where g(y) : R^m → R is the transformed form of f(x), and d_i(y), d_j(y) : R^m → R are again smooth real functions, the transformed forms of the constraints c_i(x), c_j(x), i = 1, ..., p_1 and j = p_1 + 1, ..., p_2. Denote the set of the expressions of the original constraint functions by C = {c_i : i = 1, ..., p_2}, and the set of the expressions of the transformed constraints by D = {d_i : i = 1, ..., p_2}. Denote by F the set of all subexpressions (all well-formed expressions that are part of the original expression) of f(x) and of the c_i(x) ∈ C.
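The first of these criteria, a smaller arithmetic operation count, can be measured mechanically. A sketch in Python with SymPy (our choice for illustration; the paper's implementations are in Maple and Mathematica), using the Rosenbrock function discussed later:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# Original Rosenbrock objective and its simplified form after the
# substitutions y1 = x1**2 - x2 and y2 = 1 - x1 (see Section 3.1).
f = 100 * (x1**2 - x2)**2 + (1 - x1)**2
g = 100 * y1**2 + y2**2

# count_ops gives a crude cost measure: the transformed form needs
# fewer arithmetic operations per evaluation.
print(sp.count_ops(f), sp.count_ops(g))
```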
We call a variable v ∈ V covered by the set of subexpressions H in a function if, for every appearance of v in that function, there exists a subexpression h ∈ H such that a suitable surrounding computational subtree of this particular occurrence of v calculates h.

The crucial detail of our algorithm is the transformation step. If every occurrence of the variables V ⊆ X is covered by the expression set H ⊆ F, and |H| ≤ |V|, then substitute a new variable y_i in place of h_i for every h_i ∈ H, in f and also in every c_i ∈ C. We call this manipulation the substitutions related to H. Furthermore, let us update the variable set of the transformed problem: Y := (Y ∪ {y_i}) \ V. This will be referred to as a transformation step (corresponding to H). The special case |H| = |V| = 1, p_1 = p_2 = 0 corresponds to the algorithm given by Csendes and Rapcsák [5] for the unconstrained case. The related theorems given there were the following two.
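The coverage test can be sketched in a few lines of SymPy (a Python illustration, not the paper's Mathematica code; the helper name `covers` is ours): substitute a fresh symbol for each h ∈ H and check that v disappears from the expression.

```python
import sympy as sp

def covers(expr, subexprs, v):
    """Return True if every occurrence of the variable v in expr lies
    inside an occurrence of some h in subexprs, i.e. substituting new
    variables for those subexpressions removes v from the expression."""
    for h in subexprs:
        expr = expr.subs(h, sp.Dummy())
    return not expr.has(v)

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = 100 * (x1**2 - x2)**2 + (1 - x1)**2

print(covers(f, [x1**2 - x2], x2))   # x2 occurs only inside x1**2 - x2
print(covers(f, [x1**2 - x2], x1))   # x1 also occurs in (1 - x1)**2
```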
Theorem 1 If h(x) is smooth and strictly monotonic in x_i, then the corresponding transformation simplifies the function in the sense that each occurrence of h(x) in the expression of f(x) is replaced by a variable in the transformed function g(y), while every local minimizer (or maximizer) point of f(x) is transformed to a local minimizer (maximizer) point of the function g(y).

Theorem 2 If h(x) is smooth, strictly monotonic as a function of x_i, and its range is equal to R, then for every local minimizer (or maximizer) point y of the transformed function g(y) there exists an x such that y is the transform of x, and x is a local minimizer (maximizer) point of f(x).

The definitions of local reformulation and symmetric local reformulation, given by Leo Liberti in [7] and motivated by MINLP problems, are analogous to the transformations of the theorems above. The present paper generalizes the theoretical results for unconstrained global optimization [5,11] to cover constrained global optimization problems, and to allow several substitutions in parallel.

Consider now the constrained global optimization problem (7). The key to our computational procedure is that suitable subexpressions of the original functions can be recognized automatically for the simplification transformation. The following assertion is a straightforward generalization of Assertion 1 in [5].

Assertion 1 If a variable x_i appears only in one term everywhere in the expressions of the smooth functions f(x) and c_i(x), i = 1, ..., p_2, namely in h(x), then the partial derivatives of these functions with respect to x_i can all be written in the form (∂h(x)/∂x_i) p(x), where p(x) is continuously differentiable.

Consider the function x_1 + sin(x_2) + x_3: it allows the possible substitutions y_1 = x_1 + sin(x_2) and y_2 = sin(x_2) + x_3, but only one of these can be applied. We call this phenomenon overlapping.
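Overlapping can be observed mechanically: once one candidate is substituted, the other no longer occurs in the expression. A short SymPy sketch of the example above (a Python illustration, not from the paper):

```python
import sympy as sp

x1, x2, x3, y1, y2 = sp.symbols('x1 x2 x3 y1 y2')
f = x1 + sp.sin(x2) + x3

# The two candidate substitutions share the subexpression sin(x2):
g1 = f.subs(x1 + sp.sin(x2), y1)   # -> y1 + x3
g2 = f.subs(sp.sin(x2) + x3, y2)   # -> x1 + y2

# Applying the first substitution destroys the second candidate, so the
# second subs below finds nothing to replace -- the two candidates overlap.
print(g1.subs(sp.sin(x2) + x3, y2))
```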
Obviously, not all possible combinations of subexpressions can be applied for simplification substitutions. We call a set H of expressions proper if the expressions h_i ∈ H can be substituted by new variables at the same time, i.e. these expressions do not overlap in the respective functions f and c_i(x), i = 1, ..., p_2. Without this property, not all the substitutions related to the expression set H could be executed. The following theoretical results provide sufficient preconditions for a successful simplification step.

Theorem 3 If H is proper, all the expressions h_i ∈ H are smooth and strictly monotonic as a function of a variable v_j ∈ V, and for every h_i the domain Dom(h_i) = R, then the transformation steps y_i = h_i(x) for all h_i ∈ H simplify the original problem in such a way that every local minimizer (maximizer) point x of the original problem is transformed to a local minimizer (maximizer) point y of the transformed problem.

Proof Consider first the feasible sets of the two problems. The substitution equations obviously ensure that if a point x was feasible for problem (7), then it remains feasible after the transformations for the new, simplified problem (8) as well. The same is true for infeasible points.
Denote now a local minimizer point of f by x*, and let y* be the transformed form of x*: y* = H(x*) (and in the same way, y_i = H(x_i)). As each h_i ∈ H is strictly monotonic in at least one variable, all points x_i from the neighborhood a = N(x*, δ) of x* are transformed into a neighborhood b = N(y*, ε) of y*, and x_j ∉ a implies y_j ∉ b. The objective functions f and g, and the old and transformed constraint functions, have the same values before and after the transformation. This ensures that each local minimizer point x* is transformed into a local minimizer point y* of g(y). The same is true for local maximizer points, by similar reasoning. Additionally, |H| ≤ |V|, so the construction of the transformation step ensures that at least as many x_i variables are deleted from the optimization model when applying H as the number of new variables entering it.

In contrast to Theorem 3, which gives sufficient conditions for a simplification transformation that brings local minimizer points of the old problem to local minimizer points of the new one, the following theorem provides sufficient conditions for a one-to-one mapping of the minimizer points.

Theorem 4 If H is proper, all the expressions h_i ∈ H are smooth and strictly monotonic as a function of a variable v_j ∈ V, and for every h_i both the domain and the range Dom(h_i) = R(h_i) = R, then the transformation steps y_i = h_i(x) for all h_i ∈ H simplify the original problem in such a way that every local minimizer (maximizer) point y of the transformed problem is transformed back to a local minimizer (maximizer) point x of the original problem.

Proof Since the substitution equations preserve the values of the constraint functions, each point y of the feasible set of the transformed problem (8) must be mapped from a feasible point x of the original problem (7): y = H(x). The same holds for infeasible points.
Denote a local minimizer point of (8) by y*. Since the range of the transformation function H is the whole set of real numbers, every local minimizer point y* of the transformed problem (8) is necessarily the result of a transformation; let x* be the back-transformed point: y* = H(x*). Consider a neighborhood N(y*, δ) of y* in which every feasible point y has a greater or equal function value, g(y) ≥ g(y*), and consider those feasible points x of (7) for which H(x) = y ∈ N(y*, δ). The latter set may be infinite if the simplification transformation decreased the dimensionality of the optimization problem. In any case, there is a suitable neighborhood N(x*, δ') of x* inside this set such that, obviously, the relation f(x) ≥ f(x*) holds for every x ∈ N(x*, δ') that satisfies the constraints of (7). The argument for local maximizers is similar.

Remark that in the case of Theorems 3 and 4, the dimensionality of the original constrained global optimization problem can be decreased, or at least it is not increased. In some cases, the requirements of strict monotonicity and full range of the substituted subexpressions (range equal to R) can be relaxed to the invertibility of these expressions. The question is whether this can be ensured in an algorithmic way. The strict monotonicity and the full range can certainly be checked automatically within a symbolic algebra system such as Maple or Mathematica. Corollary 1 is an immediate consequence of Theorem 3.
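As an illustration of such an automatic check (here in Python with SymPy rather than Maple or Mathematica; the helper name is ours), one can test the sign of the derivative symbolically and, for a continuous monotonic function, verify the full range via the limits at ±∞:

```python
import sympy as sp

def strictly_monotone_full_range(h, v):
    """Sufficient symbolic check that h is strictly monotonic in the
    real variable v and, being continuous, has range equal to R."""
    dh = sp.diff(h, v)
    # Sign of the derivative decided by SymPy's assumption system.
    if not (bool(dh.is_positive) or bool(dh.is_negative)):
        return False
    # A continuous strictly monotone function covers R exactly when
    # its limits at -oo and +oo are -oo and +oo (in some order).
    limits = {sp.limit(h, v, -sp.oo), sp.limit(h, v, sp.oo)}
    return limits == {-sp.oo, sp.oo}

x = sp.symbols('x', real=True)
print(strictly_monotone_full_range(x**3 + x, x))   # True
print(strictly_monotone_full_range(sp.exp(x), x))  # False: range is (0, oo)
print(strictly_monotone_full_range(sp.sin(x), x))  # False: not monotonic
```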
Algorithm 2.1 Simplification of constrained global optimization problems
1. Compute the gradients of the objective and constraint functions.
2. Factorize their partial derivatives.
3. Determine the substitutable subexpressions fulfilling the conditions of Theorem 4 and substitute them:
4. If the factorization was successful, then explore the list of subexpressions for substitutable ones, which can be obtained by integration of the factors.
5. If the factorization was not possible, then explore the list of subexpressions for substitutable ones that are linear in the related variables.
6. Solve the simplified problem if possible, and give the solution of the original problem by inverse transformation.

Corollary 1 If H is proper and every expression h_i ∈ H is smooth and invertible as a function of a variable v_j ∈ V, then the transformation steps y_i = h_i(x) for all h_i ∈ H simplify the original problem in such a way that every local optimum point x of the original problem is transformed to a local optimum point y of the transformed problem.

The computational procedure based on the above theoretical results is described by Algorithm 2.1. It was implemented and tested, and the results are discussed in the next section, with the exception of the parallel substitution of multiple expressions, the actual solution of the simplified problem, and its back transformation (steps 3 and 6).

As a natural extension of the present investigations, symbolic reformulations are promising for speeding up interval methods of global optimization. The amount of overestimation of interval arithmetic [1] based inclusion functions was investigated for optimization models in [13]. Symbolic transformations seem to be appropriate for a proper reformulation.
Obvious improvement possibilities are the use of the square function instead of the usual multiplication (where suitable), transformation along the subdistributivity law, and finding SUE (Single Use Expression) forms. In fact, these kinds of transformations are usually performed by the default expression simplification mechanism [12] of an ordinary computer algebra system. The domain of calculation plays an important role in this presolve approach, since important features of functions, such as monotonicity, can change substantially within the domain where a function is defined.

3 Implementation in Mathematica

Compared to our first try with the implementation in the computer algebra system Maple [2], the new platform, Mathematica, has several advantages. First of all, the substitution procedures are much better, as the Mathematica programming language is based on term rewriting. It also has a better interval arithmetic implementation: this was crucial for quick and reliable range calculation on the expressions to be substituted.

Our new program supports the enumeration of possible substitutions, and it still keeps up with the simpler Maple version in time and computational efficiency, due to the application of adequate programming techniques and due to some nice
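The effect of such rewritings on interval bounds can be illustrated with a toy interval arithmetic (a self-contained Python sketch, not the paper's implementation): evaluating x*x by naive interval multiplication ignores the dependency between the two factors, while a dedicated square operation gives the exact range.

```python
# Toy interval arithmetic illustrating why squaring (a Single Use
# Expression form) tightens bounds compared to the naive product x*x.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # Naive interval multiplication: treats the operands as independent.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def sqr(self):
        # x**2 as a single operation: the result is never negative.
        a, b = abs(self.lo), abs(self.hi)
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(a, b)**2
        return Interval(lo, max(a, b)**2)

x = Interval(-1.0, 1.0)
naive = x * x       # [-1, 1]: overestimates, ignores the dependency
tight = x.sqr()     # [ 0, 1]: the exact range of x**2 on [-1, 1]
```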
properties of the Mathematica system, such as automatic parallelization of list operations. Let us mention that Mathematica tends to include more and more expert tools for writing efficient program code. For example, Mathematica provides the possibility to utilize the parallel computing abilities of the graphics processing unit (GPU) using CUDA or OpenCL technology.

The present implementation was tested only on unconstrained nonlinear optimization problems, so we can compare the results of the earlier Maple program and the new Mathematica program. The pseudo-code of the new implementation is given in Algorithm 3.1. We follow the notation and conventions of the earlier publication [2] as far as possible. However, we should mention that Mathematica supports the declarative paradigm much more than imperative programming, so we actually did not need to use many for loops or similar constructions.

Algorithm 3.1 symbsimp(x, f [, y])
// f: function formula, x: variable list [, y: symbol for introducing new variables]
glist ← {f}
varlist ← {x}
subslist ← {}
for g ∈ glist do
    gindex ← position of g in glist
    x ← varlist(gindex)
    for i = 1 to Dimension(x) do
        dx ← ∂g/∂x_i
        factordx ← Factor(dx)
        h_list ← Null
        for j ∈ factordx do
            p ← ∫ j dx_i
            if CharacterizeAllTest(g, p) then
                repeat
                    h_temp ← Sort({q : q ∈ Decompose(g, p) and IsType(q, "+")}, p).Pop
                    if Test(h_temp, x_i) then
                        h_list.Push(h_temp)
                    end if
                until Test(h_temp, x_i) or h_temp = Null
            end if
        end for
        h_list2 ← Null
        repeat
            h_temp ← Sort({q : q ∈ Decompose(g, x_i) and IsLinear(q, x_i)}, x_i).Pop
            if Test(h_temp, x_i) then
                h_list2.Push(h_temp)
            end if
        until h_temp = Null
        if Length(h_list ∪ h_list2) > 0 then
            for h_i ∈ (h_list ∪ h_list2) do
                g ← Subs(g, h_i, y_i)
                glist ← glist ∪ {g}
                subslist(gindex) ← subslist(gindex) ∪ {h_i}
                varlist(gindex) ← (varlist(gindex) \ {x_i}) ∪ {y_i}
            end for
        end if
    end for
end for
Some technical details: Subs(expr, x, y) replaces x with y in the expression expr. We have written Subs, the specialized substitution routine, in about 50 program lines. A dozen (delayed) rules are introduced, and we have combined four different ways of evaluating them, using the expanded, the simplified, and also the factorized form of the formula. It is probably the most important part of the program, as simple refinements can have a substantial influence on the result quality of the whole simplification process.

CharacterizeAllTest(expr, y) examines whether the expression expr characterizes all occurrences of the variable (or expression) y, that is, whether y could be removed totally from the optimization problem by substituting a new variable for expr. The CharacterizeAllTest function also uses the new Subs function.

Test(expr, y) gives the logical value true if CharacterizeAllTest(expr, y) holds and the range of expr is equal to R. We calculate a naive interval inclusion for the enclosure of the ranges by the standard range arithmetic of Mathematica. In this implementation we have not found erroneous range calculations.

Decompose(expr, y) provides a list of the subexpressions of expr that contain y. The other functions in Algorithm 3.1 work as their names suggest.

3.1 Improvements compared to the Maple version

Illustration of the algorithm run on the Rosenbrock function. The objective function of the unconstrained problem to be minimized is

    f(x) = 100 (x_1^2 - x_2)^2 + (1 - x_1)^2.

We can run the simplifier algorithm with similar procedure calls:

Maple:       symbsimp([x2, x1], 100*(x1^2-x2)^2+(1-x1)^2);
Mathematica: SymbSimp[{x1, x2}, 100 (x1^2-x2)^2+(1-x1)^2];

The program needs at least two input parameters: the list of variables and the formula of the objective function.
In the Maple version we chose the list representation instead of a set for the variables, as the order of the variables can influence the results to some extent because of the weak substitution capabilities of Maple. In the case of Mathematica, the command looks almost the same. The syntax is a bit different: we changed the round brackets to square ones, the square brackets to curly ones, and we left out the asterisk for multiplication. However, we can use pure functions for the formula in Mathematica, so we can change the names of the variables much more easily. Also, a third parameter can define what names we would like to introduce in the transformed function, while in Maple these were fixed to y_1, y_2, ....
In the first step, the algorithm determines the partial derivatives of f(x), then the factorized forms of the partial derivatives are computed:

Maple:
    factor(dx(1)) = -200 x_1^2 + 200 x_2,
    factor(dx(2)) = 400 x_1^3 - 400 x_1 x_2 + 2 x_1 - 2,

Mathematica:
    {{2 (-1 + x_1 + 200 x_1^3 - 200 x_1 x_2)}, {-200 (x_1^2 - x_2)}}.

Remember that in Maple we gave a reverse order for the variables in the first input parameter, so the second line refers to the partial derivative with respect to x_1. One can see that Mathematica is a bit more successful at factorization in this case. We use the list representation for almost everything in the new, Mathematica version, as it ensures quicker execution and automatic parallelization. Accordingly, we can enumerate all possibilities in parallel at every step. In the following, we compare the two possible threads of the algorithm on the same example, as Mathematica could complete some factorization while Maple could not.

Maple algorithm step ("if the factorization was not possible, then explore the subexpressions that are linear in the related variables"):

    {100 (x_1^2 - x_2)^2, (x_1^2 - x_2)^2, x_1^2 - x_2, -x_2, x_2, (1 - x_1)^2, x_1^2, 100, 2, 1}

The expressions fulfilling the range requirement (here x_1^2 - x_2, -x_2, and x_2) are the candidates; choose the first one from the list ordered by decreasing complexity.

Mathematica algorithm step ("if the factorization was successful, then explore the subexpressions that can be obtained by integration of the factors"):

    {{1 - 2 x_1 + 100 x_1^4 + x_1^2 (1 - 200 x_2) + 100 x_2^2, x_1, (-1 + x_1)^2, 1 - x_1}, {-x_2, x_1^2 - x_2}}

Choose those subexpressions that characterize all occurrences of the given variable; among them, those that also fulfil the monotonicity and range requirements lead to transformations that preserve all the local extrema. It is nice that we obtain the same result both ways: the programs choose the same substitution, y_1 = x_1^2 - x_2. The transformed function at this phase is

    g = 100 y_1^2 + (1 - x_1)^2.
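This first transformation step can be reproduced with a few lines of SymPy (a Python illustration; the paper's programs are in Maple and Mathematica):

```python
import sympy as sp

x1, x2, y1 = sp.symbols('x1 x2 y1')
f = 100 * (x1**2 - x2)**2 + (1 - x1)**2

# Partial derivatives and their factorized forms (printed forms may
# differ from Maple/Mathematica up to ordering and sign conventions).
d1 = sp.factor(sp.diff(f, x1))   # 2*(200*x1**3 - 200*x1*x2 + x1 - 1), up to ordering
d2 = sp.factor(sp.diff(f, x2))   # -200*(x1**2 - x2), up to ordering

# The candidate x1**2 - x2 covers every occurrence of x2, is linear
# (hence strictly monotonic) in x2, and its range is R, so it can be
# substituted by the new variable y1.
g = f.subs(x1**2 - x2, y1)       # 100*y1**2 + (1 - x1)**2
print(g)
```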
Now compute again the partial derivatives and their factorizations:

Maple:
    factor(dx(1)) = dx(1) = 200 y_1,
    factor(dx(2)) = dx(2) = -2 + 2 x_1.

Mathematica:
    {{2 (-1 + x_1)}, {200 y_1}}.

Following the same train of thought, we get the second possible substitution: y_2 = 1 - x_1. The final simplified function, produced by our automatic simplifier method both in Maple and in Mathematica, is

    g = 100 y_1^2 + y_2^2.

3.2 Improvements on some new test cases

In the following, we first show some simple examples to demonstrate the advantages of our new Mathematica program compared to the Maple version. These examples were generated by us, and they were the problematic ones of our self-made collection in the test phase of the Maple implementation, so it was natural to use them to test the new tool. See Table 1 for the Maple based results and Table 2 for the second set, the results obtained by the Mathematica version. For the test cases not discussed, the two implementations gave the same output.

We again apply the codes from [2] that describe the quality of the obtained results compared to the solvability of the given problem instance. These codes appear in the Problem type and Result type columns of the tables. For the actual problem:

A. some simplifying transformations are possible according to the presented theory,
B. simplifying transformations are possible with the extension of the presented theory,
C. some useful transformations could be possible with the extension of the presented theory, but they do not necessarily simplify the problem (e.g. since they increase the dimensionality),
D. we do not expect any useful transformation.

The obtained results are described by the second indicator:

1. our program produced proper substitutions,
2. no substitutions,
3. incorrect substitutions.

The last code explains the reason: the improper result is due to a not perfect

a. algebraic substitution,
b. range calculation.
As we discussed earlier [2], Maple provides incorrect interval arithmetic, so we needed to use heuristics in the Maple implementation for range calculation. Also, the algebraic substitution capabilities of Maple are weak. Almost all mistakes of the early implementation can be traced back to these two reasons, so we introduced the codes a and b to characterize the Maple errors in the paper of Antal et al. [2]. We use code 2 also when only a constant multiplier is eliminated.

In short, A1 means that a proper substitution is possible and it has been found by the algorithm, while D2 means that, as far as we know, no substitution is possible, and this was the conclusion of the program as well. The unsuccessful cases are those denoted by other codes. The differences in the obtained results are explained next in detail.

Sin2 test problem. We obtained a proper simplification with the new implementation, while the old one was unable to simplify it.

Exp1-2 test problems. A better range calculation was produced by the Mathematica version of our algorithm.

Sq1-2 test problems. The Maple implementation was not able to recognize the subexpression x_1 x_2 in the expression x_1^2 x_2^2, but it was able to recognize x_1 x_2 + x_3 in its square. The reason is that the latter is not a multiplication type expression but a sum, which is stored together in the computation tree representing the expression. Originally the same happened in the Mathematica implementation; however, we could use regular expressions to produce better substitutions, and this works well for this problem. In other words, we can extend the possibilities of the basic substitution routine in the Mathematica based algorithm.
SqCos1 test problem. The new, Mathematica based method applies a too conservative routine to check whether the substitution expression is monotonic, which is why the still possible substitution of x_1 x_2 is missing from those listed. Still, the problem could be simplified and solved.

SqExp2-3 test problems. In these cases, the problem is again a weakness of the expression matching capability of Maple on the standard representation of an expression.

3.3 Computational test results on global optimization problems

We have tested our new implementation on standard and frequently used global optimization test problems (Table 3). We extended the test set compared to our earlier paper [2]. In the common cases, most of our new results are identical to those obtained with the earlier, Maple based implementation. The two differences are reported here.

Schwefel-227 test problem. The old, Maple version gave the substitution y_1 = x_1^2 + x_2^2 - 2x_1. This expression characterizes all occurrences of x_2, but it is not monotonic in any variable, so the Mathematica version did not suggest it for substitution.
Table 1 Results of some synthetic tests solved by our old, Maple based implementation

ID     | Function f                          | Function g            | Substitutions                       | Problem type | Result type
Sin2   | 2 x_3 sin(2x_1 + x_2)               | 2 y_1                 | y_1 = x_3 sin(2x_1 + x_2)           | A            | 3
Exp1   | e^(x_1+x_2)                         | e^(y_1)               | y_1 = x_1 + x_2                     | A            | 1
Exp2   | 2 e^(x_1+x_2)                       | 2 y_1                 | y_1 = e^(x_1+x_2)                   | A            | 3b
Sq1    | x_1^2 x_2^2                         | none                  | none                                | A            | 2a
Sq2    | (x_1 x_2 + x_3)^2                   | y_1^2                 | y_1 = x_1 x_2 + x_3                 | A            | 1
SqCos1 | (x_1 x_2 + x_3)^2 cos(x_1 x_2)      | y_3^2 cos(y_1)        | y_1 = x_1 x_2, y_3 = y_1 + x_3      | A            | 1
SqExp2 | (x_1+x_2)^2 + 2 e^1 e^(x_1+x_2)     | y_1^2 + 2 e^1 e^(y_1) | y_1 = x_1 + x_2                     | A            | 1
SqExp3 | (x_1+x_2)^2 + 2 e^(1+x_1+x_2)       | none                  | none                                | A            | 2a
CosExp | cos(e^(x_1) + x_2) + cos(x_2)       | cos(y_1) + cos(y_2)   | y_1 = e^(x_1) + x_2, y_2 = x_2      | A            | 1

Table 2 Results of some synthetic tests solved by our new, Mathematica based implementation

ID     | Function f                          | Function g            | Substitutions                       | Problem type | Result type
Sin2   | 2 x_3 sin(2x_1 + x_2)               | 2 x_3 sin(y_1)        | y_1 = 2x_1 + x_2                    | A            | 1
Exp1   | e^(x_1+x_2)                         | e^(y_1)               | y_1 = x_1 + x_2                     | A            | 1
Exp2   | 2 e^(x_1+x_2)                       | 2 e^(y_1)             | y_1 = x_1 + x_2                     | A            | 1
Sq1    | x_1^2 x_2^2                         | y_1^2                 | y_1 = x_1 x_2                       | A            | 1
Sq2    | (x_1 x_2 + x_3)^2                   | y_1^2                 | y_1 = x_1 x_2 + x_3                 | A            | 1
SqCos1 | (x_1 x_2 + x_3)^2 cos(x_1 x_2)      | y_1^2 cos(x_1 x_2)    | y_1 = x_1 x_2 + x_3                 | A            | 1
SqExp2 | (x_1+x_2)^2 + 2 e^1 e^(x_1+x_2)     | y_1^2 + 2 e^(1+y_1)   | y_1 = x_1 + x_2                     | A            | 1
SqExp3 | (x_1+x_2)^2 + 2 e^(1+x_1+x_2)       | y_1^2 + 2 e^(1+y_1)   | y_1 = x_1 + x_2                     | A            | 1
CosExp | cos(e^(x_1) + x_2) + cos(x_2)       | cos(y_1) + cos(x_2)   | y_1 = e^(x_1) + x_2                 | A            | 1

Schwefel-32 (n=2) test problem. Mathematica found a good substitution, while Maple did not.

All the transformations were performed with Mathematica 9.0 under a time constraint, on the same computer that is described in Section 4. In those cases where the complete simplifier program did not stop within 1800 seconds, we wrote the message "Out of time" in the Substitutions column of Table 3. Our observations show that the majority of the running time is consumed by the Subs(expr, x, y) function.
We examined 45 well-known global optimization test problems, and our Mathematica program offered equivalent transformations in 9 cases. In other words, our method suggested some simplification for 20% of this extended standard global optimization test set. The next section presents numerical results to demonstrate
that these transformations can be useful for a conventional global optimization solver.

4 Numerical verification of the usefulness of the discussed transformations

This section presents numerical tests that were executed to verify the usefulness of the successful transformations given in Tables 2 and 3. We minimized the original and the transformed problem forms with a conventional global optimization solver in every case of Tables 2 and 3 in which our algorithm produced an equivalent transformation. The quality and efficiency indicators, i.e. the reached global optimum values, the running times, and the numbers of function evaluations, are presented in Tables 4, 5, and 6.

We performed 100 independent runs for every test case with the Matlab implementation of a general stochastic multi-start solver with a quasi-Newton type local search method, namely Global with the BFGS local search [4]. The numerical tests ran on a computer with an Intel i processor and 8 GB RAM, under a 64-bit operating system and 64-bit MATLAB R2011b. The algorithm parameters of Global were set to default. The solver needs an initial search interval for every variable, so we set the interval [-100, 100] for every variable in our self-made test cases (Table 2), and we used the usual bounds for the standard problems (for details see, for example, Appendix A in [10]). In the transformed cases, the bounds were transformed respectively.

In Tables 4, 5, and 6, bold emphasis denotes the better option of the related indicators. Table 4 shows the optimal function values reached by Global with the above mentioned parameters. Fbest denotes the minimal optimum value estimate, Fmean is the mean of the reached optimum values over the 100 runs, and Fvar indicates the variance of the reached minimum values. The true global optimum values were reached in every independent run, for both the original and the transformed forms, in 8 of the 17 cases.
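The flavor of this experiment can be reproduced in miniature with SciPy (a Python sketch; the simple multi-start BFGS loop below is our stand-in for the Global solver, and the start counts and seed are made up), comparing the original Rosenbrock function with its transformed form from Section 3.1:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Original Rosenbrock objective."""
    return 100.0 * (x[0]**2 - x[1])**2 + (1.0 - x[0])**2

def g(y):
    """Transformed form, y1 = x1**2 - x2, y2 = 1 - x1."""
    return 100.0 * y[0]**2 + y[1]**2

rng = np.random.default_rng(0)
evals = {"f": 0, "g": 0}
for _ in range(20):
    # Random starts from the [-100, 100]^2 search box, as in the paper.
    x0 = rng.uniform(-100.0, 100.0, size=2)
    evals["f"] += minimize(f, x0, method="BFGS").nfev
    evals["g"] += minimize(g, x0, method="BFGS").nfev
print(evals)   # the transformed (convex quadratic) form needs fewer evaluations
```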
In 2 further cases (RB and R5), only the transformed form allowed the solver to reach the minimum value in all 100 runs. The transformed form was easier than, or as difficult as, the original to solve with Global in 13 of 17 cases, but in 4 cases the original form was slightly more favorable. Let us compare the numbers of function evaluations performed by the solver in a run (Table 5). Here NumEvalmean refers to the average number of function evaluations, and NumEvalvar gives the variance of the same indicator. The transformed problems needed fewer function evaluations in 14 of 17 cases, and in some cases the transformed form outperformed the original one substantially. For example, Global needed only 17% of the original number of function evaluations to find the optimum of the Rosenbrock problem in the transformed form, 80% of the original for the transformed Branin, and 11% of the original number of function evaluations for the transformed Sq2 problem. The original forms of L8 and L10 were slightly better for the solver in terms of the number of function evaluations and the minimum values found. The original form of the R5 test function needed fewer function evaluations, but did not enable the solver to find the global optimum in every run, while the transformed one did. Coming to the running times (Table 6), the transformed problem form allows Global to run a bit quicker, in almost every case, than on the original
problem form. The average relative improvement in the running times over the whole test set is 32.5%. This indicator is, however, better for our custom-made problems (59.1%) and weaker for the standard problems (8.8%). We can conclude from the computational tests that the successful equivalent transformations of Tables 2 and 3 substantially improve the performance of a traditional global optimization solver.

5 Summary

We have presented new theoretical results on automatic symbolic transformations to simplify constrained nonlinear optimization problems. An extensive computational test has been completed on standard global optimization test problems and on other often used global optimization test functions, together with some custom-made problems designed especially to test the capabilities of symbolic simplification algorithms. Maple and Mathematica based implementations were compared. The bottom line of the test results is that most of the simplifiable cases were recognized by our new, Mathematica based algorithm, and the substitutions were correct. Tests with a common numerical solver, Global, were performed to check the usefulness of the produced transformations. We also reported a few unresolved test instances, and indicated possible further extensions to improve the efficiency of interval global optimization procedures. The introduced computational technique can directly be applied to produce high level programming language code such as Fortran or C. We also plan to prepare a web site that will run our program on the server side to simplify user-specified optimization problems.

Acknowledgements This work was partially supported by the Grant TÁMOP A- 11/1/KONV

References

1. Alefeld, G. and J. Herzberger: Introduction to Interval Computation. Academic Press, New York
2. Antal, E., T. Csendes, and J. Virágh: Nonlinear Transformations for the Simplification of Unconstrained Nonlinear Optimization Problems. Central European J.
Operations Research 21(4) (2013)
3. Avanzolini, G. and P. Barbini: Comment on Estimating Respiratory Mechanical Parameters in Parallel Compartment Models. IEEE Transactions on Biomedical Engineering 29 (1982)
4. Csendes, T., L. Pál, J.O.H. Sendín, and J.R. Banga: The GLOBAL Optimization Method Revisited. Optimization Letters 2 (2008)
5. Csendes, T. and T. Rapcsák: Nonlinear Coordinate Transformations for Unconstrained Optimization. I. Basic Transformations. J. Global Optimization 3(2) (1993)
6. Gay, D.M.: Symbolic-Algebraic Computations in a Modeling Language for Mathematical Programming. In: Symbolic Algebraic Methods and Verification Methods, G. Alefeld, J. Rohn, and T. Yamamoto, eds., Springer-Verlag (2001)
7. Liberti, L.: Reformulations in Mathematical Programming: Definitions and Systematics. RAIRO-Operations Research 43 (2009)
8. Liberti, L., S. Cafieri, and D. Savourey: The Reformulation-Optimization Software Engine. Mathematical Software, ICMS 2010, Lecture Notes in Computer Science 6327 (2010)
9. Mészáros, Cs. and U.H. Suhl: Advanced preprocessing techniques for linear and quadratic programming. OR Spectrum 25(4) (2003)
10. Pál, L.: Global optimization algorithms for bound constrained problems. PhD Thesis, University of Szeged (2011)
11. Rapcsák, T. and T. Csendes: Nonlinear Coordinate Transformations for Unconstrained Optimization. II. Theoretical Background. J. Global Optimization 3(3) (1993)
12. Stoutemyer, D.R.: Ten commandments for good default expression simplification. J. Symbolic Computation 46 (2011)
13. Tóth, B. and T. Csendes: Empirical investigation of the convergence speed of inclusion functions. Reliable Computing 11 (2005)
Table 3 Results for the standard global optimization test functions solved by the new method
(columns per row: ID, Dim., New variables; Substitutions; Problem type and Result type)

BR, 2, y = [x_1, y_1]; y_1 = (5/π)x_1 − (5.1/(4π^2))x_1^2 + x_2; A 1
Easom, 2, y = x; none; D 2
G5, 5, y = x; none; D 2
G7, 7, y = x; none; D 2
GP, 2, y = x; none; D 2
H3, 3, y = x; none; D 2
H6, 6, y = x; none; D 2
L1, 1, y = x; none; D 2
L2, 1, y = x; none; D 2
L3, 2, y = x; none; D 2
L5, 2, y = x; none; D 2
L8, 3, y = [y_1, y_2, y_3]; y_i = (x_i − 1)/4 for i = 1, 2, 3; A 1
L9, 4, y = [y_1, ..., y_4]; y_i = (x_i − 1)/4 for i = 1, ..., 4; A 1
L10, 5, y = [y_1, ..., y_5]; y_i = (x_i − 1)/4 for i = 1, ..., 5; A 1
L11, 8, y = [y_1, ..., y_8]; y_i = (x_i − 1)/4 for i = 1, ..., 8; A 1
L12, 10, y = x; Out of time; A 2
L13, 2, y = x; none; D 2
L14, 3, y = x; none; D 2
L15, 4, y = x; none; D 2
L16, 5, y = x; none; D 2
L18, 7, y = x; none; D 2
Sch21 (Beale), 2, y = x; none; C 2
Sch214 (Powell), 4, y = x; none; D 2
Sch218 (Matyas), 2, y = x; none; D 2
Sch227, 2, y = x; none; D 2
Sch25 (Booth), 2, y = x; none; C 2
Sch31, 3, y = x; none; D 2
Sch31, 5, y = x; none; D 2
Sch32, 2, y = [y_1, y_2]; y_1 = x_1 − x_2^2, y_2 = x_2 − 1; A 1
Sch32, 3, y = x; none; D 2
Sch37, 5, y = x; none; D 2
Sch37, 10, y = x; none; D 2
SHCB, 2, y = x; none; D 2
THCB, 2, y = x; none; D 2
Rastrigin, 2, y = x; none; C 2
RB, 2, y = [y_1, y_2]; y_1 = x_1^2 − x_2, y_2 = 1 − x_1; A 1
RB5, 5, y = [x_1, x_2, x_3, x_4, y_1]; y_1 = x_4^2 − x_5; A 1
S5, 4, y = x; none; D 2
S7, 4, y = x; none; D 2
S10, 4, y = x; Out of time; D 2
R4, 2, y = x; none; C 2
R5, 3, y = [x_1, x_2, y_2]; y_1 = 3 + x_3, y_2 = (π y_1)/4; A 1
R6, 5, y = x; Out of time; A 2
R7, 7, y = x; Out of time; A 2
R8, 9, y = x; Out of time; A 2
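To make the RB row of Table 3 concrete: with y_1 = x_1^2 − x_2 and y_2 = 1 − x_1, the Rosenbrock objective 100(x_2 − x_1^2)^2 + (1 − x_1)^2 becomes the separable quadratic 100 y_1^2 + y_2^2, whose minimum 0 at y = (0, 0) maps back to x = (1, 1). A minimal numeric check of this identity (our own Python illustration; the paper's implementation is in Mathematica):

```python
import numpy as np

def rb_original(x1, x2):
    # Rosenbrock objective in the original variables.
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def rb_transformed(y1, y2):
    # Same objective after the Table 3 substitution
    # y1 = x1**2 - x2, y2 = 1 - x1: a separable quadratic.
    return 100.0 * y1 ** 2 + y2 ** 2

rng = np.random.default_rng(1)
for x1, x2 in rng.uniform(-100.0, 100.0, size=(5, 2)):
    assert np.isclose(rb_original(x1, x2),
                      rb_transformed(x1 ** 2 - x2, 1.0 - x1))
print("substitution verified on 5 random points")
```

The same pattern of check applies to the other "A 1" rows of the table, with their respective substitutions.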
Table 4 Optimal function values found by Global. For each of the 17 test cases (the Exp, Sq, SqCos, SqExp, CosExp, BR, L, RB, R, and Sch families), the table lists Fbest, Fmean, and Fvar for the original and for the transformed problem form. (The numeric entries are not recoverable from the source.)

Table 5 Number of function evaluations of Global. For the same 17 test cases as Table 4, the table lists NumEvalmean and NumEvalvar for the original and for the transformed problem form. (The numeric entries are not recoverable from the source.)
Table 6 Running times of Global in seconds. For the same 17 test cases as Tables 4 and 5, the table lists Tmean and Tvar for the original and for the transformed problem form. (The numeric entries are not recoverable from the source.)
EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS JIANWEI DIAN AND R BAKER KEARFOTT Abstract Traditional computational fixed point theorems, such as the Kantorovich theorem (made rigorous
More information3.10 Lagrangian relaxation
3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the
More informationThe simplex algorithm
The simplex algorithm The simplex algorithm is the classical method for solving linear programs. Its running time is not polynomial in the worst case. It does yield insight into linear programs, however,
More informationSequences & Functions
Ch. 5 Sec. 1 Sequences & Functions Skip Counting to Arithmetic Sequences When you skipped counted as a child, you were introduced to arithmetic sequences. Example 1: 2, 4, 6, 8, adding 2 Example 2: 10,
More informationChapter 9: Systems of Equations and Inequalities
Chapter 9: Systems of Equations and Inequalities 9. Systems of Equations Solve the system of equations below. By this we mean, find pair(s) of numbers (x, y) (if possible) that satisfy both equations.
More informationEXAMPLES OF PROOFS BY INDUCTION
EXAMPLES OF PROOFS BY INDUCTION KEITH CONRAD 1. Introduction In this handout we illustrate proofs by induction from several areas of mathematics: linear algebra, polynomial algebra, and calculus. Becoming
More informationApplication of Statistical Techniques for Comparing Lie Algebra Algorithms
Int. J. Open Problems Comput. Math., Vol. 5, No. 1, March, 2012 ISSN 2074-2827; Copyright c ICSRS Publication, 2012 www.i-csrs.org Application of Statistical Techniques for Comparing Lie Algebra Algorithms
More informationA Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization
A Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization Sven Leyffer 2 Annick Sartenaer 1 Emilie Wanufelle 1 1 University of Namur, Belgium 2 Argonne National Laboratory, USA IMA Workshop,
More information