Cost Minimization of a Multiple Section Power Cable Supplying Several Remote Telecom Equipment
Joaquim J. Júdice (Departamento de Matemática, Universidade de Coimbra and Instituto de Telecomunicações, Portugal, Joaquim.Judice@co.it.pt)
Victor Anunciada (Instituto Superior Técnico, UTL and Instituto de Telecomunicações, Portugal, avaa@lx.it.pt; V. Anunciada passed away on September 29, 2007)
Carlos P. Baptista (Escola Superior de Tecnologia, Instituto Politécnico de Tomar, Portugal, Carlos.Perquilhas@aim.estt.ipt.pt)

Abstract

An optimization problem is described that arises in telecommunications and is associated with the multiple cross-sections of a single power cable used to supply remote telecom equipment. The problem consists of minimizing the volume of copper material used in the cables and, consequently, the total cable cost. Two main formulations for the problem are introduced and some properties of the functions and constraints involved are presented. In particular, it is shown that the optimization problems are convex and have a unique optimal solution. A Projected Gradient algorithm is proposed for finding the global minimum of the optimization problem, taking advantage of the particular structure of the second formulation. An analysis of the performance of the algorithm on real-life problems is also presented.

Keywords: Nonlinear Optimization, Convex Programming, Energy Systems, Telecommunications

1 Introduction

This work describes the formulation and resolution of an engineering optimization problem that arose in a project of the Instituto de Telecomunicações (Portuguese research institute for telecommunications) with the company Portugal Telecom [1]. The problem consists of optimizing the multiple cross-sections of a single power cable used to supply remote telecom equipment.

Electric power transmission is usually operated at an almost constant voltage. In fact, electric appliances and equipment require a constant supply voltage, named the nominal
operating voltage. In alternating current (AC) distribution networks, the nominal operating voltage is 230 V in Europe and 110 V in the USA. Those values are the root mean square value of a sinusoidal voltage that is a function of time and frequency (frequencies of 50 Hz in Europe and 60 Hz in the USA). Electric appliances may accept only narrow variations of those voltages, typically tolerances of +10%, -15%. If an electrical generator operates at a fixed voltage V and supplies a distribution line with several nodes, where loads are connected, there are voltage drops along the line, described by Ohm's law (the voltage drop is proportional to the line impedance and line current, and the line impedance is proportional to the line length). However, when currents are at a maximum value in any one of the nodes, the voltage must be kept in the interval of acceptable voltages. Consequently, power distribution is made with oversized copper or aluminum cables. The maximum length of the distribution cables is limited, in order to avoid excessive voltage drops or excessive cross-sections of the cables; usually the cable length is limited to less than 2 km. The cables' cross-sections are standardized considering the maximum current in the system and the maximum allowed voltage drops. For larger distances, the distribution networks use transformers in order to increase the operating voltage to 10 kV or 15 kV, reducing the line currents in the same proportion. For very long distances or large power, the voltage increases up to 440 kV or more.

Power distribution operates at an almost constant voltage. Consequently, power lines are interconnected with transformers in order to keep acceptable sizes of the cables. Industry has standardized cables, maximum distances and operating voltages in order to obtain acceptable values for the system cost and voltage drops. A different situation occurs when a small amount of energy is required very far from any possible supply point. This happens on a motorway, far from any town,
where it is required to power a video camera and a radio transmitter. The use of optical fibers in long-distance telecommunications implies the use of electronic signal-conditioning equipment placed along the fibers, in places that can be more than 100 km from the nearest power supply. It is possible to construct a copper cable for energy supply along the fiber cable. However, such long distances imply very large voltage drops or bulky and costly cables, assuming constant-voltage operation. In the above-mentioned project, the authors considered the possibility of allowing very large voltage drops in the cable, in order to avoid large cable costs. This option is possible because telecom equipment includes voltage regulators and storage batteries that can operate with voltage tolerances up to 50%. When the power cable supplies several devices in different places along the cable, the cross-section of each part of the cable should be optimized.

As shown in this paper, the model under consideration can be formulated as a nonlinear programming problem. We are able to show that the constraint set of this program is bounded and the objective function is strictly convex on this constraint set. Hence the optimization problem has a unique optimal solution, which can easily be found by a nonlinear programming code such as MINOS [14]. By exploiting the structure of this optimization problem, it is also
possible to reformulate it as a strictly convex nonlinear program on a simplex. This latter optimization problem can be efficiently processed by a Projected Gradient algorithm [3, 4, 5] that fully exploits the structure of the constraint set. Computational experience with small instances of the problem illustrates the validity of the formulation for its purpose and of the Projected Gradient algorithm for processing the associated nonlinear program.

The organization of the paper is as follows. In Section 2 the model and its formulation are introduced. A heuristic procedure used in [1] to find a feasible solution to the associated optimization problem is reviewed in Section 3. The solution of the nonlinear program under consideration is fully discussed in Section 4. The alternative formulation with simplex constraints and the Projected Gradient algorithm for its solution are described in Sections 5 and 6. Finally, some computational experience and conclusions are reported.

2 Model Description

A long cable with two conducting wires is supplied at one end by a constant voltage generator with a known voltage $v_I$. The cable is constituted by $n$ sections. The cross-sections of the conductors may differ from section to section (although in each section of the cable the cross-section of both conducting wires should be the same). If $l_i$ is the length of each section $i$ ($i = 1, 2, \ldots, n$), then the total length of the cable is $\sum_{i=1}^{n} l_i$. Besides the voltage generator and the $n$ sections, we assume the existence of $n$ nodes along the cable, each representing an electrical load. Section $i$ connects two consecutive nodes $i-1$ and $i$:

    Node $i-1$ ---- Section $i$ ---- Node $i$

where $i = 1, 2, \ldots, n$ (and node 0 coincides with the voltage generator). Therefore node $i$ connects two consecutive sections $i$ and $i+1$:

    Section $i$ ---- Node $i$ ---- Section $i+1$

where $i = 1, 2, \ldots, n-1$, with the exception of node $n$. The load in each node $i$ is known and is characterized by its constant power load $p_i$. The load current $lc_i$ in each node $i$ depends on
the voltage $v_i$ in that node, according to

$$lc_i = \frac{p_i}{v_i}, \qquad i = 1, 2, \ldots, n. \tag{1}$$

The voltage decrease $\Delta v_i$ along each section $i$ ($\Delta v_i = v_{i-1} - v_i$) is, according to Ohm's law [10],

$$\Delta v_i = v_{i-1} - v_i = r_i \, c_i, \tag{2}$$
where $r_i$ and $c_i$ are the resistance of the cable and the current in section $i$, $i = 1, 2, \ldots, n$. As the cable is made of two conducting wires, the resistance of each cable section is twice the resistance of each conductor in that section. Hence

$$r_i = \frac{2 \rho \, l_i}{s_i}, \tag{3}$$

where $\rho$ is the material resistivity, $l_i$ is the length of the section, and $s_i$ is the area of the cross-section. The current $c_i$ in section $i$ is the sum of the currents in the loads at nodes $i, i+1, \ldots, n$:

$$c_i = \sum_{j=i}^{n} lc_j = \sum_{j=i}^{n} \frac{p_j}{v_j}. \tag{4}$$

Consequently, the current in section $i$ ($i = 1, 2, \ldots, n$) depends on the voltages in all later nodes of this section, as the power loads $p_j$ are constant. Now, using the expressions for $r_i$ and $c_i$ in (3) and (4), we obtain from (2)

$$v_{i-1} - v_i = \frac{2 \rho \, l_i}{s_i} \sum_{j=i}^{n} \frac{p_j}{v_j}. \tag{5}$$

Therefore, for $i = 1, 2, \ldots, n$,

$$v_i = v_{i-1} - \frac{2 \rho \, l_i}{s_i} \sum_{j=i}^{n} \frac{p_j}{v_j}. \tag{6}$$

Another data item is the voltage $v_n$ in the last node $n$ of the cable. Considering that the voltage $v_0$ of node 0 coincides with the initial voltage $v_I$, then

$$v_n = a \, v_0, \tag{7}$$

where $a$ is a positive given constant satisfying $\frac{1}{2} \le a < 1$.

The volume of each section $i$ of the cable is equal to $2 \, l_i s_i$. Therefore, the total volume of cable material is

$$V = \sum_{i=1}^{n} 2 \, l_i s_i. \tag{8}$$

As the price of the cable is almost proportional to its volume, its total cost $C$ is given by $C = c V$, where $c$ is a given positive constant:

$$C = c \sum_{i=1}^{n} 2 \, l_i s_i = \sum_{i=1}^{n} 2 \, c \, l_i s_i. \tag{9}$$

The objective of the problem is to determine the cross-section area $s_i$ of the conducting wires in each section that minimizes the total cost of the cable. This is equivalent to minimizing its volume $V$. The values $p_i$ and $l_i$, $i = 1, 2, \ldots, n$, as well as $v_0$, $v_n$, $a$, $\rho$ and $c$, are data of the optimization problem, and we have to find the cross-section areas $s_i$, $i = 1, 2, \ldots, n$, that minimize $C$ (or, equivalently, $V$). The problem also contains the variables $v_1, v_2, \ldots, v_{n-1}$, which are related to the $s_i$, $i = 1, 2, \ldots, n$, by (6).

(Footnote: it is possible to show that if $a < \frac{1}{2}$, the total volume of the cable increases as the voltage $v_n$ decreases [7].)
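Given the cross-sections $s_i$, the system (6) is implicit in the voltages, since each $v_i$ depends on all later voltages through the section currents. A minimal Python sketch (not part of the paper, and with made-up instance data) solves it by fixed-point iteration:

```python
import numpy as np

# Hypothetical instance: the lengths, loads and cross-sections below are made
# up for illustration; they are not the paper's data.
rho = 20.0                          # resistivity (ohm * km^-1 * mm^2)
v0 = 500.0                          # generator voltage v_0 (V)
l = np.array([1.0, 2.0, 1.5])       # section lengths l_i (km)
p = np.array([100.0, 150.0, 80.0])  # node power loads p_i (W)
s = np.array([50.0, 40.0, 30.0])    # assumed cross-sections s_i (mm^2)

def node_voltages(s, l, p, v0, rho, iters=100):
    """Solve the implicit system (6),
        v_i = v_{i-1} - (2*rho*l_i/s_i) * sum_{j>=i} p_j / v_j,
    by fixed-point iteration starting from a flat voltage profile."""
    v = np.full(len(s), v0)
    for _ in range(iters):
        lc = p / v                       # load currents, equation (1)
        c = np.cumsum(lc[::-1])[::-1]    # section currents, equation (4)
        dv = 2.0 * rho * l / s * c       # section voltage drops, (2)-(3)
        v = v0 - np.cumsum(dv)           # accumulate drops from the generator
    return v

v = node_voltages(s, l, p, v0, rho)
print(v)  # strictly decreasing voltages below v0
```

Because the voltage drops are small relative to $v_0$, the iteration converges very quickly in practice; the design problem of the paper is the converse one of choosing the $s_i$.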
The model described can then be formulated as the following nonlinear optimization problem:

$$\text{Minimize} \quad \sum_{i=1}^{n} 2 \, c \, l_i s_i \tag{10}$$

subject to

$$v_i = v_{i-1} - \frac{2 \rho \, l_i}{s_i} \sum_{j=i}^{n} \frac{p_j}{v_j}, \quad i = 1, 2, \ldots, n, \tag{11}$$

$$v_n = a \, v_0, \tag{12}$$

$$v_{i-1} > v_i, \quad i = 1, 2, \ldots, n, \tag{13}$$

$$s_i, \, v_i > 0, \quad i = 1, 2, \ldots, n, \tag{14}$$

where the power loads $p_i$ ($i = 1, 2, \ldots, n$), the lengths $l_i$ ($i = 1, 2, \ldots, n$), the voltages $v_0$ and $v_n$, and the constants $a$, $\rho$ and $c$ are data, and the cross-sections $s_i$ ($i = 1, 2, \ldots, n$) and the voltages $v_1, v_2, \ldots, v_{n-1}$ are the variables of the optimization problem. The objective in (10) is the minimization of the total cost of the cable or, equivalently, of its total volume.

3 An Initial Feasible Solution for the Optimization Problem

As stated in [1], a first feasible solution for the optimization problem can be constructed by assuming that in each cable section $i$, $i = 1, 2, \ldots, n$, the cross-section $s_i$ is proportional to the current $c_i$. Thus $s_i = S \, c_i$ and, from (4), $s_i = S \sum_{j=i}^{n} lc_j$, with $S > 0$. The process of finding such a solution reduces to the computation of the proportionality constant $S$.

Suppose that all the loads along the cable are concentrated in a unique node, so that the cable is made of a single section with length $L = \sum_{i=1}^{n} l_i$ and two nodes, one at each end, namely the voltage generator and the load node. This last node concentrates the electrical loads, whence it has a load power $P = \sum_{i=1}^{n} p_i$. Since the voltage in this node is $a v_0$, we can easily compute the load current $c$ in the last node, the resistance $r$ of the section, and the cross-section $s$ of the conducting wires of that section by

$$c = \frac{P}{a \, v_0}, \tag{15}$$

$$r = \frac{v_0 - a v_0}{c} = \frac{a \, (1-a) \, v_0^2}{P}, \tag{16}$$

$$s = \frac{2 \rho L}{r} = \frac{2 \rho L P}{a \, (1-a) \, v_0^2}. \tag{17}$$

Therefore the reference cross-section $S$ is defined by

$$S = \frac{s}{c} = \frac{2 \rho L}{(1-a) \, v_0}. \tag{18}$$

The cross-sections $s_i$ are computed from

$$s_i = S \sum_{j=i}^{n} lc_j, \quad i = 1, 2, \ldots, n. \tag{19}$$
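The heuristic can be sketched in a few lines of Python (an illustration, not the paper's implementation; the instance data are made up). It relies on the property, shown at the end of this section, that under $s_i = S \, c_i$ the voltage drop of each section is $2 \rho \, l_i / S$, so the voltages can be written down directly before the currents and cross-sections:

```python
import numpy as np

def heuristic_solution(l, p, v0, a, rho):
    """Heuristic of Section 3: cross-sections proportional to the section
    currents, s_i = S * c_i, with S = 2*rho*L / ((1-a)*v0) from (18).
    Each section's voltage drop is then 2*rho*l_i/S, proportional to l_i."""
    L = l.sum()
    S = 2.0 * rho * L / ((1.0 - a) * v0)     # reference cross-section (18)
    v = v0 - (2.0 * rho / S) * np.cumsum(l)  # node voltages, cf. (20)
    lc = p / v                               # load currents (1)
    c = np.cumsum(lc[::-1])[::-1]            # section currents (4)
    s = S * c                                # cross-sections (19)
    return s, v, S

# Made-up data for illustration (not the paper's Example 1).
l = np.array([1.0, 2.0, 1.5])
p = np.array([100.0, 150.0, 80.0])
v0, a, rho = 500.0, 0.6, 20.0
s, v, S = heuristic_solution(l, p, v0, a, rho)
print(v[-1], a * v0)  # last node voltage matches a*v0 by construction
```

Note that the telescoped drops sum to $(1-a) v_0$, so the end-node constraint (7) holds automatically for any data.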
The voltages $v_i$ can now be obtained recursively from (6) by

$$v_{i-1} = v_i + \frac{2 \rho \, l_i}{s_i} \sum_{j=i}^{n} \frac{p_j}{v_j}, \tag{20}$$

starting from $v_n = a v_0$ and proceeding backwards for $i = n, n-1, \ldots, 1$. In the particular case where $p_1 = p_2 = \cdots = p_n$, with $v_n = \frac{v_0}{2}$, the voltages can be determined as follows:

$$v_{i-1} - v_i = \frac{l_i}{\sum_{k=1}^{n} l_k} \, (v_0 - v_n), \quad i = 1, 2, \ldots, n. \tag{21}$$

Then by (20) the cross-sections satisfy

$$s_i = 2 \rho \, l_i \, \frac{\sum_{j=i}^{n} \frac{p_j}{v_j}}{v_{i-1} - v_i}, \quad i = 1, 2, \ldots, n. \tag{22}$$

Computational experience (Section 7) shows that this process finds a feasible solution that is in general a tight upper bound on the global minimum value of the optimization problem. It is also important to note that, since $s_i = S \sum_{j=i}^{n} lc_j$, equations (5) can be rewritten as

$$\Delta v_i = v_{i-1} - v_i = \frac{2 \rho \, l_i}{S}.$$

This means that, for the feasible solution obtained through this heuristic method, the voltage decrease $\Delta v_i$ in each section $i$ is proportional to the length $l_i$ of this section.

4 Solution of the Nonlinear Optimization Problem

Consider again the model described in Section 2. Looking carefully at this formulation, we note that the constraints $s_i, v_i > 0$, $i = 1, 2, \ldots, n$, are redundant. In fact, according to (12) and (13), we have

$$v_0 > v_1 > v_2 > \cdots > v_n = a \, v_0.$$

Since $a$ and $v_0$ are positive, then $v_i \ge a v_0 > 0$ for all $i = 1, 2, \ldots, n$. As $v_{i-1} - v_i > 0$, $\rho > 0$, $l_i > 0$ and $p_j > 0$ for all $j$, then $s_i > 0$ for all $i = 1, 2, \ldots, n$. Then the variables $s_i$ can be assumed to be unrestricted in sign and can be eliminated from the problem to obtain the following nonlinear program in the variables $v_i$:

$$\text{NLP:} \quad \underset{v \in \mathbb{R}^n}{\text{Minimize}} \quad 4 \rho c \sum_{i=1}^{n} l_i^2 \, \frac{\sum_{j=i}^{n} \frac{p_j}{v_j}}{v_{i-1} - v_i} \tag{23}$$

subject to

$$v_{i-1} > v_i, \quad i = 1, 2, \ldots, n, \tag{24}$$

$$v_n = a \, v_0, \tag{25}$$

where $v_0$, $a$, $\rho$, $c$ and the $l_i$ are positive data. After a global minimum is obtained for this optimization problem NLP, the cross-section areas $s_i$ can be found from (11). The feasible set of this program is not closed, whence the Weierstrass Theorem [2] cannot be used to guarantee the existence of a minimum. To overcome this difficulty, we consider the additional variables $z_i$, defined by

$$z_i = v_{i-1} - v_i, \quad i = 1, 2, \ldots, n,$$
and the nonlinear program:
$$\text{NLP1:} \quad \text{Minimize} \quad f(z, v) = 4 \rho c \sum_{i=1}^{n} l_i^2 \, \frac{\sum_{j=i}^{n} \frac{p_j}{v_j}}{z_i} \tag{26}$$

subject to

$$z_i - v_{i-1} + v_i = 0, \quad i = 1, 2, \ldots, n, \tag{27}$$

$$z_i \ge \varepsilon, \quad i = 1, 2, \ldots, n, \tag{28}$$

$$v_n = a \, v_0, \tag{29}$$

where $\varepsilon$ is a positive and small real number (in practice $\varepsilon = 10^{-6}$). As, for each $i = 1, 2, \ldots, n$, $v_{i-1} > v_i \Leftrightarrow z_i = v_{i-1} - v_i > 0$, we guarantee that the function $f$ of NLP1 has a global minimum on the set

$$K = \{(z, v) \in \mathbb{R}^n \times \mathbb{R}^n : (27)\text{--}(29) \text{ hold}\}.$$

Consider now the objective function of NLP1, the $n$ functions $f_i$ defined by

$$f_i(z_i, v_i, \ldots, v_n) = \frac{\sum_{j=i}^{n} \frac{p_j}{v_j}}{z_i}, \quad i = 1, 2, \ldots, n, \tag{30}$$

and the sets

$$K_i = \{(z_i, v_i, \ldots, v_n) \in \mathbb{R}^{n-i+2} : z_i > 0; \ v_j > 0, \ j = i, \ldots, n\}, \quad i = 1, 2, \ldots, n. \tag{31}$$

Therefore, for all $(z, v) \in K$,

$$f(z, v) = 4 \rho c \sum_{i=1}^{n} l_i^2 \, f_i(z_i, v_i, \ldots, v_n). \tag{32}$$

Then $f$ is strictly convex on $K$, as the following property holds:

Theorem. The function $f_i$ ($i = 1, 2, \ldots, n$) defined by (30) is strictly convex on $K_i$.

Proof: The gradient of $f_i$ at each point of $K_i$ always exists and is given by

$$\nabla f_i(z_i, v_i, \ldots, v_n) = \begin{pmatrix} -\dfrac{1}{z_i^2} \sum_{j=i}^{n} \dfrac{p_j}{v_j} \\[4pt] -\dfrac{1}{z_i} \dfrac{p_i}{v_i^2} \\ \vdots \\ -\dfrac{1}{z_i} \dfrac{p_n}{v_n^2} \end{pmatrix}.$$

The Hessian of $f_i$ is
$$\nabla^2 f_i(z_i, v_i, \ldots, v_n) = A_1 + A_2,$$

where

$$A_1 = \mathrm{diag}\left( \frac{1}{z_i^3} \sum_{j=i}^{n} \frac{p_j}{v_j}, \ \frac{p_i}{z_i v_i^3}, \ \frac{p_{i+1}}{z_i v_{i+1}^3}, \ \ldots, \ \frac{p_n}{z_i v_n^3} \right)$$

and $A_2$ is given by

$$A_2 = \begin{pmatrix} \dfrac{1}{z_i^3} \sum_{j=i}^{n} \dfrac{p_j}{v_j} & \dfrac{p_i}{z_i^2 v_i^2} & \cdots & \dfrac{p_n}{z_i^2 v_n^2} \\ \dfrac{p_i}{z_i^2 v_i^2} & \dfrac{p_i}{z_i v_i^3} & & 0 \\ \vdots & & \ddots & \\ \dfrac{p_n}{z_i^2 v_n^2} & 0 & & \dfrac{p_n}{z_i v_n^3} \end{pmatrix}.$$

Now $A_2$ can be written as follows:

$$A_2 = \begin{pmatrix} \alpha & v^T \\ v & D \end{pmatrix},$$

where $\alpha > 0$ and $v \in \mathbb{R}^{n-i+1}$ are given by

$$\alpha = \frac{1}{z_i^3} \sum_{j=i}^{n} \frac{p_j}{v_j}, \qquad v = \left( \frac{p_i}{z_i^2 v_i^2}, \ldots, \frac{p_n}{z_i^2 v_n^2} \right)^T,$$

and

$$D = \mathrm{diag}\left( \frac{p_i}{z_i v_i^3}, \frac{p_{i+1}}{z_i v_{i+1}^3}, \ldots, \frac{p_n}{z_i v_n^3} \right).$$

Since all diagonal elements of $D$ are positive and the Schur complement $\alpha - v^T D^{-1} v$ of $D$ in $A_2$ [6] is zero, $A_2$ is symmetric positive semi-definite. As $A_1$ is symmetric positive definite, $\nabla^2 f_i(z_i, v_i, \ldots, v_n)$ is symmetric positive definite on $K_i$ [6] and $f_i$ is strictly convex on $K_i$.

Since $K$ is compact and $f$ is strictly convex on $K$, $f$ has a unique stationary point on $K$, which is exactly its unique global minimum [2]. As a stationary point of $f$ on $K$ may be found by a local nonlinear programming algorithm, it is easy to obtain the unique global minimum of the function on $K$ and thus to solve the optimization problem.

5 An Alternative Formulation on the Simplex

Consider the program NLP1, introduced in the previous section. We can write the constraints of the program as follows:

$$L v + z = b, \qquad v_n = a \, v_0, \qquad z_i \ge \varepsilon, \quad i = 1, 2, \ldots, n,$$
where

$$L = \begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix} \in \mathbb{R}^{n \times n}, \qquad b = \begin{pmatrix} v_0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \in \mathbb{R}^n, \qquad z = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix} \in \mathbb{R}^n.$$

As $L$ is a triangular nonsingular matrix, we can eliminate the variables $v_i$ by solving $L v = b - z$. Then

$$v_i = v_0 - \sum_{k=1}^{i} z_k, \quad i = 1, 2, \ldots, n. \tag{33}$$

Furthermore,

$$v_n = a v_0 \iff v_0 - (z_1 + z_2 + \cdots + z_n) = a v_0 \iff z_1 + z_2 + \cdots + z_n = (1-a) v_0 \iff \sum_{j=1}^{n} z_j = (1-a) v_0.$$

Then NLP is equivalent to

$$\text{NLP2:} \quad \text{Minimize} \quad f(z) = 4 \rho c \sum_{i=1}^{n} \frac{l_i^2}{z_i} \sum_{j=i}^{n} \frac{p_j}{v_0 - \sum_{k=1}^{j} z_k}$$

subject to $e^T z = (1-a) v_0$ and $z_i \ge \varepsilon$, $i = 1, 2, \ldots, n$, where $e \in \mathbb{R}^n$ is a vector of ones. The feasible set of this program is given by

$$K = \{ z \in \mathbb{R}^n : e^T z = (1-a) v_0; \ z_i \ge \varepsilon, \ i = 1, 2, \ldots, n \}. \tag{34}$$

Hence $K$ is a compact set and, by the Weierstrass Theorem, the function $f$ has a global minimum on $K$. Moreover, the function of NLP2 is obtained from the function of NLP by a linear transformation with a nonsingular matrix. Therefore this function $f$ is strictly convex on $K$ [13]. Hence the computation of the unique global minimum of NLP2 may be done by a local nonlinear programming method. In the next section we introduce a Projected Gradient algorithm that finds the unique stationary point of $f$ on $K$ by taking advantage of the structure of the program NLP2. The original variables $v_i$ and $s_i$ can be obtained afterwards by using formulas (33) and (11).
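The telescoping identity behind the simplex constraint is easy to check numerically; a minimal sketch with made-up (decreasing) voltages:

```python
import numpy as np

# Change of variables z_i = v_{i-1} - v_i: telescoping gives
# sum_i z_i = v_0 - v_n = (1 - a) * v_0, the single equality of NLP2.
v0, a = 500.0, 0.6
v = np.array([480.0, 430.0, 300.0])     # assumed voltages, with v_n = a*v0
z = np.concatenate(([v0], v[:-1])) - v  # z_i = v_{i-1} - v_i
print(z, z.sum())                       # z = [20. 50. 130.], sum = 200.0
```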
6 A Projected Gradient Algorithm for the Nonlinear Program on the Simplex

Consider the program NLP2 introduced in the previous section and its constraint set $K$ given by (34). The projected gradient of $f$ at a point $z \in K$ is defined as the vector in $\mathbb{R}^n$

$$g(z) = P_K(z - \eta \nabla f(z)) - z, \tag{35}$$

where $P_K(\cdot)$ is the orthogonal projection onto $K$ and $\eta > 0$ [3]. It is known that, for a given $\eta > 0$, $g(z) = 0$ if and only if $z$ is a stationary point of $f$ on $K$ [3]. The Projected Gradient algorithm is a modification of the Steepest Descent algorithm [8] in which the iterates are forced to stay in $K$ by a projection [4]. It can be shown [3, 5] that this algorithm possesses global convergence towards a stationary point under reasonable hypotheses. The steps of the algorithm are presented below.

Projected Gradient Algorithm for NLP2:

Step 0: Find an initial solution $z \in K$.

Step 1 (Computation of Search Direction): Compute the direction $d \in \mathbb{R}^n$ by $d = y - z$, where $y = P_K(z - \eta \nabla f(z))$, with $\eta > 0$.

Step 2 (Stopping Test): If $\|d\|_2 < tol$, then stop: $z$ is a stationary point of $f$ on $K$. Otherwise, go to Step 3.

Step 3 (Armijo Criterion): Find $\alpha > 0$ such that

$$f(z + \alpha d) \le f(z) + \beta \, \alpha \, \nabla f(z)^T d,$$

with $0 < \beta < 1$, reducing the trial step by a factor $\theta \in (0, 1)$ until the condition holds.

Step 4 (Iterate Updating): Update the current solution $z \leftarrow z + \alpha d$ and return to Step 2.

Next, we discuss some details of the algorithm.
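The steps above can be sketched in Python (a hedged illustration, not the paper's Fortran 77 implementation). The projection is passed in as an argument; the helper `block_project` is a plain implementation of the projection onto a set of the form (34), computed by repeatedly fixing at the bound the variables that violate it, in the spirit of the BLOCK scheme described in part (II) below:

```python
import numpy as np

def block_project(u, b0, eps):
    """Orthogonal projection of u onto {y : e^T y = b0, y_i >= eps},
    the set K of (34). Assumes b0 > n*eps so the set is nonempty."""
    free = np.ones(len(u), dtype=bool)
    while True:
        lam = (b0 - eps * (~free).sum() - u[free].sum()) / free.sum()
        viol = free & (u + lam < eps)   # free variables below the bound
        if not viol.any():
            break
        free &= ~viol                   # fix violators at eps, recompute
    y = np.full(len(u), eps)
    y[free] = u[free] + lam
    return y

def projected_gradient(f, grad, project, z0, eta=1e-2, beta=1e-4, theta=0.5,
                       tol=1e-6, max_iter=20000):
    """Steps 0-4 above, for a feasible set given through its projection."""
    z = project(z0)                                # Step 0: feasible start
    for _ in range(max_iter):
        d = project(z - eta * grad(z)) - z         # Step 1: d = y - z
        if np.linalg.norm(d) < tol:                # Step 2: stationarity
            break
        fz, slope = f(z), grad(z) @ d
        alpha = 1.0
        while f(z + alpha * d) > fz + beta * alpha * slope:  # Step 3: Armijo
            alpha *= theta
        z = z + alpha * d                          # Step 4: update iterate
    return z
```

As a quick sanity check, minimizing $\|z - t\|_2^2$ over the unit simplex with this routine returns the Euclidean projection of $t$ onto the simplex.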
(I) Computation of the Gradient of $f$

By simple manipulation, it is possible to write $f(z)$ as follows:

$$f(z) = 4 \rho c \sum_{k=1}^{n} \left( \sum_{j=1}^{k} \frac{l_j^2}{z_j} \right) \frac{p_k}{v_0 - \sum_{j=1}^{k} z_j}.$$

Therefore it is easy to obtain the following formulas for the components of the gradient vector $\nabla f(z)$, $i = 1, 2, \ldots, n$:

$$\frac{\partial f(z)}{\partial z_i} = 4 \rho c \left[ \sum_{k=i}^{n} \left( \sum_{j=1}^{k} \frac{l_j^2}{z_j} \right) \frac{p_k}{\left( v_0 - \sum_{j=1}^{k} z_j \right)^2} - \frac{l_i^2}{z_i^2} \sum_{k=i}^{n} \frac{p_k}{v_0 - \sum_{j=1}^{k} z_j} \right]. \tag{36}$$

(II) Computation of the Projection $y = P_K(z - \eta \nabla f(z))$

1. Find $u = z - \eta \nabla f(z)$, using (36) to compute $\nabla f(z)$.

2. The vector $y$ is the unique optimal solution of the strictly convex quadratic program

$$\underset{y \in \mathbb{R}^n}{\text{Minimize}} \quad \frac{1}{2} \| u - y \|_2^2 \quad \text{subject to} \quad e^T y = (1-a) v_0, \quad y_i \ge \varepsilon, \ i = 1, 2, \ldots, n.$$

But $\| u - y \|_2^2 = (u-y)^T (u-y) = u^T u - 2 u^T y + y^T y$, and therefore this program is equivalent to

$$\underset{y \in \mathbb{R}^n}{\text{Minimize}} \quad -q^T y + \frac{1}{2} y^T y \quad \text{subject to} \quad e^T y = b_0, \quad y_i \ge \varepsilon, \ i = 1, 2, \ldots, n,$$

where $b_0 = (1-a) v_0$ and $q = u$.

There are several algorithms described in the literature to process this kind of quadratic program [9, 15, 12]. Among these, the Block Pivotal Principal Pivoting (BLOCK) algorithm described in [12] is strongly polynomial and very efficient in practice. The steps of this process are presented below.
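The rewritten objective and the gradient formula (36) translate directly into vectorized code; this sketch (with made-up instance data, not the paper's) can be checked against finite differences:

```python
import numpy as np

def f_nlp2(z, l, p, v0, rho, cc):
    """NLP2 objective in the rewritten form of (I):
    f(z) = 4*rho*c * sum_k (sum_{j<=k} l_j^2/z_j) * p_k / (v0 - sum_{j<=k} z_j)."""
    A = np.cumsum(l**2 / z)            # A_k = sum_{j<=k} l_j^2 / z_j
    r = v0 - np.cumsum(z)              # r_k = v0 - sum_{j<=k} z_j
    return 4.0 * rho * cc * np.sum(A * p / r)

def grad_f(z, l, p, v0, rho, cc):
    """Gradient components (36)."""
    A = np.cumsum(l**2 / z)
    r = v0 - np.cumsum(z)
    T1 = np.cumsum((A * p / r**2)[::-1])[::-1]  # tail sums over k >= i
    T2 = np.cumsum((p / r)[::-1])[::-1]
    return 4.0 * rho * cc * (T1 - (l**2 / z**2) * T2)

# Made-up instance for a quick check (not the paper's data):
l = np.array([1.0, 2.0, 1.5, 0.5])
p = np.array([100.0, 150.0, 80.0, 60.0])
v0, rho, cc = 500.0, 20.0, 1.0
z = np.array([10.0, 20.0, 15.0, 5.0])
print(f_nlp2(z, l, p, v0, rho, cc), grad_f(z, l, p, v0, rho, cc))
```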
BLOCK Algorithm:

Step 0: Let $F = \{1, 2, \ldots, n\}$.

Step 1: Compute

$$\lambda = \frac{b_0 - \varepsilon \, (n - |F|) - \sum_{i \in F} q_i}{|F|}.$$

Step 2 (Stopping Test): Let $H = \{ i \in F : q_i + \lambda < \varepsilon \}$. If $H = \emptyset$, stop: the vector $y = (y_i)$, $i = 1, 2, \ldots, n$, with

$$y_i = \begin{cases} \varepsilon, & \text{if } i \notin F, \\ q_i + \lambda, & \text{if } i \in F, \end{cases}$$

is the unique optimal solution of the quadratic program. Otherwise set $F = F \setminus H$ and go to Step 1.

(III) Computation of the Optimal Values of the Variables $v_i$ and $s_i$

Let $\bar z = (\bar z_i)$, $i = 1, 2, \ldots, n$, be the optimal solution of the problem NLP2. The variables $v_i$, $i = 1, 2, \ldots, n$, of the original problem satisfy $L v = b - \bar z$. Therefore

$$v_i = v_{i-1} - \bar z_i, \quad i = 1, 2, \ldots, n \quad (v_0 \text{ given}). \tag{37}$$

Furthermore, (11) provides the values of the variables $s_i$:

$$s_i = \frac{2 \rho \, l_i}{\bar z_i} \sum_{j=i}^{n} \frac{p_j}{v_j}, \quad i = 1, 2, \ldots, n.$$

7 Computational Experience

The computational experience presented in this section was performed on a PC with a 3 GHz Pentium 4 processor and 2 GB of RAM, running Linux 2.6.10. The active-set code MINOS [14] (one of the nonlinear solvers of GAMS) has been used for processing both the nonlinear programs NLP1 and NLP2. Furthermore, the Projected Gradient algorithm for the program NLP2 was implemented in Fortran 77 [11], using the GNU Fortran (g77) compiler, version 3.4.3, with the options -O2 -malign-double -funroll-loops. Running times presented in this section are given in CPU seconds, excluding input and output. The times for the Projected Gradient algorithm were measured by the etime() function.

The results reported in this section are concerned with two actual instances of the problem found in practice, the first (Example 1) presenting a reduced voltage decrease and the second
(Example 2) presenting a high voltage decrease. The data for these problems are stated below and in Tables 1 and 2:

$n = 40$; $\rho = 20 \ (\Omega \cdot \mathrm{km}^{-1} \cdot \mathrm{mm}^2)$; $c > 0$.

Example 1 (reduced voltage decrease): $V_0 = 500$ V, $V_{40} = 300$ V, $a = 0.6$.
Example 2 (high voltage decrease): $V_0 = 260$ V, $V_{40} = 190$ V, $a \approx 0.731$.

[Table 1: Problem data for Examples 1 and 2: section lengths $l_i$ (km) and power loads $p_i$ (W) for each example; the numeric entries are not recoverable from this copy.]
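The reference cross-sections quoted for the two examples are consistent with formula (18), assuming the decimal separators lost in this copy (i.e. $L = 48.40$ km and $L = 15.42$ km); a quick arithmetic check:

```python
# Reference cross-sections S = 2*rho*L / ((1-a)*v0), equation (18).
# The values of L below are assumed reconstructions of the paper's totals.
rho = 20.0
S1 = 2 * rho * 48.40 / ((1 - 0.6) * 500.0)            # Example 1
S2 = 2 * rho * 15.42 / ((1 - 190.0 / 260.0) * 260.0)  # Example 2
print(S1, S2)  # approximately 9.68 and 8.81 (mm^2)
```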
[Table 2: Problem data for Examples 1 and 2 (continuation): section lengths $l_i$ (km) and power loads $p_i$ (W) for the remaining nodes; the numeric entries are not recoverable from this copy.]

For the first example the reference cross-section value is $S = 9.680$ (with $L = 48.40$ and $P = 7750$), and for the second example it is $S = 8.811$ (with $L = 15.42$ and $P = \ldots$).

The heuristic procedure described in Section 3 found, for each example, a feasible solution given by the voltages $v = (v_i)$, $i = 0, 1, \ldots, 40$ (with $v_0 = 500.000$ for Example 1 and $v_0 = 260.000$ for Example 2), the cross-sections $s = (s_i)$, $i = 1, 2, \ldots, 40$, and the corresponding total costs $C$. [The numeric entries of these vectors and the cost values are not recoverable from this copy.] As mentioned earlier, each of these cost values is an upper bound on the optimal value given by the nonlinear programs NLP1 and NLP2.

The unique optimal solution of those programs found by MINOS is given, for each example, by the vectors $v^* = (v_i^*)$, $i = 0, 1, \ldots, 40$, and $s^* = (s_i^*)$, $i = 1, 2, \ldots, 40$. [Again, the numeric entries are not recoverable from this copy.]

For both examples, we have used as initial points for MINOS the default choice given by the code and the solution computed by the heuristic procedure. The values of the variables $v_i$ show that the constraints $z_i \ge \varepsilon$ ($\varepsilon = 10^{-6}$) are inactive at the optimal solutions of both problems. Tables 3 and 4 report the numerical results under these two choices and lead to the conclusion that it is recommended to start with the feasible solution given by the heuristic procedure. It is also important to add that formulation NLP1 is preferable if an active-set method such as MINOS is employed to process the nonlinear program. The reason for this lies in the more complex formulas of the objective function of formulation NLP2.

[Table 3: Comparative performance of the formulations NLP1 and NLP2 for Example 1, solved by GAMS/MINOS: iterations, CPU times and costs for the default MINOS initial solution and for the heuristic solution; the numeric entries are not recoverable from this copy.]

The implementation of the Projected Gradient algorithm for problem NLP2 requires a choice of the parameters $\beta$, $\theta$ and $\eta$ used by the procedure. For both examples, some tests
have been performed with the algorithm in order to carry out a sensitivity analysis of these parameters.

[Table 4: Comparative performance of the formulations NLP1 and NLP2 for Example 2, solved by GAMS/MINOS: iterations, CPU times and costs for the default MINOS initial solution and for the heuristic solution; the numeric entries are not recoverable from this copy.]

The best choices for the Projected Gradient algorithm have been achieved with $\beta = 10^{-4}$, $\theta = \frac{1}{2}$, $\eta = 10^{-2}$ for Example 1 and $\beta = 10^{-4}$ (or $\beta = 10^{-2}$), $\theta = \frac{1}{10}$, $\eta = 10^{-3}$ for Example 2. Table 5 reports the performance of the Projected Gradient algorithm (with $tol = 10^{-6}$) for the best choice of the parameters and for two initial solutions: the uniform point $z_i = \frac{b_0}{40}$, $i = 1, 2, \ldots, 40$, and the heuristic solution.

[Table 5: Best performances of the Projected Gradient algorithm: iterations, CPU times and costs for the two initial solutions in each example; the numeric entries are not recoverable from this copy.]

These computational results show that the Projected Gradient algorithm is efficient at processing the nonlinear program NLP2. The number of iterations required by the Projected Gradient method is slightly larger than that of the active-set method (MINOS) displayed in Tables 3 and 4; however, the algorithm runs faster than MINOS. Another interesting conclusion is that the use of the initial solution given by the heuristic procedure does not seem to help this method. As the Projected Gradient algorithm fully exploits the structure of the nonlinear program and requires minimal storage, we believe that this method can be useful for the solution of larger instances of this optimization model.

The unique optimal solution found by the algorithms gives about 3% savings for Example 1 and about 12% savings for Example 2, when compared with the heuristic solutions used in
practice in the project reported in this paper. We believe that more substantial savings can be achieved for larger instances of the energy model if the unique optimal solution of NLP1 or NLP2 is used instead of the solution that is nowadays employed in this project.

8 Conclusion

This paper describes formulations and solutions of an engineering optimization problem in the telecommunications area, which consists of optimizing the cross-sections of energy-conducting cables used to supply remote telecom equipment, in order to minimize the volume of copper material used in the cables and, consequently, the cost. Two alternative formulations for the problem have been introduced and a Projected Gradient algorithm was proposed for the second formulation, taking advantage of its particular structure. Computational experience with an implementation of this algorithm has shown that the proposed methodology is efficient in practice. We believe that this type of model and the Projected Gradient algorithm can also be useful in other engineering models used in energy supply, piping and fluid distribution with a similar structure, particularly when the number of nodes is quite large.

References

[1] Victor Anunciada. Cost minimization of a multiple section energy cable supplying remote telecom equipments. Technical report, Instituto de Telecomunicações, Portugal, 2003.

[2] M. Bazaraa, H. Sherali, and C. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, New York, 1993.

[3] D. Bertsekas. Nonlinear Programming. Athena Scientific, Massachusetts, 1999.

[4] D. P. Bertsekas. On the Goldstein-Levitin-Polyak gradient projection method. IEEE Transactions on Automatic Control, 21, 1976.

[5] E. G. Birgin, J. M. Martínez, and M. Raydan. Nonmonotone spectral projected gradient methods on convex sets. SIAM Journal on Optimization, 10, 2000.

[6] R. W. Cottle, J. S. Pang, and R. E. Stone. The Linear Complementarity Problem. Academic Press, New York, 1992.

[7] Raymond DeCarlo
and Pen-Min Lin. Linear Circuit Analysis, second ed. Oxford University Press, Oxford, 2001.

[8] J. E. Dennis and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia, 1996.

[9] J. P. Dussault, J. A. Ferland, and B. Lemaire. Convex quadratic programming with one constraint and bounded variables. Mathematical Programming, 36:90-104, 1986.
[10] R. P. Feynman, R. B. Leighton, and M. Sands. The Feynman Lectures on Physics. Addison-Wesley Publishing Company, New York, 1963.

[11] Maximilian E. Hehl. Linguagem de Programação Estruturada FORTRAN 77. McGraw-Hill, São Paulo, 1987.

[12] J. J. Júdice and F. M. Pires. Solution of large-scale separable strictly convex quadratic programs on the simplex. Linear Algebra and its Applications, 170, 1992.

[13] B. Martos. Nonlinear Programming: Theory and Methods. North Holland, New York, 1975.

[14] B. A. Murtagh and M. A. Saunders. MINOS 5.1 user guide. Technical report, Department of Operations Research, Stanford University, 1987.

[15] P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46, 1990.
Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428
More informationConvex Quadratic Approximation
Convex Quadratic Approximation J. Ben Rosen 1 and Roummel F. Marcia 2 Abstract. For some applications it is desired to approximate a set of m data points in IR n with a convex quadratic function. Furthermore,
More informationAbsolute Value Programming
O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A
More informationA Limited Memory, Quasi-Newton Preconditioner. for Nonnegatively Constrained Image. Reconstruction
A Limited Memory, Quasi-Newton Preconditioner for Nonnegatively Constrained Image Reconstruction Johnathan M. Bardsley Department of Mathematical Sciences, The University of Montana, Missoula, MT 59812-864
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More information4 damped (modified) Newton methods
4 damped (modified) Newton methods 4.1 damped Newton method Exercise 4.1 Determine with the damped Newton method the unique real zero x of the real valued function of one variable f(x) = x 3 +x 2 using
More informationHigher-Order Methods
Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth
More informationNumerical Optimization: Basic Concepts and Algorithms
May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some
More informationHot-Starting NLP Solvers
Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio
More informationA DECOMPOSITION PROCEDURE BASED ON APPROXIMATE NEWTON DIRECTIONS
Working Paper 01 09 Departamento de Estadística y Econometría Statistics and Econometrics Series 06 Universidad Carlos III de Madrid January 2001 Calle Madrid, 126 28903 Getafe (Spain) Fax (34) 91 624
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization Instructor: Michael Saunders Spring 2017 Notes 10: MINOS Part 1: the Reduced-Gradient
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS. 1. Introduction. Many practical optimization problems have the form (1.
ON AUGMENTED LAGRANGIAN METHODS WITH GENERAL LOWER-LEVEL CONSTRAINTS R. ANDREANI, E. G. BIRGIN, J. M. MARTíNEZ, AND M. L. SCHUVERDT Abstract. Augmented Lagrangian methods with general lower-level constraints
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationRevised Simplex Method
DM545 Linear and Integer Programming Lecture 7 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 2 Motivation Complexity of single pivot operation
More informationTHE solution of the absolute value equation (AVE) of
The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method
More informationLine Search Methods for Unconstrained Optimisation
Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization Instructor: Michael Saunders Spring 2015 Notes 11: NPSOL and SNOPT SQP Methods 1 Overview
More informationSMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines
vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,
More informationNumerical Methods. V. Leclère May 15, x R n
Numerical Methods V. Leclère May 15, 2018 1 Some optimization algorithms Consider the unconstrained optimization problem min f(x). (1) x R n A descent direction algorithm is an algorithm that construct
More informationStep-size Estimation for Unconstrained Optimization Methods
Volume 24, N. 3, pp. 399 416, 2005 Copyright 2005 SBMAC ISSN 0101-8205 www.scielo.br/cam Step-size Estimation for Unconstrained Optimization Methods ZHEN-JUN SHI 1,2 and JIE SHEN 3 1 College of Operations
More informationLINEAR variational inequality (LVI) is to find
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 18, NO. 6, NOVEMBER 2007 1697 Solving Generally Constrained Generalized Linear Variational Inequalities Using the General Projection Neural Networks Xiaolin Hu,
More information2. Quasi-Newton methods
L. Vandenberghe EE236C (Spring 2016) 2. Quasi-Newton methods variable metric methods quasi-newton methods BFGS update limited-memory quasi-newton methods 2-1 Newton method for unconstrained minimization
More informationA Spring-Embedding Approach for the Facility Layout Problem
A Spring-Embedding Approach for the Facility Layout Problem Ignacio Castillo Thaddeus Sim Department of Finance and Management Science University of Alberta School of Business Edmonton, Alberta, Canada
More informationThe Second-Order Cone Quadratic Eigenvalue Complementarity Problem. Alfredo N. Iusem, Joaquim J. Júdice, Valentina Sessa and Hanif D.
The Second-Order Cone Quadratic Eigenvalue Complementarity Problem Alfredo N. Iusem, Joaquim J. Júdice, Valentina Sessa and Hanif D. Sherali Abstract: We investigate the solution of the Second-Order Cone
More information1.The anisotropic plate model
Pré-Publicações do Departamento de Matemática Universidade de Coimbra Preprint Number 6 OPTIMAL CONTROL OF PIEZOELECTRIC ANISOTROPIC PLATES ISABEL NARRA FIGUEIREDO AND GEORG STADLER ABSTRACT: This paper
More informationAn Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations
International Journal of Mathematical Modelling & Computations Vol. 07, No. 02, Spring 2017, 145-157 An Alternative Three-Term Conjugate Gradient Algorithm for Systems of Nonlinear Equations L. Muhammad
More informationarxiv: v1 [math.oc] 1 Jul 2016
Convergence Rate of Frank-Wolfe for Non-Convex Objectives Simon Lacoste-Julien INRIA - SIERRA team ENS, Paris June 8, 016 Abstract arxiv:1607.00345v1 [math.oc] 1 Jul 016 We give a simple proof that the
More informationGAMS/MINOS MINOS 1 GAMS/MINOS. Table of Contents:
GAMS/MINOS MINOS 1 GAMS/MINOS Table of Contents:... 1. INTRODUCTION... 2 2. HOW TO RUN A MODEL WITH GAMS/MINOS... 2 3. OVERVIEW OF GAMS/MINOS... 2 3.1. Linear programming... 3 3.2. Problems with a Nonlinear
More informationReduced-Hessian Methods for Constrained Optimization
Reduced-Hessian Methods for Constrained Optimization Philip E. Gill University of California, San Diego Joint work with: Michael Ferry & Elizabeth Wong 11th US & Mexico Workshop on Optimization and its
More informationIncomplete Cholesky preconditioners that exploit the low-rank property
anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université
More informationAccelerated Gradient Methods for Constrained Image Deblurring
Accelerated Gradient Methods for Constrained Image Deblurring S Bonettini 1, R Zanella 2, L Zanni 2, M Bertero 3 1 Dipartimento di Matematica, Università di Ferrara, Via Saragat 1, Building B, I-44100
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationLINEAR AND NONLINEAR PROGRAMMING
LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico
More informationTutorial 2: Modelling Transmission
Tutorial 2: Modelling Transmission In our previous example the load and generation were at the same bus. In this tutorial we will see how to model the transmission of power from one bus to another. The
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More information6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE. Three Alternatives/Remedies for Gradient Projection
6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE Three Alternatives/Remedies for Gradient Projection Two-Metric Projection Methods Manifold Suboptimization Methods
More informationUnconstrained Optimization
1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation
More informationOn sequential optimality conditions for constrained optimization. José Mario Martínez martinez
On sequential optimality conditions for constrained optimization José Mario Martínez www.ime.unicamp.br/ martinez UNICAMP, Brazil 2011 Collaborators This talk is based in joint papers with Roberto Andreani
More informationImage restoration: numerical optimisation
Image restoration: numerical optimisation Short and partial presentation Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP / 6 Context
More informationSplitting methods for the Eigenvalue Complementarity Problem
Splitting methods for the Eigenvalue Complementarity Problem Alfredo N. Iusem Joaquim J. Júdice Valentina Sessa Paula Sarabando January 25, 2018 Abstract We study splitting methods for solving the Eigenvalue
More informationELECTRICAL AND THERMAL DESIGN OF UMBILICAL CABLE
ELECTRICAL AND THERMAL DESIGN OF UMBILICAL CABLE Derek SHACKLETON, Oceaneering Multiflex UK, (Scotland), DShackleton@oceaneering.com Luciana ABIB, Marine Production Systems do Brasil, (Brazil), LAbib@oceaneering.com
More informationA Computer Application for Power System Control Studies
A Computer Application for Power System Control Studies Dinis C. A. Bucho Student nº55262 of Instituto Superior Técnico Technical University of Lisbon Lisbon, Portugal Abstract - This thesis presents studies
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects
More informationNew Infeasible Interior Point Algorithm Based on Monomial Method
New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationNonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control
Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical
More informationOptimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30
Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained
More informationSurvey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA
Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling
More informationResidual iterative schemes for largescale linear systems
Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz
More informationA FAST CONVERGENT TWO-STEP ITERATIVE METHOD TO SOLVE THE ABSOLUTE VALUE EQUATION
U.P.B. Sci. Bull., Series A, Vol. 78, Iss. 1, 2016 ISSN 1223-7027 A FAST CONVERGENT TWO-STEP ITERATIVE METHOD TO SOLVE THE ABSOLUTE VALUE EQUATION Hamid Esmaeili 1, Mahdi Mirzapour 2, Ebrahim Mahmoodabadi
More informationDC circuits, Kirchhoff s Laws
DC circuits, Kirchhoff s Laws Alternating Current (AC), Direct Current (DC) DC Circuits Resistors Kirchhoff s Laws CHM6158C - Lecture 2 1 Electric current Movement of electrons in a conductor Examples
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Lecture 5, Continuous Optimisation Oxford University Computing Laboratory, HT 2006 Notes by Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The notion of complexity (per iteration)
More informationNode-based Distributed Optimal Control of Wireless Networks
Node-based Distributed Optimal Control of Wireless Networks CISS March 2006 Edmund M. Yeh Department of Electrical Engineering Yale University Joint work with Yufang Xi Main Results Unified framework for
More informationWhat s New in Active-Set Methods for Nonlinear Optimization?
What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for
More information2.098/6.255/ Optimization Methods Practice True/False Questions
2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence
More informationLecture Notes: Geometric Considerations in Unconstrained Optimization
Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationA Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme
A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov
More informationAugmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification
Mathematical Programming manuscript No. will be inserted by the editor) R. Andreani E. G. Birgin J. M. Martínez M. L. Schuverdt Augmented Lagrangian methods under the Constant Positive Linear Dependence
More informationLIMITED MEMORY BUNDLE METHOD FOR LARGE BOUND CONSTRAINED NONSMOOTH OPTIMIZATION: CONVERGENCE ANALYSIS
LIMITED MEMORY BUNDLE METHOD FOR LARGE BOUND CONSTRAINED NONSMOOTH OPTIMIZATION: CONVERGENCE ANALYSIS Napsu Karmitsa 1 Marko M. Mäkelä 2 Department of Mathematics, University of Turku, FI-20014 Turku,
More informationCOMPUTATIONAL COMPLEXITY OF PARAMETRIC LINEAR PROGRAMMING +
Mathematical Programming 19 (1980) 213-219. North-Holland Publishing Company COMPUTATIONAL COMPLEXITY OF PARAMETRIC LINEAR PROGRAMMING + Katta G. MURTY The University of Michigan, Ann Arbor, MI, U.S.A.
More informationAlvaro F. M. Azevedo A. Adão da Fonseca
SECOND-ORDER SHAPE OPTIMIZATION OF A STEEL BRIDGE Alvaro F. M. Azevedo A. Adão da Fonseca Faculty of Engineering University of Porto Portugal 16-18 March 1999 OPTI 99 Orlando - Florida - USA 1 PROBLEM
More informationIntroduction to Operations Research
Introduction to Operations Research (Week 5: Linear Programming: More on Simplex) José Rui Figueira Instituto Superior Técnico Universidade de Lisboa (figueira@tecnico.ulisboa.pt) March 14-15, 2016 This
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationModule 3 : Sequence Components and Fault Analysis
Module 3 : Sequence Components and Fault Analysis Lecture 13 : Sequence Modeling (Tutorial) Objectives In this lecture we will solve tutorial problems on fault analysis in sequence domain Per unit values
More informationNumerical optimization
Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal
More informationConvergence of a Two-parameter Family of Conjugate Gradient Methods with a Fixed Formula of Stepsize
Bol. Soc. Paran. Mat. (3s.) v. 00 0 (0000):????. c SPM ISSN-2175-1188 on line ISSN-00378712 in press SPM: www.spm.uem.br/bspm doi:10.5269/bspm.v38i6.35641 Convergence of a Two-parameter Family of Conjugate
More informationA new nonmonotone Newton s modification for unconstrained Optimization
A new nonmonotone Newton s modification for unconstrained Optimization Aristotelis E. Kostopoulos a George S. Androulakis b a a Department of Mathematics, University of Patras, GR-265.04, Rio, Greece b
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More information2.3 Linear Programming
2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are
More informationLecture #3. Review: Power
Lecture #3 OUTLINE Power calculations Circuit elements Voltage and current sources Electrical resistance (Ohm s law) Kirchhoff s laws Reading Chapter 2 Lecture 3, Slide 1 Review: Power If an element is
More informationDM545 Linear and Integer Programming. Lecture 7 Revised Simplex Method. Marco Chiarandini
DM545 Linear and Integer Programming Lecture 7 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 2 Motivation Complexity of single pivot operation
More informationAdaptive two-point stepsize gradient algorithm
Numerical Algorithms 27: 377 385, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. Adaptive two-point stepsize gradient algorithm Yu-Hong Dai and Hongchao Zhang State Key Laboratory of
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationOn the solution of the monotone and nonmonotone Linear Complementarity Problem by an infeasible interior-point algorithm
On the solution of the monotone and nonmonotone Linear Complementarity Problem by an infeasible interior-point algorithm J. Júdice L. Fernandes A. Lima Abstract The use of an Infeasible Interior-Point
More informationOptimization methods
Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,
More information1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that
SIAM J. CONTROL OPTIM. Vol. 37, No. 3, pp. 765 776 c 1999 Society for Industrial and Applied Mathematics A NEW PROJECTION METHOD FOR VARIATIONAL INEQUALITY PROBLEMS M. V. SOLODOV AND B. F. SVAITER Abstract.
More informationSteepest Descent. Juan C. Meza 1. Lawrence Berkeley National Laboratory Berkeley, California 94720
Steepest Descent Juan C. Meza Lawrence Berkeley National Laboratory Berkeley, California 94720 Abstract The steepest descent method has a rich history and is one of the simplest and best known methods
More informationMore First-Order Optimization Algorithms
More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationNonlinear Optimization: What s important?
Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global
More informationFirst Published on: 11 October 2006 To link to this article: DOI: / URL:
his article was downloaded by:[universitetsbiblioteet i Bergen] [Universitetsbiblioteet i Bergen] On: 12 March 2007 Access Details: [subscription number 768372013] Publisher: aylor & Francis Informa Ltd
More informationOptimization. Yuh-Jye Lee. March 21, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 29
Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 21, 2017 1 / 29 You Have Learned (Unconstrained) Optimization in Your High School Let f (x) = ax
More information