Minimizing a convex separable exponential function subject to linear equality constraint and bounded variables


Stefan M. Stefanov
Department of Mathematics, Neofit Rilski South-Western University, 2700 Blagoevgrad, Bulgaria
stefm@aix.swu.bg

Abstract

In this paper, we consider the problem of minimizing a convex separable exponential function over a region defined by a linear equality constraint and bounds on the variables. Such problems are interesting from both a theoretical and a practical point of view because they arise in some mathematical programming problems as well as in various practical problems. Polynomial algorithms are proposed for solving problems of this form and their convergence is proved. Some examples and results of numerical experiments are also presented.

Keywords and phrases: exponential function, convex programming, separable programming, polynomial algorithms, computational complexity.

1. Introduction

Consider the following convex separable program with an exponential objective function, a linear equality constraint and bounded variables:

(CSE)   min c(x) = Σ_{j∈J} c_j(x_j) = Σ_{j∈J} s_j (e^{-m_j x_j} - 1)      (1)

subject to

        Σ_{j∈J} d_j x_j = α,                                             (2)

        a_j ≤ x_j ≤ b_j,  j ∈ J,                                         (3)

where s_j > 0, m_j > 0, d_j > 0, j ∈ J, x = (x_j)_{j∈J}, and J := {1, ..., n}.

Journal of Interdisciplinary Mathematics, Vol. 9 (2006), No. 1. © Taru Publications

Since c_j''(x_j) = s_j m_j^2 e^{-m_j x_j} > 0, the c_j(x_j), j ∈ J, are strictly convex functions, and since c_j'(x_j) = -s_j m_j e^{-m_j x_j} < 0 under the assumptions, the functions c_j(x_j), j ∈ J, are decreasing.

We can also consider the convex exponential separable program with a linear equality constraint and bounded variables, which is similar to problem (1)-(3):

(CESP)  min c(x) = Σ_{j∈J} c_j(x_j) = Σ_{j∈J} e^{k_j x_j}               (4)

subject to

        Σ_{j∈J} d_j x_j = α,                                             (5)

        a_j ≤ x_j ≤ b_j,  j ∈ J,                                         (6)

where k_j > 0, d_j > 0, j ∈ J. Since c_j''(x_j) = k_j^2 e^{k_j x_j} > 0, the c_j(x_j) are strictly convex functions, and since c_j'(x_j) = k_j e^{k_j x_j} > 0 under the assumptions, the functions c_j(x_j), j ∈ J, are increasing.

Problems (CSE) and (CESP) are convex separable programming problems because the objective functions and constraint functions are convex and separable. Problems (CSE) and (CESP), defined by (1)-(3) and (4)-(6), respectively, arise in production planning and scheduling, in allocation of resources, in the theory of search, in subgradient optimization, in facility location ([1], [4], [5], [6], [8], [10]), etc.

Problems like (CSE) and (CESP), and problems related to them, are the subject of intensive study. Related problems and methods for them are considered in [1]-[10]. Algorithms for resource allocation problems are proposed in [1], [4], [5], [10], and algorithms for facility location problems are suggested in [6], [8], etc. Singly constrained quadratic programs with bounded variables are considered in [2] and [3], and some separable programs are considered, and methods for solving them suggested, in [7], [8], etc.

This paper is devoted to the development of new efficient polynomial algorithms for solving problems (CSE) and (CESP). The paper is organized as follows. In Section 2, characterization theorems (necessary and sufficient conditions) for the optimal solutions to the considered problems are proved. In Section 3, new algorithms of polynomial complexity are suggested and their convergence is proved. In Section 4, we consider some theoretical and numerical aspects of the implementation of the algorithms and

give some extensions of both the characterization theorems and the algorithms. In Section 5 we present results of some numerical experiments.

2. Characterization theorems

2.1 Problem (CSE)

First consider problem (CSE), defined by (1)-(3). Suppose that the following assumptions are satisfied.

(1.a) a_j ≤ b_j for all j ∈ J. If a_k = b_k for some k ∈ J, then the value x_k := a_k = b_k is determined in advance.

(1.b) Σ_{j∈J} d_j a_j ≤ α ≤ Σ_{j∈J} d_j b_j. Otherwise the constraints (2)-(3) are inconsistent and X = ∅, where X is the feasible region defined by (2)-(3).

The Lagrangian for problem (CSE) is

L(x, u, v, λ) = Σ_{j∈J} s_j (e^{-m_j x_j} - 1) + λ (Σ_{j∈J} d_j x_j - α) + Σ_{j∈J} u_j (a_j - x_j) + Σ_{j∈J} v_j (x_j - b_j),

where λ ∈ R^1; u, v ∈ R^n_+, and R^n_+ consists of all vectors with n real nonnegative components. The Karush-Kuhn-Tucker (KKT) necessary and sufficient optimality conditions for the minimum solution x* = (x*_j) are

        -s_j m_j e^{-m_j x*_j} + λ d_j - u_j + v_j = 0,  j ∈ J          (7)
        u_j (a_j - x*_j) = 0,  j ∈ J                                     (8)
        v_j (x*_j - b_j) = 0,  j ∈ J                                     (9)
        Σ_{j∈J} d_j x*_j = α                                             (10)
        a_j ≤ x*_j ≤ b_j,  j ∈ J                                         (11)
        u_j ∈ R^1_+,  v_j ∈ R^1_+,  j ∈ J,                               (12)

where λ, u_j, v_j, j ∈ J, are the Lagrange multipliers associated with the constraints (2), a_j ≤ x_j, x_j ≤ b_j, j ∈ J, respectively. If a_j = -∞ or b_j = +∞ for some j, we do not consider the corresponding condition (8) [(9)] and Lagrange multiplier u_j [v_j, respectively].

Since u_j ≥ 0, v_j ≥ 0, j ∈ J, and since the complementarity conditions (8), (9) must be satisfied, in order to find x*_j, j ∈ J, from system (7)-(12) we have to consider all possible cases for the u_j, v_j: all u_j, v_j equal to 0; all u_j, v_j different from 0; some of them equal to 0 and some of them different from 0. The number of these cases is 2^{2n}, where 2n is the number of all u_j, v_j, j ∈ J, |J| = n. This is an enormous number of cases, especially for large-scale problems. For example, when n = 1500 we would have to consider 2^{3000} cases. Moreover, in each case we have to solve a large-scale system of (nonlinear) equations in x_j, λ, u_j, v_j, j ∈ J. Therefore the direct application of the Karush-Kuhn-Tucker (KKT) theorem, using explicit enumeration of all possible cases, would not give a result for large-scale problems of the considered form, and we need efficient methods for solving the problems under consideration.

The following Theorem 1 gives a characterization of the optimal solution to problem (CSE). Its proof, of course, is based on the Karush-Kuhn-Tucker theorem. As we will see in Section 5, by using Theorem 1 we can solve problem (CSE) with n = 1500 variables in a ten-thousandth of a second on a personal computer.

Theorem 1 (Characterization of the optimal solution to problem (CSE)). A feasible solution x* = (x*_j) ∈ X, with X defined by (2)-(3), is the optimal solution to problem (CSE) if and only if there exists some λ ∈ R^1 such that

x*_j = a_j,  j ∈ J_a^λ := { j ∈ J : λ ≥ (s_j m_j / d_j) e^{-m_j a_j} },                              (13)

x*_j = b_j,  j ∈ J_b^λ := { j ∈ J : λ ≤ (s_j m_j / d_j) e^{-m_j b_j} },                              (14)

x*_j = [ln(s_j m_j) - ln(λ d_j)] / m_j,  j ∈ J^λ := { j ∈ J : (s_j m_j / d_j) e^{-m_j b_j} < λ < (s_j m_j / d_j) e^{-m_j a_j} }.   (15)

We will show below that λ > 0, so that the expressions for x*_j, j ∈ J^λ, in (15) (especially the expressions under the logarithm sign) are well defined.

Proof. Necessity. Let x* = (x*_j) be the optimal solution to (CSE). Then there exist constants λ, u_j, v_j, j ∈ J, such that the KKT conditions (7)-(12) are satisfied.

(a) If x*_j = a_j, then u_j ≥ 0 and v_j = 0 according to (9). Therefore (7) implies s_j m_j e^{-m_j x*_j} = λ d_j - u_j ≤ λ d_j. Since d_j > 0, j ∈ J, then λ ≥ (s_j m_j / d_j) e^{-m_j x*_j} = (s_j m_j / d_j) e^{-m_j a_j}.

(b) If x*_j = b_j, then u_j = 0 according to (8) and v_j ≥ 0. Therefore (7) implies s_j m_j e^{-m_j x*_j} = λ d_j + v_j ≥ λ d_j. Hence λ ≤ (s_j m_j / d_j) e^{-m_j x*_j} = (s_j m_j / d_j) e^{-m_j b_j}.

(c) If a_j < x*_j < b_j, then u_j = v_j = 0 according to (8) and (9). Therefore (7) implies s_j m_j e^{-m_j x*_j} = λ d_j. Hence λ = (s_j m_j / d_j) e^{-m_j x*_j}, and x*_j = [ln(s_j m_j) - ln(λ d_j)] / m_j. Since m_j > 0, d_j > 0, j ∈ J, by the assumption, and b_j > x*_j, x*_j > a_j, it follows that λ = (s_j m_j / d_j) e^{-m_j x*_j} > (s_j m_j / d_j) e^{-m_j b_j} and λ = (s_j m_j / d_j) e^{-m_j x*_j} < (s_j m_j / d_j) e^{-m_j a_j}, that is, (s_j m_j / d_j) e^{-m_j b_j} < λ < (s_j m_j / d_j) e^{-m_j a_j}.

In particular, if we assume that λ = 0, then since s_j > 0, m_j > 0, d_j > 0, obviously J_a^{λ=0} = J^{λ=0} = ∅ and J = J_b^{λ=0}. Similarly, if we assume that λ < 0, then since s_j > 0, m_j > 0, d_j > 0, we have J_a^λ = J^λ = ∅ and J = J_b^λ.

To describe cases (a), (b), (c), it is convenient to introduce the index sets J_a^λ, J_b^λ, J^λ defined by (13), (14) and (15), respectively. It is obvious that J_a^λ ∪ J_b^λ ∪ J^λ = J. The necessity part is proved.

Sufficiency. Conversely, let x* ∈ X and let the components of x* satisfy (13), (14) and (15), where λ ∈ R^1. Set:

λ = (s_j m_j / d_j) e^{-m_j x*_j}, obtained from

Σ_{j∈J_a^λ} d_j a_j + Σ_{j∈J_b^λ} d_j b_j + Σ_{j∈J^λ} d_j [ln(s_j m_j) - ln(λ d_j)] / m_j = α;

u_j = v_j = 0 for j ∈ J^λ;

u_j = -s_j m_j e^{-m_j a_j} + λ d_j (≥ 0 according to the definition of J_a^λ), v_j = 0 for j ∈ J_a^λ;

u_j = 0, v_j = s_j m_j e^{-m_j b_j} - λ d_j (≥ 0 according to the definition of J_b^λ) for j ∈ J_b^λ.

By using these expressions, it is easy to check that conditions (7), (8), (9), (12) are satisfied; conditions (10) and (11) are also satisfied according to the assumption x* ∈ X.

We have proved that x*_j, λ, u_j, v_j, j ∈ J, satisfy the KKT conditions (7)-(12), which are necessary and sufficient conditions for a feasible solution to be an optimal solution to a convex minimization problem. Therefore x* is an optimal solution to problem (CSE), and since c(x) is strictly convex, this optimal solution is unique.

In view of the discussion above, the importance of Theorem 1 consists in the fact that it describes the components of the optimal solution to problem (CSE) only through the Lagrange multiplier λ associated with the equality constraint (2). Since we do not know the optimal value of λ from Theorem 1, we define an iterative process with respect to the Lagrange multiplier λ, and we prove the convergence of this process in Section 3, The algorithms.

From d_j > 0, s_j > 0, m_j > 0 and a_j ≤ b_j, j ∈ J, it follows that (s_j m_j / d_j) e^{-m_j b_j} ≤ (s_j m_j / d_j) e^{-m_j a_j}, j ∈ J, for the expressions by which we define the sets J_a^λ, J_b^λ, J^λ. The problem of how to ensure a feasible solution to problem (CSE), which is an assumption of Theorem 1, is discussed in Section 4.

2.2 Problem (CESP)

Consider the convex exponential separable program with a linear equality constraint and box constraints (CESP), (4)-(6). Assumptions:

(2.a) a_j ≤ b_j for all j ∈ J.

(2.b) Σ_{j∈J} d_j a_j ≤ α ≤ Σ_{j∈J} d_j b_j. Otherwise the constraints (5)-(6) are inconsistent and the feasible region X, defined by (5)-(6), is empty.

The KKT conditions for problem (CESP) are

        k_j e^{k_j x_j} + λ d_j - u_j + v_j = 0,  j ∈ J,
        u_j (a_j - x_j) = 0,  j ∈ J,
        v_j (x_j - b_j) = 0,  j ∈ J,
        Σ_{j∈J} d_j x_j = α,
        a_j ≤ x_j ≤ b_j,  j ∈ J,
        u_j ∈ R^1_+,  v_j ∈ R^1_+,  j ∈ J.

In this case, the following Theorem 2, which is similar to Theorem 1, holds true.

Theorem 2 (Characterization of the optimal solution to problem (CESP)). A feasible solution x* = (x*_j) ∈ X, with X defined by (5)-(6), is the optimal solution to problem (CESP) if and only if there exists some λ ∈ R^1 such that

x*_j = a_j,  j ∈ J_a^λ := { j ∈ J : λ ≥ -(k_j / d_j) e^{k_j a_j} },                                  (16)

x*_j = b_j,  j ∈ J_b^λ := { j ∈ J : λ ≤ -(k_j / d_j) e^{k_j b_j} },                                  (17)

x*_j = (1/k_j) ln(-λ d_j / k_j),  j ∈ J^λ := { j ∈ J : -(k_j / d_j) e^{k_j b_j} < λ < -(k_j / d_j) e^{k_j a_j} }.   (18)

As we will show below, λ < 0, so that the expressions for x*_j, j ∈ J^λ, in (18) (especially the expressions under the logarithm sign) are well defined.

The proof of Theorem 2 is omitted because it is similar to that of Theorem 1.

3. The algorithms

3.1 Analysis of the optimal solution to problem (CSE)

Before the formal statement of the algorithm for problem (CSE), we discuss some properties of the optimal solution to this problem which turn out to be useful.
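The characterization (13)-(15) of Theorem 1 turns a trial multiplier λ into a candidate point componentwise. The following minimal Python sketch illustrates this evaluation; the function name and the list-based data layout are ours, not the paper's:

```python
import math

def x_of_lambda(lam, s, m, d, a, b):
    """Evaluate the components x_j(lam) per (13)-(15) of Theorem 1:
    clamp to a_j or b_j according to the threshold values of lam,
    otherwise take the interior value (ln(s_j m_j) - ln(lam d_j)) / m_j."""
    x = []
    for sj, mj, dj, aj, bj in zip(s, m, d, a, b):
        if lam >= (sj * mj / dj) * math.exp(-mj * aj):
            x.append(aj)   # j in J_a^lambda, eq. (13)
        elif lam <= (sj * mj / dj) * math.exp(-mj * bj):
            x.append(bj)   # j in J_b^lambda, eq. (14)
        else:              # j in J^lambda, eq. (15)
            x.append((math.log(sj * mj) - math.log(lam * dj)) / mj)
    return x
```

For s_j = m_j = d_j = 1, a_j = 0, b_j = 1, the thresholds are e^0 = 1 and e^{-1}, so λ = 0.5 gives the interior value ln 2, while λ = 2 and λ = 0.2 clamp to the lower and upper bound, respectively.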

Using (13), (14) and (15), condition (10) can be written as follows:

Σ_{j∈J_a^λ} d_j a_j + Σ_{j∈J_b^λ} d_j b_j + Σ_{j∈J^λ} d_j [ln(s_j m_j) - ln(λ d_j)] / m_j = α.       (10')

Since the optimal solution x* to problem (CSE) depends on λ, we consider the components of x as functions of λ for different λ ∈ R^1:

x_j = x_j(λ) = a_j for j ∈ J_a^λ;  b_j for j ∈ J_b^λ;  [ln(s_j m_j) - ln(λ d_j)] / m_j for j ∈ J^λ.   (19)

The functions x_j(λ), j ∈ J, are monotone nonincreasing, piecewise differentiable functions of λ with two breakpoints, at λ = (s_j m_j / d_j) e^{-m_j a_j} and λ = (s_j m_j / d_j) e^{-m_j b_j}. Let

δ(λ) := Σ_{j∈J_a^λ} d_j a_j + Σ_{j∈J_b^λ} d_j b_j + Σ_{j∈J^λ} d_j [ln(s_j m_j) - ln(λ d_j)] / m_j - α.   (20)

If we differentiate δ(λ) with respect to λ, we get

δ'(λ) = - Σ_{j∈J^λ} d_j / (m_j λ) < 0,                                   (21)

according to the remark (after the statement of Theorem 1) that λ > 0, when J^λ ≠ ∅, and δ'(λ) = 0 when J^λ = ∅. Hence δ(λ) is a monotone nonincreasing function of λ ∈ R^1.

From the equation δ(λ) = 0, where δ(λ) is defined by (20), we are able to obtain a closed-form expression for λ:

λ = exp{ [Σ_{j∈J^λ} d_j / m_j]^{-1} [Σ_{j∈J_a^λ} d_j a_j + Σ_{j∈J_b^λ} d_j b_j + Σ_{j∈J^λ} (d_j / m_j) ln(s_j m_j / d_j) - α] },   (22)

because δ'(λ) < 0 according to (21) when J^λ ≠ ∅ (it is important that δ'(λ) ≠ 0). This expression for λ shows that λ > 0, and it is used in the algorithm suggested for problem (CSE). It turns out that, without loss of generality, we can assume that δ'(λ) ≠ 0, that is, δ(λ) depends on λ, which means that J^λ ≠ ∅.

At iteration k of the implementation of the algorithms, denote by

λ^{(k)} the value of the Lagrange multiplier associated with constraint (2) [(5), respectively]; by α^{(k)} the right-hand side of (2) [of (5), respectively]; and by J^{(k)}, J_a, J_b, J the current sets J, J_a^λ, J_b^λ, J^λ, respectively.

3.2 Algorithm 1 (for problem (CSE))

The following algorithm for solving problem (CSE) is based on Theorem 1.

Algorithm 1 (for problem (CSE))

1. (Initialization) J := {1, ..., n}, k := 0, α^{(0)} := α, n^{(0)} := n, J^{(0)} := J, J_a^λ := ∅, J_b^λ := ∅. If Σ_{j∈J} d_j a_j ≤ α ≤ Σ_{j∈J} d_j b_j, go to 2, else go to 9.

2. J := J^{(k)}. Calculate λ^{(k)} by using the explicit expression (22) for λ. Go to 3.

3. Construct the sets J_a, J_b, J through (13), (14), (15) (with J^{(k)} instead of J) and find their cardinalities |J_a|, |J_b|, |J|, respectively. Go to 4.

4. Calculate

δ(λ^{(k)}) := Σ_{j∈J_a} d_j a_j + Σ_{j∈J_b} d_j b_j + Σ_{j∈J} d_j [ln(s_j m_j) - ln(λ^{(k)} d_j)] / m_j - α^{(k)}.

Go to 5.

5. If δ(λ^{(k)}) = 0 or J = ∅, then λ := λ^{(k)}, J_a^λ := J_a^λ ∪ J_a, J_b^λ := J_b^λ ∪ J_b, J^λ := J, go to 8; else if δ(λ^{(k)}) > 0 go to 6; else if δ(λ^{(k)}) < 0 go to 7.

6. x_j := a_j for j ∈ J_a, α^{(k+1)} := α^{(k)} - Σ_{j∈J_a} d_j a_j, J^{(k+1)} := J^{(k)} \ J_a, n^{(k+1)} := n^{(k)} - |J_a|, J_a^λ := J_a^λ ∪ J_a, k := k + 1. Go to 2.

7. x_j := b_j for j ∈ J_b, α^{(k+1)} := α^{(k)} - Σ_{j∈J_b} d_j b_j, J^{(k+1)} := J^{(k)} \ J_b, n^{(k+1)} := n^{(k)} - |J_b|, J_b^λ := J_b^λ ∪ J_b, k := k + 1. Go to 2.

8. x_j := a_j for j ∈ J_a^λ; x_j := b_j for j ∈ J_b^λ; x_j := [ln(s_j m_j) - ln(λ d_j)] / m_j for j ∈ J^λ. Go to 10.

9. Problem (CSE) has no optimal solution because the feasible set X, defined by (2)-(3), is empty.

10. End.

3.3 Convergence and complexity of Algorithm 1

The following Theorem 3 states the convergence of Algorithm 1.

Theorem 3. Let {λ^{(k)}} be the sequence generated by Algorithm 1. Then

(i) if δ(λ^{(k)}) > 0, then λ^{(k)} ≤ λ^{(k+1)};
(ii) if δ(λ^{(k)}) < 0, then λ^{(k)} ≥ λ^{(k+1)}.

Proof. Denote by x_j^{(k)} the components of x^{(k)} = (x_j)^{(k)} at iteration k of the implementation of Algorithm 1.

(i) Let δ(λ^{(k)}) > 0. Using Step 6 of Algorithm 1 (which is performed when δ(λ^{(k)}) > 0), we get

Σ_{j∈J^{λ(k+1)}} d_j x_j^{(k)} = Σ_{j∈J^{(k)} \ J_a} d_j x_j^{(k)} = α^{(k)} - Σ_{j∈J_a} d_j x_j^{(k)}.   (23)

Let j ∈ J_a. According to definition (13) of J_a, we have

λ^{(k)} = (s_j m_j / d_j) e^{-m_j x_j^{(k)}} ≥ (s_j m_j / d_j) e^{-m_j a_j}.

Multiplying this inequality by (d_j / (s_j m_j)) e^{m_j a_j} > 0, we obtain 1 ≥ e^{m_j a_j - m_j x_j^{(k)}} = e^{m_j (a_j - x_j^{(k)})}. Therefore x_j^{(k)} ≥ a_j, because m_j > 0 and according to the properties of the exponential function. From (23), using that d_j > 0, a_j ≤ x_j^{(k)} for j ∈ J_a, and Step 6, we get

Σ_{j∈J^{λ(k+1)}} d_j x_j^{(k)} = α^{(k)} - Σ_{j∈J_a} d_j x_j^{(k)} ≤ α^{(k)} - Σ_{j∈J_a} d_j a_j = α^{(k+1)} = Σ_{j∈J^{λ(k+1)}} d_j x_j^{(k+1)}.

Since d_j > 0, j ∈ J, there exists at least one j_0 ∈ J^{λ(k+1)} such that x_{j_0}^{(k)} ≤ x_{j_0}^{(k+1)}. Then

λ^{(k)} = (s_{j_0} m_{j_0} / d_{j_0}) e^{-m_{j_0} x_{j_0}^{(k)}} ≥ (s_{j_0} m_{j_0} / d_{j_0}) e^{-m_{j_0} x_{j_0}^{(k+1)}}... ; in fact, from the monotonicity of the exponential we obtain λ^{(k)} ≤ λ^{(k+1)}. We have used that the relationship between λ^{(k)} and x_j^{(k)} is given by (15) for j ∈ J, according to Step 2 of Algorithm 1, and d_j > 0, s_j > 0, m_j > 0, j ∈ J.

The proof of part (ii) is omitted because it is similar to that of part (i).

Consider the feasibility of x* = (x*_j) generated by Algorithm 1. The components x*_j = a_j, j ∈ J_a^λ, and x*_j = b_j, j ∈ J_b^λ, obviously satisfy (3). From

(s_j m_j / d_j) e^{-m_j b_j} < λ = (s_j m_j / d_j) e^{-m_j x*_j} < (s_j m_j / d_j) e^{-m_j a_j},  j ∈ J^λ,

and d_j > 0, s_j > 0, m_j > 0, j ∈ J, it follows that a_j < x*_j < b_j for j ∈ J^λ. Hence all x*_j, j ∈ J, satisfy (3). Since at each iteration λ^{(k)} is determined from the current equality constraint (2) (Step 2 of Algorithm 1), and since the x*_j, j ∈ J, are determined in accordance with λ^{(k)} at each iteration (Steps 5, 6, 7, 8 of Algorithm 1), x* satisfies (2) as well. Therefore Algorithm 1 generates an x* which is feasible for problem (CSE), which is an assumption of Theorem 1.

Remark 1. Theorem 3, the definitions of J_a^λ (13), J_b^λ (14) and J^λ (15), and Steps 6, 7 and 8 of Algorithm 1 allow us to state that J_a ⊆ J_a^{λ(k+1)} and J_b ⊆ J_b^{λ(k+1)}. This means that if j belongs to the current set J_a, then j belongs to the next index set J_a^{λ(k+1)} and, therefore, to the optimal index set J_a^λ; the same holds true for the sets J_b and J_b^λ. Therefore λ^{(k)} converges to the optimal λ of Theorem 1, and J_a, J_b, J converge to the optimal index sets J_a^λ, J_b^λ, J^λ, respectively. This means that the calculation of λ^{(k)}, the operations x_j := a_j, j ∈ J_a (Step 6), x_j := b_j, j ∈ J_b (Step 7), and the construction of J_a^λ, J_b^λ, J^λ are in accordance with Theorem 1.
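The steps of Algorithm 1 can be sketched compactly. The following Python implementation is our illustration, not the paper's code: it uses the ε-tolerance variant of the stopping test in Step 5 (discussed later for the programmed algorithms), list-based data, and a `None` return for the infeasible case of Step 9:

```python
import math

def solve_cse(s, m, d, a, b, alpha, eps=1e-12):
    """Sketch of Algorithm 1 for (CSE): min sum_j s_j (exp(-m_j x_j) - 1)
    subject to sum_j d_j x_j = alpha and a_j <= x_j <= b_j, with
    s_j, m_j, d_j > 0."""
    n = len(s)
    if not (sum(d[j] * a[j] for j in range(n)) <= alpha
            <= sum(d[j] * b[j] for j in range(n))):
        return None  # Step 9: the feasible set X is empty
    x = [None] * n
    J = set(range(n))  # current index set J^(k)
    while J:
        # Step 2: closed-form lambda^(k), eq. (22), with J^lambda := J^(k)
        lam = math.exp(
            (sum((d[j] / m[j]) * math.log(s[j] * m[j] / d[j]) for j in J)
             - alpha) / sum(d[j] / m[j] for j in J))
        # Step 3: index sets (13), (14), (15) restricted to J^(k)
        Ja = {j for j in J
              if lam >= (s[j] * m[j] / d[j]) * math.exp(-m[j] * a[j])}
        Jb = {j for j in J
              if lam <= (s[j] * m[j] / d[j]) * math.exp(-m[j] * b[j])}
        Jf = J - Ja - Jb
        # Step 4: delta(lambda^(k)) as in (20)
        delta = (sum(d[j] * a[j] for j in Ja)
                 + sum(d[j] * b[j] for j in Jb)
                 + sum((d[j] / m[j])
                       * (math.log(s[j] * m[j]) - math.log(lam * d[j]))
                       for j in Jf)
                 - alpha)
        if abs(delta) <= eps or not Jf:  # Step 5 (with tolerance) -> Step 8
            for j in Ja:
                x[j] = a[j]
            for j in Jb:
                x[j] = b[j]
            for j in Jf:
                x[j] = (math.log(s[j] * m[j]) - math.log(lam * d[j])) / m[j]
            return x
        if delta > 0:  # Step 6: fix x_j = a_j for j in J_a
            for j in Ja:
                x[j] = a[j]
            alpha -= sum(d[j] * a[j] for j in Ja)
            J -= Ja
        else:          # Step 7: fix x_j = b_j for j in J_b
            for j in Jb:
                x[j] = b[j]
            alpha -= sum(d[j] * b[j] for j in Jb)
            J -= Jb
    return x
```

When δ > 0, at least one index lies in J_a (a variable clamped up to its lower bound is what inflates δ), so each pass through the loop fixes at least one variable, matching the O(n) iteration bound.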

At each iteration of Algorithm 1 we determine the value of at least one variable (Steps 6, 7, 8), and at each iteration we solve a problem of the form (CSE) but of smaller dimension (Steps 2-7). Therefore Algorithm 1 is finite and it converges in at most n = |J| iterations; that is, the iteration complexity of Algorithm 1 is O(n). Step 1 (initialization and checking whether X is empty) takes O(n) time. The calculation of λ^{(k)} requires constant time (Step 2). Step 3 takes O(n) time because of the construction of J_a, J_b, J. Step 4 also requires O(n) time, and Step 5 requires constant time. Each of Steps 6, 7 and 8 takes time bounded by O(n), because at these steps we assign some of the x_j their final values, and since the number of all x_j's is n, Steps 6, 7 and 8 take O(n) time. Hence Algorithm 1 has O(n^2) running time and it belongs to the class of strongly polynomially bounded algorithms.

As the computational experiments show, the number of iterations performed by the algorithm is not only at most n but much, much less than n for large n. In fact, this number does not depend on n but only on the three index sets defined by (13), (14), (15). In practice, Algorithm 1 has O(n) running time.

3.4 Algorithm 2 (for problem (CESP)) and its convergence

After an analysis of the optimal solution to problem (CESP), similar to that for problem (CSE), we suggest the following algorithm for solving problem (CESP).

Algorithm 2 (for problem (CESP))

1. (Initialization) J := {1, ..., n}, k := 0, α^{(0)} := α, n^{(0)} := n, J^{(0)} := J, J_a^λ := ∅, J_b^λ := ∅. If Σ_{j∈J} d_j a_j ≤ α ≤ Σ_{j∈J} d_j b_j, go to 2, else go to 9.

2. J := J^{(k)}. Calculate λ^{(k)} by using the explicit expression

λ^{(k)} = - exp{ [Σ_{j∈J} d_j / k_j]^{-1} [α^{(k)} - Σ_{j∈J} (d_j / k_j) ln(d_j / k_j)] }   (< 0).

Go to 3.

3. Construct the sets J_a, J_b, J through (16), (17), (18) (with J^{(k)} instead of J) and find their cardinalities |J_a|, |J_b|, |J|, respectively. Go to 4.

4. Calculate

δ(λ^{(k)}) := Σ_{j∈J_a} d_j a_j + Σ_{j∈J_b} d_j b_j + Σ_{j∈J} (d_j / k_j) [ln(-λ^{(k)}) + ln d_j - ln k_j] - α^{(k)}.

Go to 5.

5. If δ(λ^{(k)}) = 0 or J = ∅, then λ := λ^{(k)}, J_a^λ := J_a^λ ∪ J_a, J_b^λ := J_b^λ ∪ J_b, J^λ := J, go to 8; else if δ(λ^{(k)}) > 0 go to 6; else if δ(λ^{(k)}) < 0 go to 7.

6. x_j := a_j for j ∈ J_a, α^{(k+1)} := α^{(k)} - Σ_{j∈J_a} d_j a_j, J^{(k+1)} := J^{(k)} \ J_a, n^{(k+1)} := n^{(k)} - |J_a|, J_a^λ := J_a^λ ∪ J_a, k := k + 1. Go to 2.

7. x_j := b_j for j ∈ J_b, α^{(k+1)} := α^{(k)} - Σ_{j∈J_b} d_j b_j, J^{(k+1)} := J^{(k)} \ J_b, n^{(k+1)} := n^{(k)} - |J_b|, J_b^λ := J_b^λ ∪ J_b, k := k + 1. Go to 2.

8. x_j := a_j for j ∈ J_a^λ; x_j := b_j for j ∈ J_b^λ; x_j := (1/k_j) ln(-λ d_j / k_j) for j ∈ J^λ. Go to 10.

9. Problem (CESP) has no optimal solution because the feasible set X, defined by (5)-(6), is empty.

10. End.

To avoid a possible endless loop when programming Algorithms 1 and 2, the criterion of Step 5 for going to Step 8 at iteration k is usually not δ(λ^{(k)}) = 0 but δ(λ^{(k)}) ∈ [-ε, ε], where ε > 0 is some (given or chosen) tolerance up to which the equality δ(λ) = 0 must be satisfied.

A theorem analogous to Theorem 3 holds for Algorithm 2, which guarantees the convergence of λ^{(k)}, J, J_a, J_b to the optimal λ, J^λ, J_a^λ, J_b^λ, respectively.

Theorem 4. Let {λ^{(k)}} be the sequence generated by Algorithm 2. Then

(i) if δ(λ^{(k)}) > 0, then λ^{(k)} ≤ λ^{(k+1)};
(ii) if δ(λ^{(k)}) < 0, then λ^{(k)} ≥ λ^{(k+1)}.
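By analogy with the sketch for problem (CSE), Algorithm 2 can be coded as follows. This is our illustration, not the paper's code; ε plays the role of the tolerance mentioned above, and the infeasible case returns `None`:

```python
import math

def solve_cesp(k, d, a, b, alpha, eps=1e-12):
    """Sketch of Algorithm 2 for (CESP): min sum_j exp(k_j x_j)
    subject to sum_j d_j x_j = alpha and a_j <= x_j <= b_j, with
    k_j, d_j > 0.  Here the optimal multiplier lambda is negative."""
    n = len(k)
    if not (sum(d[j] * a[j] for j in range(n)) <= alpha
            <= sum(d[j] * b[j] for j in range(n))):
        return None  # Step 9: the feasible set X is empty
    x = [None] * n
    J = set(range(n))
    while J:
        # Step 2: closed-form lambda^(k) (< 0)
        lam = -math.exp(
            (alpha - sum((d[j] / k[j]) * math.log(d[j] / k[j]) for j in J))
            / sum(d[j] / k[j] for j in J))
        # Step 3: index sets (16), (17), (18) restricted to the current J
        Ja = {j for j in J if lam >= -(k[j] / d[j]) * math.exp(k[j] * a[j])}
        Jb = {j for j in J if lam <= -(k[j] / d[j]) * math.exp(k[j] * b[j])}
        Jf = J - Ja - Jb
        # Step 4: delta(lambda^(k))
        delta = (sum(d[j] * a[j] for j in Ja)
                 + sum(d[j] * b[j] for j in Jb)
                 + sum((d[j] / k[j]) * math.log(-lam * d[j] / k[j])
                       for j in Jf)
                 - alpha)
        if abs(delta) <= eps or not Jf:  # Step 5 -> Step 8
            for j in Ja:
                x[j] = a[j]
            for j in Jb:
                x[j] = b[j]
            for j in Jf:
                x[j] = math.log(-lam * d[j] / k[j]) / k[j]
            return x
        if delta > 0:  # Step 6: fix lower bounds
            for j in Ja:
                x[j] = a[j]
            alpha -= sum(d[j] * a[j] for j in Ja)
            J -= Ja
        else:          # Step 7: fix upper bounds
            for j in Jb:
                x[j] = b[j]
            alpha -= sum(d[j] * b[j] for j in Jb)
            J -= Jb
    return x
```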

The proof of Theorem 4 is omitted because it is similar to that of Theorem 3. It can be proved that Algorithm 2 has O(n^2) running time, and that the point x* = (x*_j) generated by this algorithm is feasible for problem (CESP), which is an assumption of Theorem 2.

4. Extensions

4.1 Theoretical aspects

Up to now we required d_j > 0, j ∈ J, in (2) and (5) of problems (CSE) and (CESP), respectively. However, if d_j = 0 is allowed for some j in problems (CSE) and (CESP), then for such indices we cannot construct the expressions

(s_j m_j / d_j) e^{-m_j a_j} and (s_j m_j / d_j) e^{-m_j b_j} for problem (CSE),
(k_j / d_j) e^{k_j a_j} and (k_j / d_j) e^{k_j b_j} for problem (CESP),

by means of which we define the sets J_a^λ, J_b^λ, J^λ for the corresponding problem. In such cases, the x_j's with such indices are not involved in (2) [in (5), respectively]. It turns out that we can cope with this difficulty and solve problems (CSE) and (CESP) with d_j = 0 for some j's. Denote Z0 := { j ∈ J : d_j = 0 }. Here 0 means the computer zero. In particular, when J = Z0 and α = 0, the set X is defined only by (3) (by (6), respectively).

Theorem 5 (Characterization of the optimal solution to problem (CSE): an extended version). Problem (CSE) can be decomposed into two subproblems: (CSE1) for j ∈ Z0 and (CSE2) for j ∈ J \ Z0. The optimal solution to (CSE1) is

x*_j = b_j,  j ∈ Z0,                                                     (24)

that is, subproblem (CSE1) itself is decomposed into n_0 := |Z0| independent problems. The optimal solution to (CSE2) is given by (13), (14), (15) with J := J \ Z0.

Proof. Necessity. Let x* = (x*_j) be the optimal solution to (CSE).

(1) Let j ∈ Z0, that is, d_j = 0 for this j. The KKT conditions are

-s_j m_j e^{-m_j x_j} - u_j + v_j = 0,  j ∈ Z0,  from (7),              (7')

and (8)-(12).

(a) If x*_j = a_j, then u_j ≥ 0, v_j = 0. From (7') it follows that -s_j m_j e^{-m_j x*_j} = u_j ≥ 0, which is impossible because s_j > 0, m_j > 0 and e^{-m_j x*_j} > 0.

(b) If x*_j = b_j, then u_j = 0, v_j ≥ 0. Therefore s_j m_j e^{-m_j x*_j} = v_j ≥ 0, which is always satisfied for s_j > 0, m_j > 0.

(c) If a_j < x*_j < b_j, then u_j = v_j = 0. Therefore s_j m_j e^{-m_j x*_j} = 0, which is impossible according to the assumption s_j > 0, m_j > 0.

As we have observed, only case (b) is possible for j ∈ Z0, and x*_j = b_j, j ∈ Z0.

(2) The components of the optimal solution to (CSE2) are obtained by using the same approach as that of the proof of the necessity part of Theorem 1, but with the reduced index set J := J \ Z0.

Sufficiency. Conversely, let x* ∈ X and let the components of x* satisfy (24) for j ∈ Z0, and (13), (14), (15) with J := J \ Z0. Set:

u_j = 0, v_j = s_j m_j e^{-m_j b_j} (> 0) for j ∈ Z0;

λ = (s_j m_j / d_j) e^{-m_j x*_j} = λ(x*_j) from (15), u_j = v_j = 0 for a_j < x*_j < b_j, j ∈ J \ Z0;

u_j = -s_j m_j e^{-m_j a_j} + λ d_j (≥ 0), v_j = 0 for x*_j = a_j, j ∈ J \ Z0;

u_j = 0, v_j = s_j m_j e^{-m_j b_j} - λ d_j (≥ 0) for x*_j = b_j, j ∈ J \ Z0.

If λ = 0 then, as in the proof of Theorem 1, J_a^{λ=0} = J^{λ=0} = ∅. It can be verified that x*_j, λ, u_j, v_j, j ∈ J, satisfy the KKT conditions (7')-(12). Then x* with components (24) for j ∈ Z0, and (13), (14), (15) for j ∈ J := J \ Z0, is the optimal solution to problem (CSE), decomposed as (CSE1) and (CSE2).

An analogous result holds for problem (CESP).

Theorem 6 (Characterization of the optimal solution to problem (CESP): an extended version). Problem (CESP) can be decomposed into two subproblems: (CESP1) for j ∈ Z0 and (CESP2) for j ∈ J \ Z0. The optimal solution to (CESP1) is

x*_j = a_j,  j ∈ Z0.

The optimal solution to (CESP2) is given by (16), (17), (18) with J := J \ Z0.

The proof of Theorem 6 is omitted because it repeats in part the proofs of Theorem 1 and Theorem 5.

Thus, with the use of Theorem 5 and Theorem 6, we can express the components of the optimal solutions to problems (CSE) and (CESP) without the necessity of constructing the expressions (s_j m_j / d_j) e^{-m_j a_j}, (s_j m_j / d_j) e^{-m_j b_j}, (k_j / d_j) e^{k_j a_j} and (k_j / d_j) e^{k_j b_j} with d_j = 0.

4.2 Computational aspects

Algorithms 1 and 2 are also applicable in cases when a_j = -∞ for some j ∈ J and/or b_j = +∞ for some j ∈ J. However, if we use the computer values of -∞ and +∞ at the first step of the algorithms, to check whether the corresponding feasible region is empty or nonempty, and at Step 3 in the expressions (s_j m_j / d_j) e^{-m_j x_j} and (k_j / d_j) e^{k_j x_j} with x_j = -∞ and/or x_j = +∞, by means of which we construct the sets J_a^λ, J_b^λ, J^λ, this could sometimes lead to arithmetic overflow. If we use other values of -∞ and +∞, with smaller absolute values than those of the computer values of -∞ and +∞, this would lead to inconvenience and dependence on the data of the particular problems. To avoid these difficulties, and taking into account the above discussion, it is convenient to do the following.

Construct the index sets:

SVN = { j ∈ J \ Z0 : a_j > -∞, b_j < +∞ },
SV1 = { j ∈ J \ Z0 : a_j > -∞, b_j = +∞ },                               (25)
SV2 = { j ∈ J \ Z0 : a_j = -∞, b_j < +∞ },
SV  = { j ∈ J \ Z0 : a_j = -∞, b_j = +∞ }.

It is obvious that Z0 ∪ SV ∪ SV1 ∪ SV2 ∪ SVN = J; that is, the set J \ Z0 is partitioned into the four subsets SVN, SV1, SV2, SV defined above. When programming the algorithms, we use the computer values of -∞ and +∞ for constructing the sets SVN, SV1, SV2, SV.

In order to construct the sets J_a^λ, J_b^λ, J^λ without the necessity of calculating the values (s_j m_j / d_j) e^{-m_j x_j} (for problem (CSE)) with x_j = -∞ or +∞, besides the sets J, Z0, SV, SV1, SV2, SVN we need some subsidiary sets, defined as follows.

For SVN:

J^λ_SVN   = { j ∈ SVN : (s_j m_j / d_j) e^{-m_j b_j} < λ < (s_j m_j / d_j) e^{-m_j a_j} },
J^λ_SVN,a = { j ∈ SVN : λ ≥ (s_j m_j / d_j) e^{-m_j a_j} },
J^λ_SVN,b = { j ∈ SVN : λ ≤ (s_j m_j / d_j) e^{-m_j b_j} };

for SV1:

J^λ_SV1   = { j ∈ SV1 : λ < (s_j m_j / d_j) e^{-m_j a_j} },              (26)
J^λ_SV1,a = { j ∈ SV1 : λ ≥ (s_j m_j / d_j) e^{-m_j a_j} };

for SV2:

J^λ_SV2   = { j ∈ SV2 : λ > (s_j m_j / d_j) e^{-m_j b_j} },
J^λ_SV2,b = { j ∈ SV2 : λ ≤ (s_j m_j / d_j) e^{-m_j b_j} };

for SV:

J^λ_SV = SV.

Then:

J^λ   := J^λ_SVN ∪ J^λ_SV1 ∪ J^λ_SV2 ∪ J^λ_SV,
J_a^λ := J^λ_SVN,a ∪ J^λ_SV1,a,                                          (27)
J_b^λ := J^λ_SVN,b ∪ J^λ_SV2,b.

We use the sets J^λ, J_a^λ, J_b^λ of (27) as the corresponding sets with the same names in Algorithms 1 and 2.
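The point of the subsidiary sets (26)-(27) is that a threshold is only ever evaluated at a finite bound. A minimal Python sketch of this construction for problem (CSE), ours and not the paper's code, where infinite bounds are represented by `math.inf` and never exponentiated:

```python
import math

def index_sets(lam, s, m, d, a, b):
    """Build J^lambda, J_a^lambda, J_b^lambda via the subsidiary sets
    (26)-(27): thresholds are computed only at finite bounds, so no
    arithmetic overflow can occur for a_j = -inf or b_j = +inf."""
    Jf, Ja, Jb = set(), set(), set()
    for j in range(len(s)):
        if d[j] == 0.0:            # j in Z0: handled separately (Theorem 5)
            continue
        lo = a[j] > -math.inf      # finite lower bound?
        hi = b[j] < math.inf       # finite upper bound?
        ta = (s[j] * m[j] / d[j]) * math.exp(-m[j] * a[j]) if lo else None
        tb = (s[j] * m[j] / d[j]) * math.exp(-m[j] * b[j]) if hi else None
        if lo and hi:              # j in SVN: both thresholds exist
            if lam >= ta:
                Ja.add(j)
            elif lam <= tb:
                Jb.add(j)
            else:
                Jf.add(j)
        elif lo:                   # j in SV1: b_j = +inf, only ta exists
            (Ja if lam >= ta else Jf).add(j)
        elif hi:                   # j in SV2: a_j = -inf, only tb exists
            (Jb if lam <= tb else Jf).add(j)
        else:                      # j in SV: always free
            Jf.add(j)
    return Jf, Ja, Jb
```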

With the use of the results of this section, Steps 1 and 3 of Algorithm 1 can be modified as follows, respectively.

Step 1'. (Initialization) J := {1, ..., n}, k := 0, α^{(0)} := α, n^{(0)} := n, J^{(0)} := J, J_a^λ := ∅, J_b^λ := ∅. Construct the set Z0. If j ∈ Z0, then x_j := b_j. Set J := J \ Z0, J^{(0)} := J, n^{(0)} := n - |Z0|. Construct the sets SVN, SV1, SV2, SV.
If SVN = J, then: if Σ_{j∈J} d_j a_j ≤ α ≤ Σ_{j∈J} d_j b_j, go to Step 2, else go to Step 9 (the feasible region X is empty);
else if SV1 ∪ SVN = J, then: if Σ_{j∈J} d_j a_j ≤ α, go to Step 2, else go to Step 9 (the feasible region X is empty);
else if SV2 ∪ SVN = J, then: if α ≤ Σ_{j∈J} d_j b_j, go to Step 2, else go to Step 9 (the feasible region X is empty);
else if SV ≠ ∅, go to Step 2 (the feasible region is always nonempty).

Step 3'. Construct the sets J^λ_SVN, J^λ_SVN,a, J^λ_SVN,b, J^λ_SV1, J^λ_SV1,a, J^λ_SV2, J^λ_SV2,b, J^λ_SV (with J^{(k)} instead of J). Construct the sets J_a, J_b, J by using (27) and find their cardinalities |J_a|, |J_b|, |J|, respectively. Go to Step 4.

Similarly, we can define subsidiary index sets of the form (26) for problem (CESP) as well, and modify Steps 1 and 3 of Algorithm 2.

The modifications of the algorithms connected with these theoretical and computational aspects do not influence their computational complexity, discussed in Section 3, because they do not affect the iterative steps of the algorithms.

5. Computational experiments

In this section we present results of some numerical experiments obtained by applying the algorithms suggested in this paper to the problems under consideration. The computations were performed on an Intel Pentium II Celeron 466 MHz / 128 MB SDRAM IBM PC compatible. Each problem was run 30 times. The coefficients s_j > 0, m_j > 0, d_j > 0, j ∈ J, for problem (CSE) and k_j > 0, d_j > 0, j ∈ J, for problem (CESP) were randomly generated.

Problem                          (CSE)                  (CESP)
Number of variables        n = 1200   n = 1500   n = 1200   n = 1500
Average number of iterations   —          —          —          —
Average run time (in seconds)  —          —          —          —

When n < 1200, the run time of the algorithms is so small that the timer cannot distinguish the corresponding value from its computer zero; in such cases the timer displays 0 seconds. The effectiveness of the algorithms for problems (CSE) and (CESP) has been tested on many other examples. As we can observe, the (average) number of iterations is much less than the number of variables n for large n.

References

[1] G. R. Bitran and A. C. Hax, Disaggregation and resource allocation using convex knapsack problems with bounded variables, Management Science, Vol. 27 (1981).

[2] J.-P. Dussault, J. Ferland and B. Lemaire, Convex quadratic programming with one constraint and bounded variables, Mathematical Programming, Vol. 36 (1986).

[3] R. Helgason, J. Kennington and H. Lall, A polynomially bounded algorithm for a singly constrained quadratic program, Mathematical Programming, Vol. 18 (1980).

[4] N. Katoh, T. Ibaraki and H. Mine, A polynomial time algorithm for the resource allocation problem with a convex objective function, Journal of the Operational Research Society, Vol. 30 (1979).

[5] H. Luss and S. K. Gupta, Allocation of effort resources among competing activities, Operations Research, Vol. 23 (1975).

[6] S. M. Stefanov, On the implementation of stochastic quasigradient methods to some facility location problems, Yugoslav Journal of Operations Research, Vol. 10 (2) (2000).

[7] S. M. Stefanov, Convex separable minimization subject to bounded variables, Computational Optimization and Applications. An International Journal, Vol. 18 (1) (2001).

[8] S. M. Stefanov, Separable Programming. Theory and Methods, Kluwer Academic Publishers, Dordrecht-Boston-London.

[9] S. M. Stefanov, Convex separable minimization problems with a linear constraint and bounds on the variables, in Applications of Mathematics in Engineering and Economics, Vol. 27, D. Ivanchev and M. D. Todorov (eds.), Heron Press, Sofia, 2002.

[10] P. H. Zipkin, Simple ranking methods for allocation of one resource, Management Science, Vol. 26 (1980).

Received April, 2005


More information

Pseudo-automata for generalized regular expressions

Pseudo-automata for generalized regular expressions Pseudo-automata for generalized regular expressions B. F. Melnikov A. A. Melnikova Astract In this paper we introduce a new formalism which is intended for representing a special extensions of finite automata.

More information

Single Peakedness and Giffen Demand

Single Peakedness and Giffen Demand Single Peakedness and Giffen Demand Massimiliano Landi January 2012 Paper No. 02-2012 ANY OPINIONS EXPRESSED ARE THOSE OF THE AUTHOR(S) AND NOT NECESSARILY THOSE OF THE SCHOOL OF ECONOMICS, SMU Single

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Modifying Shor s algorithm to compute short discrete logarithms

Modifying Shor s algorithm to compute short discrete logarithms Modifying Shor s algorithm to compute short discrete logarithms Martin Ekerå Decemer 7, 06 Astract We revisit Shor s algorithm for computing discrete logarithms in F p on a quantum computer and modify

More information

IN this paper we study a discrete optimization problem. Constrained Shortest Link-Disjoint Paths Selection: A Network Programming Based Approach

IN this paper we study a discrete optimization problem. Constrained Shortest Link-Disjoint Paths Selection: A Network Programming Based Approach Constrained Shortest Link-Disjoint Paths Selection: A Network Programming Based Approach Ying Xiao, Student Memer, IEEE, Krishnaiyan Thulasiraman, Fellow, IEEE, and Guoliang Xue, Senior Memer, IEEE Astract

More information

arxiv:hep-th/ v2 8 Jun 2000

arxiv:hep-th/ v2 8 Jun 2000 Partially emedding of the quantum mechanical analog of the nonlinear sigma model R. Amorim a, J. Barcelos-Neto and C. Wotzasek c Instituto de Física Universidade Federal do Rio de Janeiro RJ 1945-970 -

More information

ROUNDOFF ERRORS; BACKWARD STABILITY

ROUNDOFF ERRORS; BACKWARD STABILITY SECTION.5 ROUNDOFF ERRORS; BACKWARD STABILITY ROUNDOFF ERROR -- error due to the finite representation (usually in floatingpoint form) of real (and complex) numers in digital computers. FLOATING-POINT

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

Econ Slides from Lecture 14

Econ Slides from Lecture 14 Econ 205 Sobel Econ 205 - Slides from Lecture 14 Joel Sobel September 10, 2010 Theorem ( Lagrange Multipliers ) Theorem If x solves max f (x) subject to G(x) = 0 then there exists λ such that Df (x ) =

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information

1. Define the following terms (1 point each): alternative hypothesis

1. Define the following terms (1 point each): alternative hypothesis 1 1. Define the following terms (1 point each): alternative hypothesis One of three hypotheses indicating that the parameter is not zero; one states the parameter is not equal to zero, one states the parameter

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

1 Systems of Differential Equations

1 Systems of Differential Equations March, 20 7- Systems of Differential Equations Let U e an open suset of R n, I e an open interval in R and : I R n R n e a function from I R n to R n The equation ẋ = ft, x is called a first order ordinary

More information

Logarithms. For example:

Logarithms. For example: Math Review Summation Formulas Let >, let A, B, and C e constants, and let f and g e any functions. Then: f C Cf ) ) S: factor out constant ± ± g f g f ) ) )) ) S: separate summed terms C C ) 6 ) ) Computer

More information

Long non-crossing configurations in the plane

Long non-crossing configurations in the plane Long non-crossing configurations in the plane Adrian Dumitrescu Csaa D. Tóth July 4, 00 Astract We revisit some maximization prolems for geometric networks design under the non-crossing constraint, first

More information

Let N > 0, let A, B, and C be constants, and let f and g be any functions. Then: S2: separate summed terms. S7: sum of k2^(k-1)

Let N > 0, let A, B, and C be constants, and let f and g be any functions. Then: S2: separate summed terms. S7: sum of k2^(k-1) Summation Formulas Let > 0, let A, B, and C e constants, and let f and g e any functions. Then: k Cf ( k) C k S: factor out constant f ( k) k ( f ( k) ± g( k)) k S: separate summed terms f ( k) ± k g(

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

Stability Domain of a Linear Differential Equation with Two Delays

Stability Domain of a Linear Differential Equation with Two Delays ELSEVIER An International Journal Availale online at www.sciencedirect.com computers &.c,..c. ~--~c,..c.. mathematics with applications Computers and Mathematics with Applications 51 (2006) 153-159 www.elsevier.com/locate/camwa

More information

Vector Spaces. EXAMPLE: Let R n be the set of all n 1 matrices. x 1 x 2. x n

Vector Spaces. EXAMPLE: Let R n be the set of all n 1 matrices. x 1 x 2. x n Vector Spaces DEFINITION: A vector space is a nonempty set V of ojects, called vectors, on which are defined two operations, called addition and multiplication y scalars (real numers), suject to the following

More information

Luis Manuel Santana Gallego 100 Investigation and simulation of the clock skew in modern integrated circuits. Clock Skew Model

Luis Manuel Santana Gallego 100 Investigation and simulation of the clock skew in modern integrated circuits. Clock Skew Model Luis Manuel Santana Gallego 100 Appendix 3 Clock Skew Model Xiaohong Jiang and Susumu Horiguchi [JIA-01] 1. Introduction The evolution of VLSI chips toward larger die sizes and faster clock speeds makes

More information

Superluminal Hidden Communication as the Underlying Mechanism for Quantum Correlations: Constraining Models

Superluminal Hidden Communication as the Underlying Mechanism for Quantum Correlations: Constraining Models 38 Brazilian Journal of Physics, vol. 35, no. A, June, 005 Superluminal Hidden Communication as the Underlying Mechanism for Quantum Correlations: Constraining Models Valerio Scarani and Nicolas Gisin

More information

RATIONAL EXPECTATIONS AND THE COURNOT-THEOCHARIS PROBLEM

RATIONAL EXPECTATIONS AND THE COURNOT-THEOCHARIS PROBLEM RATIONAL EXPECTATIONS AND THE COURNOT-THEOCHARIS PROBLEM TÖNU PUU Received 18 April 006; Accepted 1 May 006 In dynamic models in economics, often rational expectations are assumed. These are meant to show

More information

arxiv: v1 [cs.gt] 4 May 2015

arxiv: v1 [cs.gt] 4 May 2015 Econometrics for Learning Agents DENIS NEKIPELOV, University of Virginia, denis@virginia.edu VASILIS SYRGKANIS, Microsoft Research, vasy@microsoft.com EVA TARDOS, Cornell University, eva.tardos@cornell.edu

More information

A Unified Continuous Greedy Algorithm for Submodular Maximization

A Unified Continuous Greedy Algorithm for Submodular Maximization A Unified Continuous Greedy Algorithm for Sumodular Maximization Moran Feldman Technion Joseph (Seffi) Naor Technion Roy Schwartz Technion Astract The study of cominatorial prolems with a sumodular ojective

More information

Chap 2. Optimality conditions

Chap 2. Optimality conditions Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

Sharp estimates of bounded solutions to some semilinear second order dissipative equations

Sharp estimates of bounded solutions to some semilinear second order dissipative equations Sharp estimates of ounded solutions to some semilinear second order dissipative equations Cyrine Fitouri & Alain Haraux Astract. Let H, V e two real Hilert spaces such that V H with continuous and dense

More information

Smooth Projective Hashing and Two-Message Oblivious Transfer

Smooth Projective Hashing and Two-Message Oblivious Transfer Smooth Projective Hashing and Two-Message Olivious Transfer Shai Halevi IBM Research Yael Tauman Kalai Microsoft Research Octoer 31, 2010 Astract We present a general framework for constructing two-message

More information

arxiv: v1 [math.oc] 22 Mar 2018

arxiv: v1 [math.oc] 22 Mar 2018 OPTIMALITY OF REFRACTION STRATEGIES FOR A CONSTRAINED DIVIDEND PROBLEM MAURICIO JUNCA 1, HAROLD MORENO-FRANCO 2, JOSÉ LUIS PÉREZ 3, AND KAZUTOSHI YAMAZAKI 4 arxiv:183.8492v1 [math.oc] 22 Mar 218 ABSTRACT.

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Max-margin structured output learning in L 1 norm space

Max-margin structured output learning in L 1 norm space Max-margin structured output learning in L norm space Sandor Szedmak ISIS Group, Electronics and Computer Science University of Southampton Southampton, United Kingdom ss03v@ecssotonacuk Yizhao Ni ISIS

More information

Expansion formula using properties of dot product (analogous to FOIL in algebra): u v 2 u v u v u u 2u v v v u 2 2u v v 2

Expansion formula using properties of dot product (analogous to FOIL in algebra): u v 2 u v u v u u 2u v v v u 2 2u v v 2 Least squares: Mathematical theory Below we provide the "vector space" formulation, and solution, of the least squares prolem. While not strictly necessary until we ring in the machinery of matrix algera,

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Mathematics Background

Mathematics Background UNIT OVERVIEW GOALS AND STANDARDS MATHEMATICS BACKGROUND UNIT INTRODUCTION Patterns of Change and Relationships The introduction to this Unit points out to students that throughout their study of Connected

More information

Subclasses of Analytic Functions. Involving the Hurwitz-Lerch Zeta Function

Subclasses of Analytic Functions. Involving the Hurwitz-Lerch Zeta Function International Mathematical Forum, Vol. 6, 211, no. 52, 2573-2586 Suclasses of Analytic Functions Involving the Hurwitz-Lerch Zeta Function Shigeyoshi Owa Department of Mathematics Kinki University Higashi-Osaka,

More information

Comments on A Time Delay Controller for Systems with Uncertain Dynamics

Comments on A Time Delay Controller for Systems with Uncertain Dynamics Comments on A Time Delay Controller for Systems with Uncertain Dynamics Qing-Chang Zhong Dept. of Electrical & Electronic Engineering Imperial College of Science, Technology, and Medicine Exhiition Rd.,

More information

Section 8.5. z(t) = be ix(t). (8.5.1) Figure A pendulum. ż = ibẋe ix (8.5.2) (8.5.3) = ( bẋ 2 cos(x) bẍ sin(x)) + i( bẋ 2 sin(x) + bẍ cos(x)).

Section 8.5. z(t) = be ix(t). (8.5.1) Figure A pendulum. ż = ibẋe ix (8.5.2) (8.5.3) = ( bẋ 2 cos(x) bẍ sin(x)) + i( bẋ 2 sin(x) + bẍ cos(x)). Difference Equations to Differential Equations Section 8.5 Applications: Pendulums Mass-Spring Systems In this section we will investigate two applications of our work in Section 8.4. First, we will consider

More information

Determinants of generalized binary band matrices

Determinants of generalized binary band matrices Determinants of generalized inary and matrices Dmitry Efimov arxiv:17005655v1 [mathra] 18 Fe 017 Department of Mathematics, Komi Science Centre UrD RAS, Syktyvkar, Russia Astract Under inary matrices we

More information

radio sky. In other words, we are interested in the local maxima of the rightness distriution, i.e., of the function y(x) that descries how the intens

radio sky. In other words, we are interested in the local maxima of the rightness distriution, i.e., of the function y(x) that descries how the intens Checking if There Exists a Monotonic Function That Is Consistent with the Measurements: An Ecient Algorithm Kavitha Tupelly 1, Vladik Kreinovich 1, and Karen Villaverde 2 1 Department of Computer Science,

More information

Polynomial Degree and Finite Differences

Polynomial Degree and Finite Differences CONDENSED LESSON 7.1 Polynomial Degree and Finite Differences In this lesson, you Learn the terminology associated with polynomials Use the finite differences method to determine the degree of a polynomial

More information

1Number ONLINE PAGE PROOFS. systems: real and complex. 1.1 Kick off with CAS

1Number ONLINE PAGE PROOFS. systems: real and complex. 1.1 Kick off with CAS 1Numer systems: real and complex 1.1 Kick off with CAS 1. Review of set notation 1.3 Properties of surds 1. The set of complex numers 1.5 Multiplication and division of complex numers 1.6 Representing

More information

Bayesian inference with reliability methods without knowing the maximum of the likelihood function

Bayesian inference with reliability methods without knowing the maximum of the likelihood function Bayesian inference with reliaility methods without knowing the maximum of the likelihood function Wolfgang Betz a,, James L. Beck, Iason Papaioannou a, Daniel Strau a a Engineering Risk Analysis Group,

More information

CPM: A Covariance-preserving Projection Method

CPM: A Covariance-preserving Projection Method CPM: A Covariance-preserving Projection Method Jieping Ye Tao Xiong Ravi Janardan Astract Dimension reduction is critical in many areas of data mining and machine learning. In this paper, a Covariance-preserving

More information

The WHILE Hierarchy of Program Schemes is Infinite

The WHILE Hierarchy of Program Schemes is Infinite The WHILE Hierarchy of Program Schemes is Infinite Can Adam Alayrak and Thomas Noll RWTH Aachen Ahornstr. 55, 52056 Aachen, Germany alayrak@informatik.rwth-aachen.de and noll@informatik.rwth-aachen.de

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Summation Formulas. Math Review. Let N > 0, let A, B, and C be constants, and let f and g be any functions. Then: S1: factor out constant

Summation Formulas. Math Review. Let N > 0, let A, B, and C be constants, and let f and g be any functions. Then: S1: factor out constant Computer Science Dept Va Tech August 005 005 McQuain WD Summation Formulas Let > 0, let A, B, and C e constants, and let f and g e any functions. Then: f C Cf ) ) S: factor out constant g f g f ) ) ))

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

Alexandr Kazda 1 Department of Algebra, Charles University, Prague, Czech Republic

Alexandr Kazda 1 Department of Algebra, Charles University, Prague, Czech Republic #A3 INTEGERS 9 (009), 6-79 CONVERGENCE IN MÖBIUS NUMBER SYSTEMS Alexandr Kazda Department of Algera, Charles University, Prague, Czech Repulic alexak@atrey.karlin.mff.cuni.cz Received: 0/30/08, Accepted:

More information

ON STRATEGY-PROOF SOCIAL CHOICE BETWEEN TWO ALTERNATIVES

ON STRATEGY-PROOF SOCIAL CHOICE BETWEEN TWO ALTERNATIVES Discussion Paper No. 1013 ON STRATEGY-PROOF SOCIAL CHOICE BETWEEN TWO ALTERNATIVES Ahinaa Lahiri Anup Pramanik Octoer 2017 The Institute o Social and Economic Research Osaka University 6-1 Mihogaoka, Iaraki,

More information

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79 Topic one: Production line profit maximization subject to a production rate constraint c 21 Chuan Shi Topic one: Line optimization : 22/79 Production line profit maximization The profit maximization problem

More information

Preemption Delay Analysis for Floating Non-Preemptive Region Scheduling

Preemption Delay Analysis for Floating Non-Preemptive Region Scheduling Preemption Delay Analysis for Floating Non-Preemptive Region Scheduling José Marinho, Vincent Nélis, Stefan M. Petters, Isaelle Puaut To cite this version: José Marinho, Vincent Nélis, Stefan M. Petters,

More information

Section 2.1: Reduce Rational Expressions

Section 2.1: Reduce Rational Expressions CHAPTER Section.: Reduce Rational Expressions Section.: Reduce Rational Expressions Ojective: Reduce rational expressions y dividing out common factors. A rational expression is a quotient of polynomials.

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Le Song Machine Learning I CSE 6740, Fall 2013 Naïve Bayes classifier Still use Bayes decision rule for classification P y x = P x y P y P x But assume p x y = 1 is fully factorized

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

MINIMIZATION OF A CONVEX SEPARABLE EXPONENTIAL FUNCTION SUBJECT TO LINEAR EQUALITY CONSTRAINT AND BOX CONSTRAINTS

MINIMIZATION OF A CONVEX SEPARABLE EXPONENTIAL FUNCTION SUBJECT TO LINEAR EQUALITY CONSTRAINT AND BOX CONSTRAINTS ournl of Pure n Applie Mthemtics Avnces n Applictions Volume 9 Numer 2 203 Pges 07-35 MINIMIZATION OF A CONVEX SEPARABLE EXPONENTIAL FUNCTION SUBECT TO LINEAR EQUALITY CONSTRAINT AND BOX CONSTRAINTS Deprtment

More information

THE BALANCED DECOMPOSITION NUMBER AND VERTEX CONNECTIVITY

THE BALANCED DECOMPOSITION NUMBER AND VERTEX CONNECTIVITY THE BALANCED DECOMPOSITION NUMBER AND VERTEX CONNECTIVITY SHINYA FUJITA AND HENRY LIU Astract The alanced decomposition numer f(g) of a graph G was introduced y Fujita and Nakamigawa [Discr Appl Math,

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Boundary of the Set of Separable States

Boundary of the Set of Separable States Boundary of the Set of Separale States Mingjun Shi, Jiangfeng Du Laoratory of Quantum Communication and Quantum Computation, Department of Modern Physics, University of Science and Technology of China,

More information

CE 191: Civil & Environmental Engineering Systems Analysis. LEC 17 : Final Review

CE 191: Civil & Environmental Engineering Systems Analysis. LEC 17 : Final Review CE 191: Civil & Environmental Engineering Systems Analysis LEC 17 : Final Review Professor Scott Moura Civil & Environmental Engineering University of California, Berkeley Fall 2014 Prof. Moura UC Berkeley

More information

OBJECTIVE 4 EXPONENTIAL FORM SHAPE OF 5/19/2016. An exponential function is a function of the form. where b > 0 and b 1. Exponential & Log Functions

OBJECTIVE 4 EXPONENTIAL FORM SHAPE OF 5/19/2016. An exponential function is a function of the form. where b > 0 and b 1. Exponential & Log Functions OBJECTIVE 4 Eponential & Log Functions EXPONENTIAL FORM An eponential function is a function of the form where > 0 and. f ( ) SHAPE OF > increasing 0 < < decreasing PROPERTIES OF THE BASIC EXPONENTIAL

More information

New Infeasible Interior Point Algorithm Based on Monomial Method

New Infeasible Interior Point Algorithm Based on Monomial Method New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)

More information

Numerical Optimization of Partial Differential Equations

Numerical Optimization of Partial Differential Equations Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada

More information

RED. Name: Math 290 Fall 2016 Sample Exam 3

RED. Name: Math 290 Fall 2016 Sample Exam 3 RED Name: Math 290 Fall 2016 Sample Exam 3 Note that the first 10 questions are true false. Mark A for true, B for false. Questions 11 through 20 are multiple choice. Mark the correct answer on your ule

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS GERD WACHSMUTH Abstract. Kyparisis proved in 1985 that a strict version of the Mangasarian- Fromovitz constraint qualification (MFCQ) is equivalent to

More information

Lecture 6 January 15, 2014

Lecture 6 January 15, 2014 Advanced Graph Algorithms Jan-Apr 2014 Lecture 6 January 15, 2014 Lecturer: Saket Sourah Scrie: Prafullkumar P Tale 1 Overview In the last lecture we defined simple tree decomposition and stated that for

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Weak Keys of the Full MISTY1 Block Cipher for Related-Key Cryptanalysis

Weak Keys of the Full MISTY1 Block Cipher for Related-Key Cryptanalysis Weak eys of the Full MISTY1 Block Cipher for Related-ey Cryptanalysis Jiqiang Lu 1, Wun-She Yap 1,2, and Yongzhuang Wei 3,4 1 Institute for Infocomm Research, Agency for Science, Technology and Research

More information

Competing Auctions. Glenn Ellison*, Drew Fudenberg**, and Markus Mobius** First draft: November 28, This draft: March 6, 2003

Competing Auctions. Glenn Ellison*, Drew Fudenberg**, and Markus Mobius** First draft: November 28, This draft: March 6, 2003 Competing Auctions Glenn Ellison, Drew Fudenerg, and Markus Moius First draft: Novemer 8, 00 This draft: March 6, 003 This paper studies the conditions under which two competing and otherwise identical

More information

Nonlinear Optimization

Nonlinear Optimization Nonlinear Optimization Etienne de Klerk (UvT)/Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos Course WI3031 (Week 4) February-March, A.D. 2005 Optimization Group 1 Outline

More information

On the large time behavior of solutions of fourth order parabolic equations and ε-entropy of their attractors

On the large time behavior of solutions of fourth order parabolic equations and ε-entropy of their attractors On the large time ehavior of solutions of fourth order paraolic equations and ε-entropy of their attractors M.A. Efendiev & L.A. Peletier Astract We study the large time ehavior of solutions of a class

More information

Lecture 13: Constrained optimization

Lecture 13: Constrained optimization 2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems

More information

On Universality of Blow-up Profile for L 2 critical nonlinear Schrödinger Equation

On Universality of Blow-up Profile for L 2 critical nonlinear Schrödinger Equation On Universality of Blow-up Profile for L critical nonlinear Schrödinger Equation Frank Merle,, Pierre Raphael Université de Cergy Pontoise Institut Universitaire de France Astract We consider finite time

More information

Beyond Loose LP-relaxations: Optimizing MRFs by Repairing Cycles

Beyond Loose LP-relaxations: Optimizing MRFs by Repairing Cycles Beyond Loose LP-relaxations: Optimizing MRFs y Repairing Cycles Nikos Komodakis 1 and Nikos Paragios 2 1 University of Crete, komod@csd.uoc.gr 2 Ecole Centrale de Paris, nikos.paragios@ecp.fr Astract.

More information

Three Models and Some Theorems on Decomposition of Boolean Functions

Three Models and Some Theorems on Decomposition of Boolean Functions Three Models and Some Theorems on Decomposition of Boolean Functions Steinach, Bernd Freierg University of Mining and Technology stein@informati.tu-freierg.de Zarevsij, Aradij Institute of Engineering

More information

TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION

TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION Jiangping Wang and Dapeng Wu Department of Electrical and Computer Engineering University of Florida, Gainesville, FL 3611 Correspondence author: Prof.

More information

QUADRATIC EQUATIONS EXPECTED BACKGROUND KNOWLEDGE

QUADRATIC EQUATIONS EXPECTED BACKGROUND KNOWLEDGE 6 QUADRATIC EQUATIONS In this lesson, you will study aout quadratic equations. You will learn to identify quadratic equations from a collection of given equations and write them in standard form. You will

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information