MODIFIED GEOMETRIC PROGRAMMING PROBLEM AND ITS APPLICATIONS

J. Appl. Math. & Computing, No. 1-2 (2005).

MODIFIED GEOMETRIC PROGRAMMING PROBLEM AND ITS APPLICATIONS

SAHIDUL ISLAM AND TAPAN KUMAR ROY

Abstract. In this paper, we propose unconstrained and constrained posynomial Geometric Programming (GP) problems with negative or positive integral degree of difficulty. The conventional GP approach has been modified to solve some special types of GP problems. In the specific case where the degree of difficulty is negative, the normality and orthogonality conditions of the dual program give an overdetermined system of linear equations. No general solution vector exists for such a system, but an approximate solution can be determined by the least square method and also by the max-min method. Here, the modified form of the geometric programming method is demonstrated, and for that purpose the necessary theorems are derived. Finally, these are illustrated by numerical examples and applications.

AMS Mathematics Subject Classification: 90B05, 90C10, 65K18.
Key words and phrases: Geometric programming, residual vector, least square method, max-min method, modified geometric programming problem.

1. Introduction

Since the late 1960s, Geometric Programming (GP) has been known and used in various fields (OR, engineering sciences, etc.). Duffin, Peterson and Zener [5] and Zener [14] discussed the basic theory of GP with engineering applications in their books. Another famous book on GP and its applications appeared in 1976 [3]. There are many references on the applications and methods of GP in the survey paper by Ecker [6]. They described GP with positive or zero degree

Received June 13; revised April 17. Corresponding author. This research was supported by a C.S.I.R. junior research fellowship in the Department of Mathematics, Bengal Engineering College (A Deemed University). This support is gratefully acknowledged.
(c) 2005 Korean Society for Computational & Applied Mathematics and Korean SIGGAM.

of difficulty. But there may be some GP problems with negative degree of difficulty; Sinha et al. [13] proposed these theoretically. Abou-El-Ata and his group applied a modified form of GP to inventory models ([1], [7]). The authors [8] also applied MGP with negative degree of difficulty to an inventory model.

In this paper we propose unconstrained/constrained GP and a modified form of Geometric Programming (MGP) problem with negative or positive integral degree of difficulty. The modified form of the geometric programming method is demonstrated, and for that purpose the necessary theorems are derived. The least square and max-min methods are described to obtain approximate solutions for the dual variables of GP/MGP problems with negative degree of difficulty. Finally, these are illustrated by numerical examples and applications.

2. Unconstrained problem

2.1. Geometric Programming (GP) problem

Primal program: the Primal Geometric Programming (PGP) problem is

    Minimize  g(t) = \sum_{k=1}^{T_0} C_k \prod_{j=1}^{m} t_j^{\alpha_{kj}}        (1)
    subject to  t_j > 0  (j = 1, 2, ..., m),

where $C_k > 0$ ($k = 1, 2, \ldots, T_0$) and the $\alpha_{kj}$ are arbitrary real numbers. It is an unconstrained posynomial geometric programming problem with Degree of Difficulty (DD) $= T_0 - (m + 1)$.

Dual program: the Dual Programming (DP) problem of (1) is

    Maximize  \nu(\delta) = \prod_{k=1}^{T_0} (C_k/\delta_k)^{\delta_k}        (2)
    subject to
        \sum_{k=1}^{T_0} \delta_k = 1,                                   (Normality condition)
        \sum_{k=1}^{T_0} \alpha_{kj}\,\delta_k = 0,  j = 1, 2, ..., m,   (Orthogonality conditions)
        \delta_k > 0,  k = 1, 2, ..., T_0,                               (Positivity conditions)

where $\delta = (\delta_1, \delta_2, \ldots, \delta_{T_0})^T$.

Case I: For $T_0 \ge m + 1$, the dual program presents a system of linear equations for the dual variables in which the number of equations is less than or equal to the number of dual variables. One or more solution vectors exist for the dual variables [3].

Case II: For $T_0 < m + 1$, the dual program presents a system of linear equations for the dual variables in which the number of equations is greater than the number of dual variables. In this case generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the Least Square (LS) or the Max-Min (MM) [4] method. These are applied to solve such a system of linear equations.

Once the optimal dual variable vector $\delta^*$ is known, the corresponding values of the primal variable vector $t^*$ are found from the relations

    C_k \prod_{j=1}^{m} t_j^{\alpha_{kj}} = \delta_k^* \, \nu(\delta^*),  k = 1, 2, ..., T_0.        (3)

Taking logarithms in (3), the $T_0$ log-linear simultaneous equations become

    \sum_{j=1}^{m} \alpha_{kj} \log t_j = \log\big( \delta_k^* \, \nu(\delta^*) / C_k \big),  k = 1, 2, ..., T_0.        (4)

This is a system of linear equations in $x_j = \log t_j$ for $j = 1, 2, \ldots, m$. Since there are more primal variables $t_j$ than terms $T_0$, many solutions $t_j$ may exist. So, to find the optimal primal variables $t_j^*$, it remains to minimize the primal objective function with respect to the remaining $m - T_0$ ($\ge 0$) variables. When $m - T_0 = 0$, i.e., the number of primal variables equals the number of log-linear equations, the primal variables can be determined uniquely from the log-linear equations.

2.1.1. Residual vector

A system of linear equations (say $m$ equations in $n$ unknown variables, with $m > n$) takes the form

    \sum_{j=1}^{n} \alpha_{ij}\,\delta_j = b_i,  i = 1, 2, ..., m,        (5)

i.e., $A\delta = B$, where $A = [\alpha_{ij}]_{m \times n}$, $\delta = [\delta_1, \delta_2, \ldots, \delta_n]^T$ and $B = [b_1, b_2, \ldots, b_m]^T$. The residual vector is defined as

    R = A\delta - B,        (6)

where $R = [r_1, r_2, \ldots, r_m]^T$.

2.1.2. Least Square (LS) method

Approximate solutions of the system of linear equations (5) can be determined by the LS method [9]: find $\delta_j$ ($j = 1, 2, \ldots, n$) which minimize $R^T R = \sum_{i=1}^{m} r_i^2$. Setting the partial derivatives of $\sum_i r_i^2$ with respect to the $\delta_j$ equal to zero, the following $n$ normal equations are obtained:

    (\sum_i \alpha_{i1}^2)\,\delta_1 + (\sum_i \alpha_{i1}\alpha_{i2})\,\delta_2 + ... + (\sum_i \alpha_{i1}\alpha_{in})\,\delta_n = \sum_i \alpha_{i1} b_i,
    (\sum_i \alpha_{i2}\alpha_{i1})\,\delta_1 + (\sum_i \alpha_{i2}^2)\,\delta_2 + ... + (\sum_i \alpha_{i2}\alpha_{in})\,\delta_n = \sum_i \alpha_{i2} b_i,        (7)
    ...
    (\sum_i \alpha_{in}\alpha_{i1})\,\delta_1 + (\sum_i \alpha_{in}\alpha_{i2})\,\delta_2 + ... + (\sum_i \alpha_{in}^2)\,\delta_n = \sum_i \alpha_{in} b_i.

It is a symmetric positive definite system of equations in the $\delta_j$, and it determines the components of the dual variables $\delta_j$ ($j = 1, 2, \ldots, n$).

2.1.3. Max-Min (MM) method

Approximate solutions of the system of linear equations (5) can also be determined by the MM method. By this method one obtains the dual variable vector $\delta$ for which the absolutely largest component of the residual vector $R = A\delta - B$ becomes minimum, i.e.,

    Min Max(|r_1|, |r_2|, ..., |r_m|)        (8)
    subject to
        \alpha_{i1}\delta_1 + \alpha_{i2}\delta_2 + ... + \alpha_{in}\delta_n - b_i = r_i,  i = 1, 2, ..., m,
        \delta_j \ge 0,  j = 1, 2, ..., n.

Introducing an auxiliary variable $r$, the above problem (8) is equivalent to

    Minimize  r        (9)
    subject to
        \alpha_{i1}\delta_1 + \alpha_{i2}\delta_2 + ... + \alpha_{in}\delta_n - b_i \le r,
        -\alpha_{i1}\delta_1 - \alpha_{i2}\delta_2 - ... - \alpha_{in}\delta_n + b_i \le r,   i = 1, 2, ..., m,
        \delta_j \ge 0,  j = 1, 2, ..., n.
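Both approximation schemes can be sketched numerically. The snippet below is an illustrative sketch, not taken from the paper: the small inconsistent 3x2 system is invented for demonstration. It solves the normal equations (7) with NumPy and the auxiliary-variable form (9), which is already a linear program, with SciPy's linprog.

```python
# Sketch of the LS and MM approximations for an inconsistent system
# A.delta = b (3 equations, 2 unknowns).  The matrix and right-hand
# side below are illustrative only.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])
b = np.array([2.0, 0.0, 4.0])   # inconsistent: no exact solution exists

# --- Least Square method: solve the normal equations (7), A^T A delta = A^T b.
delta_ls = np.linalg.solve(A.T @ A, A.T @ b)

# --- Max-Min method: minimize r subject to |A delta - b| <= r, delta >= 0,
# i.e. the auxiliary-variable form (9).  Decision variables: (delta_1, delta_2, r).
n_eq, n_var = A.shape
c = np.zeros(n_var + 1)
c[-1] = 1.0                                   # objective: minimize r
A_ub = np.block([[A, -np.ones((n_eq, 1))],    #  A delta - r <= b
                 [-A, -np.ones((n_eq, 1))]])  # -A delta - r <= -b
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_var + 1))
delta_mm, r_mm = res.x[:-1], res.x[-1]
```

For this particular system the MM solution equalizes all three residuals at 1/3 in absolute value, which is smaller than the largest least-squares residual, 3/7; the two methods generally give different approximate dual vectors.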

Writing $\delta_j' = \delta_j / r$ ($j = 1, 2, \ldots, n$) and $\delta_0' = 1/r$, (9) becomes a Linear Programming Problem (LPP) as follows:

    Maximize  \delta_0'        (10)
    subject to
        \alpha_{i1}\delta_1' + \alpha_{i2}\delta_2' + ... + \alpha_{in}\delta_n' - b_i\,\delta_0' \le 1,
        -\alpha_{i1}\delta_1' - \alpha_{i2}\delta_2' - ... - \alpha_{in}\delta_n' + b_i\,\delta_0' \le 1,   i = 1, 2, ..., m,
        \delta_j' \ge 0,  j = 0, 1, 2, ..., n.

After calculating $\delta_0', \delta_1', \delta_2', \ldots, \delta_n'$ from this LPP (10), the approximate solutions of the dual variables $\delta_j$ can be obtained. So, applying either the LS or the MM method, the dual variables can be determined, and then the corresponding optimal dual objective value of (2) is found.

Example 1 (Unconstrained GP problem).

    Min  Z(t) = t_1 t_2 t_3^{-2} + 2 t_3 t_1^{-1} t_2^{-1} + 5 t_3        (a)
    subject to  t_1, t_2, t_3 > 0.

Here DD $= 3 - (3 + 1) = -1$ ($< 0$). It is a posynomial geometric programming problem with negative degree of difficulty.

Geometric programming method:

    Maximize  \nu(\delta) = (1/\delta_1)^{\delta_1} (2/\delta_2)^{\delta_2} (5/\delta_3)^{\delta_3}        (b)
    subject to
        \delta_1 + \delta_2 + \delta_3 = 1,
        \delta_1 - \delta_2 = 0,
        \delta_1 - \delta_2 = 0,
        -2\delta_1 + \delta_2 + \delta_3 = 0,
        \delta_1, \delta_2, \delta_3 > 0.

It is a system of 4 linear equations in 3 unknown variables. Approximate solutions of this system of linear equations by the LS method are $\delta_1^* = 0.333$, $\delta_2^* = 0.333$, $\delta_3^* = 0.333$, with corresponding optimal dual objective value $\nu(\delta^*) = 6.4632$. So for the primal decision variables, the following system of non-linear equations is found:

    t_1 t_2 t_3^{-2} = 2.1522,
    2 t_3 t_1^{-1} t_2^{-1} = 2.1522,
    5 t_3 = 2.1522,

and the corresponding system of log-linear equations is

    x_1 + x_2 - 2x_3 = \log 2.1522,
    -x_1 - x_2 + x_3 = \log(2.1522/2),        (c)
    x_3 = \log(2.1522/5),

where $x_i = \log t_i$ ($i = 1, 2, 3$). Solving this system of linear equations (c), the optimal primal variables $t_i^* = e^{x_i^*}$ ($i = 1, 2, 3$) are obtained. These are given in Table 1.

Non-Linear Programming (NLP) method: from the Karush-Kuhn-Tucker necessary conditions, we have

    t_2 t_3^{-2} - 2 t_3 t_1^{-2} t_2^{-1} = 0,
    t_1 t_3^{-2} - 2 t_3 t_1^{-1} t_2^{-2} = 0,
    -2 t_1 t_2 t_3^{-3} + 2 t_1^{-1} t_2^{-1} + 5 = 0.

Solving this system of non-linear equations, the optimal solutions $t_1^*, t_2^*, t_3^*$ are obtained. These are also given in Table 1. It is seen that the GP method gives a better result than the NLP method [2].

Table 1. Optimal solutions of Example 1
    Methods    t_1*    t_2*    t_3*    Z(t*)
    GP
    NLP

2.2. Modified geometric programming (MGP) problem

Primal program: a special type (which is a separable function) of the Primal Geometric Programming (PGP) problem is

    Minimize  g(t) = \sum_{i=1}^{n} g_i(t) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} U_{ik}(t) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} C_{ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{ikj}}        (11)
    subject to  t_{ij} > 0  (i = 1, 2, ..., n;  j = 1, 2, ..., m).

Here $C_{ik} > 0$ ($k = 1, 2, \ldots, T_0$) and the $\alpha_{ikj}$ are real numbers. It is an unconstrained posynomial geometric programming problem with DD $= nT_0 - (nm + 1)$.
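Before moving on, Example 1 from Section 2.1 can be checked end to end with NumPy: the LS solution of the dual system, the dual objective value, and the primal recovery through the log-linear system (4). This is a sketch, not part of the paper; since the log-linear system only determines the product $t_1 t_2$, the minimum-norm choice $t_1 = t_2$ below is one of the many solutions noted in the text.

```python
# Numerical check of Example 1 (illustrative sketch).
import numpy as np

# Normality plus the three orthogonality rows of the dual (b).
A = np.array([[1.0,  1.0, 1.0],   # delta1 + delta2 + delta3 = 1
              [1.0, -1.0, 0.0],   # orthogonality for t1
              [1.0, -1.0, 0.0],   # orthogonality for t2
              [-2.0, 1.0, 1.0]])  # orthogonality for t3
b = np.array([1.0, 0.0, 0.0, 0.0])

# Least-squares solution of the overdetermined dual system; this
# particular system happens to be consistent, so the residual is zero.
delta = np.linalg.lstsq(A, b, rcond=None)[0]

# Dual objective nu(delta) = (1/d1)^d1 (2/d2)^d2 (5/d3)^d3.
C = np.array([1.0, 2.0, 5.0])
nu = float(np.prod((C / delta) ** delta))

# Primal recovery via the log-linear system (4):
# sum_j alpha_kj * log t_j = log(delta_k * nu / C_k).
alpha = np.array([[ 1.0,  1.0, -2.0],
                  [-1.0, -1.0,  1.0],
                  [ 0.0,  0.0,  1.0]])
x = np.linalg.lstsq(alpha, np.log(delta * nu / C), rcond=None)[0]
t = np.exp(x)   # minimum-norm solution, which picks t1 = t2

Z = t[0] * t[1] / t[2] ** 2 + 2 * t[2] / (t[0] * t[1]) + 5 * t[2]
```

The recovered values give $Z(t^*) = \nu(\delta^*)$ up to floating-point error, confirming the primal-dual equality for this example.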

Dual program: according to Hariri and Abou-El-Ata [7], the Dual Programming (DP) problem of (11) is as follows:

    Maximize  \nu(\delta) = \prod_{i=1}^{n} \prod_{k=1}^{T_0} (c_{ik}/\delta_{ik})^{\delta_{ik}}        (12)
    subject to
        \sum_{k=1}^{T_0} \delta_{ik} = 1,  i = 1, 2, ..., n,                                          (Normality conditions)
        \sum_{k=1}^{T_0} \alpha_{ikj}\,\delta_{ik} = 0,  i = 1, 2, ..., n;  j = 1, 2, ..., m,         (Orthogonality conditions)
        \delta_{ik} > 0,  i = 1, 2, ..., n;  k = 1, 2, ..., T_0,                                      (Positivity conditions)

where $\delta = (\delta_{11}, \delta_{12}, \ldots, \delta_{nT_0})^T$.

Case I: For $nT_0 \ge nm + n$, the DP presents a system of linear equations for the dual variables. A solution vector for the dual variables exists [3].

Case II: For $nT_0 < nm + n$, generally no solution vector for the dual variables exists. However, using either the LS or the MM method one can get an approximate solution vector for this system (as explained in 2.1.2 and 2.1.3).

Theorem 2.2.1. If $t$ is a feasible vector for the unconstrained PGP (11) and $\delta$ is a feasible vector for the corresponding DP (12), then

    g(t) \ge n \, \sqrt[n]{\nu(\delta)}        (Primal-Dual Inequality).

Proof. The expression for $g(t)$ can be written as

    g(t) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} c_{ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{ikj}}.

For each $i$, the weights are $\delta_{i1}, \delta_{i2}, \ldots, \delta_{iT_0}$ and the positive terms are

    \frac{c_{i1} \prod_{j=1}^{m} t_{ij}^{\alpha_{i1j}}}{\delta_{i1}}, \quad \frac{c_{i2} \prod_{j=1}^{m} t_{ij}^{\alpha_{i2j}}}{\delta_{i2}}, \quad \ldots, \quad \frac{c_{iT_0} \prod_{j=1}^{m} t_{ij}^{\alpha_{iT_0 j}}}{\delta_{iT_0}}.

Applying the weighted A.M.-G.M. inequality to each $g_i(t)$, and using the normality condition $\sum_{k=1}^{T_0} \delta_{ik} = 1$ of (12),

    g_i(t) = \sum_{k=1}^{T_0} c_{ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{ikj}}
           \ge \prod_{k=1}^{T_0} \Big( \frac{c_{ik}}{\delta_{ik}} \Big)^{\delta_{ik}} \prod_{j=1}^{m} t_{ij}^{\sum_{k=1}^{T_0} \alpha_{ikj}\delta_{ik}}
           = \prod_{k=1}^{T_0} \Big( \frac{c_{ik}}{\delta_{ik}} \Big)^{\delta_{ik}},

since $\sum_{k=1}^{T_0} \alpha_{ikj}\delta_{ik} = 0$ by the orthogonality condition of (12). Applying the A.M.-G.M. inequality once more, this time over $i$,

    g(t) = \sum_{i=1}^{n} g_i(t) \ge n \Big( \prod_{i=1}^{n} g_i(t) \Big)^{1/n}
         \ge n \Big( \prod_{i=1}^{n} \prod_{k=1}^{T_0} \big( c_{ik}/\delta_{ik} \big)^{\delta_{ik}} \Big)^{1/n}
         = n \, \sqrt[n]{\nu(\delta)}.

This completes the proof.

Theorem 2.2.2. If $t^* = (t^*_{i1}, \ldots, t^*_{im})$, $i = 1, 2, \ldots, n$, is a solution of the PGP (11), then the corresponding DP (12) is consistent. Moreover, the vector $\delta^* = (\delta^*_{i1}, \ldots, \delta^*_{iT_0})$ ($i = 1, 2, \ldots, n$) defined by

    \delta^*_{ik} = \frac{u_{ik}(t^*)}{g_i(t^*)},  k = 1, 2, ..., T_0,

where $u_{ik}(t^*) = c_{ik} \prod_{j=1}^{m} (t^*_{ij})^{\alpha_{ikj}}$ is the $k$th term of $g(t)$ for the $i$th item, is a solution of the DP, and equality holds in the primal-dual inequality, i.e., $g(t^*) = n \, \sqrt[n]{\nu(\delta^*)}$.

Proof. A typical term of $g(t)$ is

    u_{ik}(t) = c_{ik}\, t_{i1}^{\alpha_{ik1}} t_{i2}^{\alpha_{ik2}} \cdots t_{im}^{\alpha_{ikm}},  i = 1, 2, ..., n;  k = 1, 2, ..., T_0.        (13)

So

    t_{ij} \frac{\partial u_{ik}(t)}{\partial t_{ij}} = \alpha_{ikj}\, u_{ik}(t).        (14)

Since $t^* = (t^*_{i1}, \ldots, t^*_{im})$ is a minimizer of $g(t)$, it follows that

    \frac{\partial g(t^*)}{\partial t_{ij}} = \sum_{k=1}^{T_0} \frac{\partial u_{ik}(t^*)}{\partial t_{ij}} = 0.        (15)

Using (14) we have

    \sum_{k=1}^{T_0} \alpha_{ikj}\, u_{ik}(t^*) = 0.        (16)

Since $g_i(t^*) > 0$, we can divide both sides of (16) by $g_i(t^*)$ to obtain

    \sum_{k=1}^{T_0} \alpha_{ikj} \Big( \frac{u_{ik}(t^*)}{g_i(t^*)} \Big) = 0.        (17)

Consequently, if we define $\delta^*_{ik} = u_{ik}(t^*)/g_i(t^*)$, then $\delta^* = (\delta^*_{i1}, \ldots, \delta^*_{iT_0})$ satisfies the orthogonality conditions of the DP. Also $\delta^*_{ik} > 0$, so the positivity conditions are satisfied. Finally,

    \sum_{k=1}^{T_0} \delta^*_{ik} = \sum_{k=1}^{T_0} \frac{u_{ik}(t^*)}{g_i(t^*)} = \frac{g_i(t^*)}{g_i(t^*)} = 1,  i = 1, 2, ..., n,

hence the normality conditions hold. We conclude that the vector $\delta^*$ is feasible for the dual problem, so the DP is consistent.

Now, by the definition of $\delta^*$, for each $i$ we have

    g_i(t^*) = \frac{u_{i1}(t^*)}{\delta^*_{i1}} = \frac{u_{i2}(t^*)}{\delta^*_{i2}} = \ldots = \frac{u_{iT_0}(t^*)}{\delta^*_{iT_0}},

so equality holds throughout the weighted A.M.-G.M. steps in the proof of Theorem 2.2.1, and

    g(t^*) = n \Big( \prod_{i=1}^{n} g_i(t^*) \Big)^{1/n}
           = n \Big( \prod_{i=1}^{n} \prod_{k=1}^{T_0} \big( c_{ik}/\delta^*_{ik} \big)^{\delta^*_{ik}} \Big)^{1/n}
           = n \, \sqrt[n]{\nu(\delta^*)},

i.e., $g(t^*) = n\,\sqrt[n]{\nu(\delta^*)}$. Hence equality holds in the primal-dual inequality. This implies that $\delta^*$ is an optimal dual variable vector of the DP with components $\delta^*_{ik} = u_{ik}(t^*)/g_i(t^*)$, $i = 1, 2, \ldots, n$; $k = 1, 2, \ldots, T_0$. This completes the proof.

Following Theorem 2.2.2, we can obtain dual variables $\delta$ which maximize $\nu(\delta)$ and for which

    g_i(t) = \frac{u_{ik}(t)}{\delta_{ik}} = \sqrt[n]{\nu(\delta)},

i.e.,

    u_{ik}(t) = c_{ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{ikj}} = \delta_{ik}\, \sqrt[n]{\nu(\delta)},  i = 1, 2, ..., n;  k = 1, 2, ..., T_0.        (18)

Taking logarithms in (18), the linear simultaneous equations are transformed as

    \sum_{j=1}^{m} \alpha_{ikj} \log t_{ij} = \log\Big( \frac{\delta_{ik}\, \sqrt[n]{\nu(\delta)}}{c_{ik}} \Big).        (19)

It is a system of linear equations in $x_{ij} = \log t_{ij}$. Since there are more primal variables $t_{ij}$ than the number of terms $nT_0$, many solutions $t_{ij}$ may exist. So, to find the optimal primal variables $t^*_{ij}$, it remains to minimize the primal objective function with respect to the remaining $nm - nT_0$ ($\ge 0$) variables. When $nm - nT_0 = 0$, the primal variables can be determined uniquely from the log-linear equations.

Example 2 (Unconstrained MGP problem). Consider the problem

    Minimize  Z(t) = \sum_{i=1}^{2} \sum_{k=1}^{3} C_{ik} \prod_{j=1}^{3} t_{ij}^{\alpha_{ikj}},        (20)
    subject to  t_{ij} > 0  (i = 1, 2;  j = 1, 2, 3),

where $C_{11} = 1$, $C_{12} = 1$, $C_{13} = 2$, $C_{21} = 3$, $C_{22} = 5$, $C_{23} = 4$, and the exponents are
$\alpha_{111} = 1$, $\alpha_{211} = 1$, $\alpha_{121} = 2$, $\alpha_{221} = 1$, $\alpha_{131} = 1$, $\alpha_{231} = 3$,
$\alpha_{112} = 1$, $\alpha_{212} = 1$, $\alpha_{122} = 1$, $\alpha_{222} = 2$, $\alpha_{132} = 2$, $\alpha_{232} = 1$,
$\alpha_{113} = 0$, $\alpha_{213} = 0$, $\alpha_{123} = 1$, $\alpha_{223} = 0$, $\alpha_{133} = 0$, $\alpha_{233} = 1$.

The Dual Programming problem of the MGP (20) is as follows:

    Maximize  \nu(\delta) = \prod_{i=1}^{2} \prod_{k=1}^{3} (C_{ik}/\delta_{ik})^{\delta_{ik}}
    subject to
        \delta_{11} + \delta_{12} + \delta_{13} = 1,
        \delta_{21} + \delta_{22} + \delta_{23} = 1,
        \delta_{11} - \delta_{12} = 0,
        -2\delta_{11} + \delta_{12} + \delta_{13} = 0,
        \delta_{11} - 2\delta_{12} = 0,
        \delta_{21} - \delta_{22} = 0,
        \delta_{21} - 2\delta_{22} = 0,
        -3\delta_{21} + \delta_{22} + \delta_{23} = 0,

where $\delta = (\delta_{11}, \delta_{12}, \delta_{13}, \delta_{21}, \delta_{22}, \delta_{23})^T$. This is a system of eight linear equations in six unknown variables. Applying the LS method, the above system of linear equations reduces to

    \delta_{11} - \delta_{12} = 0,
    -2\delta_{11} + \delta_{12} + \delta_{13} = 0,
    \delta_{11} + \delta_{12} + \delta_{13} = 1,
    \delta_{21} - 2\delta_{22} = 0,
    -3\delta_{21} + \delta_{22} + \delta_{23} = 0,
    \delta_{21} + \delta_{22} + \delta_{23} = 1,
    \delta_{11}, \delta_{12}, \delta_{13}, \delta_{21}, \delta_{22}, \delta_{23} > 0.

Approximate solutions of this system of linear equations are $\delta^*_{11} = 0.333$, $\delta^*_{12} = 0.333$, $\delta^*_{13} = 0.333$, $\delta^*_{21} = 0.25$, $\delta^*_{22} = 0.125$, $\delta^*_{23} = 0.625$, with optimal dual objective value $\nu(\delta^*)$; the corresponding value of the objective function is $g(t^*) = 2\sqrt{\nu(\delta^*)}$, so Theorem 2.2.1 is verified. The following system of non-linear equations gives the optimal primal variables:

    t_{11} t_{21} t_{31}^{-2} = 4.3856,
    t_{12} t_{22} t_{32}^{-2} = 3.2925,
    2 t_{31} t_{11}^{-1} t_{21}^{-1} = 4.3856,
    2 t_{32} t_{12}^{-1} t_{22}^{-1} = 1.6463,
    5 t_{31} = 8.2313,
    5 t_{32} = .

Solving this system of non-linear equations, the optimal solutions are determined. The optimal solutions of this problem by MGP are given in Table 2, together with the optimal solutions of Example 2 by NLP. It is seen that the MGP method gives a better result than the NLP method, which is expected.
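The two decoupled dual systems in Example 2 are square and nonsingular after the reduction described above, so they can be solved exactly. A brief NumPy sketch, assuming only the reduced systems quoted in the text:

```python
# The two decoupled reduced dual systems of Example 2, solved exactly.
import numpy as np

# Item i = 1: delta11 - delta12 = 0, -2 delta11 + delta12 + delta13 = 0,
#             delta11 + delta12 + delta13 = 1.
A1 = np.array([[1.0, -1.0, 0.0],
               [-2.0, 1.0, 1.0],
               [1.0,  1.0, 1.0]])
# Item i = 2: delta21 - 2 delta22 = 0, -3 delta21 + delta22 + delta23 = 0,
#             delta21 + delta22 + delta23 = 1.
A2 = np.array([[1.0, -2.0, 0.0],
               [-3.0, 1.0, 1.0],
               [1.0,  1.0, 1.0]])
e = np.array([0.0, 0.0, 1.0])   # right-hand sides (normality row last)

d1 = np.linalg.solve(A1, e)     # dual weights for item 1
d2 = np.linalg.solve(A2, e)     # dual weights for item 2
```

Both solution vectors satisfy the normality condition (their components sum to 1) and reproduce the values quoted in the text, (0.333, 0.333, 0.333) and (0.25, 0.125, 0.625).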

Table 2. Optimal solutions of Example 2
    Methods    t_11*    t_21*    t_31*    t_12*    t_22*    t_32*    Z(t*)
    MGP
    NLP

3. Constrained problem

3.1. Geometric Programming (GP) problem

Primal program:

    Minimize  g_0(t) = \sum_{k=1}^{T_0} C_{0k} \prod_{j=1}^{m} t_j^{\alpha_{0kj}}        (21)
    subject to
        g_r(t) = \sum_{k=1+T_{r-1}}^{T_r} C_{rk} \prod_{j=1}^{m} t_j^{\alpha_{rkj}} \le 1,  r = 1, 2, ..., l,
        t_j > 0,  j = 1, 2, ..., m,

where $C_{0k} > 0$ ($k = 1, 2, \ldots, T_0$), $C_{rk} > 0$ and the $\alpha_{rkj}$ ($k = 1 + T_{r-1}, \ldots, T_r$; $r = 0, 1, 2, \ldots, l$; $j = 1, 2, \ldots, m$) are real numbers. It is a constrained posynomial PGP problem. The number of terms in each posynomial constraint function varies, and it is denoted by $T_r$ for each $r = 0, 1, 2, \ldots, l$. Let $T$ be the total number of terms in the primal program. The Degree of Difficulty is DD $= T - (m + 1)$.

Dual program: the dual programming problem of (21) is as follows:

    Maximize  \nu(\delta) = \prod_{r=0}^{l} \prod_{k=1+T_{r-1}}^{T_r} \Big( \frac{c_{rk}}{\delta_{rk}} \sum_{s=1+T_{r-1}}^{T_r} \delta_{rs} \Big)^{\delta_{rk}}        (22)
    subject to
        \sum_{k=1}^{T_0} \delta_{0k} = 1,        (Normality condition)

        \sum_{r=0}^{l} \sum_{k=1+T_{r-1}}^{T_r} \alpha_{rkj}\,\delta_{rk} = 0,  j = 1, 2, ..., m,   (Orthogonality conditions)
        \delta_{rk} > 0,  r = 0, 1, 2, ..., l;  k = 1+T_{r-1}, ..., T_r.                            (Positivity conditions)

Case I. For $T \ge m + 1$, the dual program presents a system of linear equations for the dual variables. A solution vector exists for the dual variables [3].

Case II. For $T < m + 1$, generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the LS or the MM method discussed earlier. The solution procedure for this GP problem is the same as for the unconstrained GP problem.

Example 3 (Constrained GP problem).

    Minimize  Z(t) = 20 t_3^{-1} t_4 + 40 t_2^{-6} t_4^{2}
    subject to  t_1^{-3} t_2 + t_1^{-1} t_2^{4} t_3^{-2} t_4 \le 1,
                t_1, t_2, t_3, t_4 > 0.

It is a constrained posynomial geometric programming problem with degree of difficulty $-1$. This problem is also solved by GP and by NLP as discussed before, and the optimal solutions are shown in Table 3.

Table 3. Optimal solutions of Example 3
    Methods    t_1*    t_2*    t_3*    t_4*    Z(t*)
    GP
    NLP

3.2. Modified Geometric Programming (MGP) problem

Primal program:

    Minimize  g_0(t) = \sum_{i=1}^{n} g_{i0}(t) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} C_{0ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{0ikj}}        (23)
    subject to
        g_r(t) = \sum_{i=1}^{n} \sum_{k=T_{r-1}+1}^{T_r} C_{rik} \prod_{j=1}^{m} t_{ij}^{\alpha_{rikj}} \le 1,  r = 1, 2, ..., l,
        t_{ij} > 0,  i = 1, 2, ..., n;  j = 1, 2, ..., m,

where $C_{rik} > 0$ and the $\alpha_{rikj}$ ($k = T_{r-1}+1, \ldots, T_r$) are real numbers. An associated Convex Geometric Programming (CGP) problem is

    Minimize  h_0(x) = \sum_{i=1}^{n} h_{i0}(x) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} c_{0ik}\, e^{\sum_{j=1}^{m} \alpha_{0ikj} x_{ij}}        (24)
    subject to
        h_r(x) - 1 = \sum_{i=1}^{n} \sum_{k=T_{r-1}+1}^{T_r} c_{rik}\, e^{\sum_{j=1}^{m} \alpha_{rikj} x_{ij}} - 1 \le 0,  r = 1, 2, ..., l,

for $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, m$. Moreover, because $t = e^x$ is a strictly increasing function, the GP and the CGP are equivalent in the sense that $t^* = (t^*_{i1}, t^*_{i2}, \ldots, t^*_{im})$ is a solution of the GP (23) if and only if $x^* = (x^*_{i1}, x^*_{i2}, \ldots, x^*_{im})$ is a solution of the CGP (24), where $t^* = e^{x^*}$.

Dual program: according to Hariri and Abou-El-Ata [7], the dual programming problem of (23) is

    Maximize  \nu(\delta) = \prod_{i=1}^{n} \prod_{r=0}^{l} \prod_{k=T_{r-1}+1}^{T_r} \Big( \frac{c_{rik}}{\delta_{ik}} \sum_{s=T_{r-1}+1}^{T_r} \delta_{is} \Big)^{\delta_{ik}}        (25)
    subject to
        \sum_{k=1}^{T_0} \delta_{ik} = 1,        (Normality conditions)
        \sum_{k=1}^{T_0} \alpha_{0ikj}\,\delta_{ik} + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\,\delta_{ik} = 0,  j = 1, 2, ..., m,   (Orthogonality conditions)
        \delta_{ik} > 0,  k = T_{r-1}+1, ..., T_r;  r = 0, 1, 2, ..., l,        (Positivity conditions)

for $i = 1, 2, \ldots, n$, where $\delta = (\delta_{11}, \delta_{12}, \ldots, \delta_{nT_0})^T$.

Case I. For $nT_0 \ge nm + n$, the dual program presents a system of linear equations for the dual variables. A solution vector exists for the dual variables [3].

Case II. For $nT_0 < nm + n$, generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the LS or the MM method. The solution procedure for this MGP problem is the same as for the unconstrained MGP problem.

Example 4 (Constrained MGP problem).

    Minimize  Z(t) = 20 t_{31}^{-1} t_{41} + 40 t_{21}^{-6} t_{41}^{2} + 20 t_{32}^{-1} t_{42} + 40 t_{22}^{-6} t_{42}^{2}

    subject to
        t_{11}^{-3} t_{21} + t_{11}^{-1} t_{21}^{4} t_{31}^{-2} t_{41} \le 1,
        t_{12}^{-3} t_{22} + t_{12}^{-1} t_{22}^{4} t_{32}^{-2} t_{42} \le 1,
        t_{11}, t_{21}, t_{31}, t_{41}, t_{12}, t_{22}, t_{32}, t_{42} > 0.

It is solved by the MGP and also by the NLP method; the optimal results are given in Table 4.

Table 4. Optimal solutions of Example 4
    Methods    t_11*   t_21*   t_31*   t_41*   t_12*   t_22*   t_32*   t_42*   Z(t*)
    MGP
    NLP

Theorem 3.2.1. If $t$ is a feasible vector for the constrained PGP (23) and $\delta$ is a feasible vector for the corresponding DP (25), then

    g_0(t) \ge n \, \sqrt[n]{\nu(\delta)}        (Primal-Dual Inequality).

Proof. The expression for $g_0(t)$ can be written as

    g_0(t) = \sum_{i=1}^{n} \sum_{k=1}^{T_0} c_{0ik} \prod_{j=1}^{m} t_{ij}^{\alpha_{0ikj}}.        (26)

We can apply the weighted A.M.-G.M. inequality to this expression for $g_0(t)$, together with the normality conditions of (25), and obtain

    \Big( \frac{g_0(t)}{n} \Big)^{n} \ge \prod_{i=1}^{n} \prod_{k=1}^{T_0} \Big( \frac{C_{0ik}}{\delta_{ik}} \Big)^{\delta_{ik}} \prod_{j=1}^{m} t_{ij}^{\sum_{k=1}^{T_0} \alpha_{0ikj}\,\delta_{ik}}.        (27)

Again, $g_r(t)$ can be written as

    g_r(t) = \sum_{i=1}^{n} \sum_{k=T_{r-1}+1}^{T_r} c_{rik} \prod_{j=1}^{m} t_{ij}^{\alpha_{rikj}}.        (28)

Applying the weighted A.M.-G.M. inequality in (28), and using $g_r(t) \le 1$ ($r = 1, 2, \ldots, l$), we have

    1 \ge \big( g_r(t) \big)^{\sum_{i=1}^{n} \sum_{k=T_{r-1}+1}^{T_r} \delta_{ik}}
        \ge \prod_{i=1}^{n} \prod_{k=T_{r-1}+1}^{T_r} \Big( \frac{c_{rik}}{\delta_{ik}} \sum_{s=T_{r-1}+1}^{T_r} \delta_{is} \Big)^{\delta_{ik}} \prod_{j=1}^{m} t_{ij}^{\sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\,\delta_{ik}},        (29)

for $r = 1, 2, \ldots, l$. Multiplying (27) and (29) we have

    \Big( \frac{g_0(t)}{n} \Big)^{n} \ge \prod_{i=1}^{n} \prod_{r=0}^{l} \prod_{k=T_{r-1}+1}^{T_r} \Big( \frac{c_{rik}}{\delta_{ik}} \sum_{s=T_{r-1}+1}^{T_r} \delta_{is} \Big)^{\delta_{ik}} \prod_{j=1}^{m} t_{ij}^{\sum_{k=1}^{T_0} \alpha_{0ikj}\delta_{ik} + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\delta_{ik}}.        (30)

Using the orthogonality conditions, the inequality (30) becomes

    \Big( \frac{g_0(t)}{n} \Big)^{n} \ge \prod_{i=1}^{n} \prod_{r=0}^{l} \prod_{k=T_{r-1}+1}^{T_r} \Big( \frac{c_{rik}}{\delta_{ik}} \sum_{s=T_{r-1}+1}^{T_r} \delta_{is} \Big)^{\delta_{ik}} = \nu(\delta),

i.e.,

    g_0(t) \ge n \, \sqrt[n]{\nu(\delta)}.

This completes the proof.

Theorem 3.2.2. Suppose that the constrained PGP (23) is super-consistent and that $t^*$ is a solution of the GP. Then the corresponding DP (25) is consistent and has a solution $\delta^*$ which satisfies $g_0(t^*) = n\,\sqrt[n]{\nu(\delta^*)}$ and

    \delta^*_{ik} = \frac{u_{ik}(t^*)}{g_{i0}(t^*)},  i = 1, 2, ..., n;  k = 1, 2, ..., T_0,
    \delta^*_{ik} = \frac{\lambda_{ir}(\delta^*)\, u_{ik}(t^*)}{g_{ir}(t^*)},  i = 1, 2, ..., n;  k = T_{r-1}+1, ..., T_r;  r = 1, 2, ..., l.

Proof. Since the GP is super-consistent, so is the associated CGP. Also, since the GP has a solution $t^* = (t^*_{i1}, t^*_{i2}, \ldots, t^*_{im})$, the associated CGP has a solution $x^* = (x^*_{i1}, x^*_{i2}, \ldots, x^*_{im})$ given by $x^* = \ln t^*$. According to the Karush-Kuhn-Tucker (K-K-T) conditions, there is a vector $\lambda^* = (\lambda^*_{i1}, \ldots, \lambda^*_{il})$ such that

    \lambda^*_{ir} \ge 0,        (31)
    \lambda^*_{ir} \big( h_{ir}(x^*) - 1 \big) = 0,        (32)
    \frac{\partial h_{i0}(x^*)}{\partial x_{ij}} + \sum_{r=1}^{l} \lambda^*_{ir} \frac{\partial h_{ir}(x^*)}{\partial x_{ij}} = 0.        (33)

Because $t = e^x$, it follows that for $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$ and $r = 0, 1, 2, \ldots, l$,

    \frac{\partial h_{ir}(x)}{\partial x_{ij}} = \frac{\partial h_{ir}(x)}{\partial t_{ij}} \frac{\partial t_{ij}}{\partial x_{ij}} = \frac{\partial g_{ir}(t)}{\partial t_{ij}}\, e^{x_{ij}}.

So condition (33) is equivalent to

    \frac{\partial g_{i0}(t^*)}{\partial t_{ij}} + \sum_{r=1}^{l} \lambda^*_{ir} \frac{\partial g_{ir}(t^*)}{\partial t_{ij}} = 0        (34)

since $e^{x_{ij}} > 0$. Because $t^*_{ij} > 0$, (34) is equivalent to

    t^*_{ij} \frac{\partial g_{i0}(t^*)}{\partial t_{ij}} + \sum_{r=1}^{l} \lambda^*_{ir}\, t^*_{ij} \frac{\partial g_{ir}(t^*)}{\partial t_{ij}} = 0.        (35)

Now the terms of $g_{ir}(t)$ are of the form

    u_{ik}(t) = c_{rik} \prod_{j=1}^{m} t_{ij}^{\alpha_{rikj}}.

It is clear that

    t^*_{ij} \frac{\partial g_{ir}(t^*)}{\partial t_{ij}} = \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj}\, u_{ik}(t^*),  i = 1, 2, ..., n;  j = 1, 2, ..., m;  r = 1, 2, ..., l.

So (35) implies

    \sum_{k=1}^{T_0} \alpha_{0ikj}\, u_{ik}(t^*) + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \lambda^*_{ir}\, \alpha_{rikj}\, u_{ik}(t^*) = 0,  i = 1, 2, ..., n;  j = 1, 2, ..., m.

If we divide the last equation by $g_{i0}(t^*) = \sum_{k=1}^{T_0} u_{ik}(t^*)$, we obtain

    \sum_{k=1}^{T_0} \alpha_{0ikj} \frac{u_{ik}(t^*)}{g_{i0}(t^*)} + \sum_{r=1}^{l} \sum_{k=T_{r-1}+1}^{T_r} \alpha_{rikj} \frac{\lambda^*_{ir}\, u_{ik}(t^*)}{g_{i0}(t^*)} = 0.

Define the vector $\delta^*$ by

    \delta^*_{ik} = \frac{u_{ik}(t^*)}{g_{i0}(t^*)},  i = 1, 2, ..., n;  k = 1, 2, ..., T_0,
    \delta^*_{ik} = \frac{\lambda^*_{ir}\, u_{ik}(t^*)}{g_{i0}(t^*)},  i = 1, 2, ..., n;  k = T_{r-1}+1, ..., T_r;  r = 1, 2, ..., l.

Note that $\delta^*_{ik} > 0$ ($i = 1, 2, \ldots, n$; $k = 1, 2, \ldots, T_0$) and that, for each $r \ge 1$, either $\delta^*_{ik} > 0$ for all $k$ with $T_{r-1}+1 \le k \le T_r$ or $\delta^*_{ik} = 0$ for all $k$ with $T_{r-1}+1 \le k \le T_r$, according as the corresponding Karush-Kuhn-Tucker multiplier $\lambda^*_{ir}$ ($i = 1, 2, \ldots, n$; $r = 1, 2, \ldots, l$) is positive or zero. Also observe that the vector $\delta^*$ satisfies all of the $m$ exponent constraint equations in the DP as well as the constraint

    \sum_{k=1}^{T_0} \delta^*_{ik} = \sum_{k=1}^{T_0} \frac{u_{ik}(t^*)}{g_{i0}(t^*)} = \frac{g_{i0}(t^*)}{g_{i0}(t^*)} = 1.        (36)

Therefore $\delta^* = (\delta^*_{i1}, \ldots, \delta^*_{iT_0})$ is a feasible vector for the DP; hence the DP is consistent. The Karush-Kuhn-Tucker multipliers $\lambda^*_{ir}$ are related to the corresponding $\lambda_{ir}(\delta^*)$ in the DP as follows:

    \lambda_{ir}(\delta^*) = \sum_{k=T_{r-1}+1}^{T_r} \delta^*_{ik} = \sum_{k=T_{r-1}+1}^{T_r} \frac{\lambda^*_{ir}\, u_{ik}(t^*)}{g_{i0}(t^*)} = \frac{\lambda^*_{ir}\, g_{ir}(t^*)}{g_{i0}(t^*)},  i = 1, 2, ..., n;  r = 1, 2, ..., l.

The Karush-Kuhn-Tucker condition (32) becomes

    \lambda^*_{ir} \big( g_{ir}(t^*) - 1 \big) = 0,        (37)

so we get $\lambda^*_{ir}\, g_{ir}(t^*) = \lambda^*_{ir}$. Therefore, for $r = 1, 2, \ldots, l$ and $k = T_{r-1}+1, \ldots, T_r$, we see that

    \delta^*_{ik} = \frac{\lambda^*_{ir}\, u_{ik}(t^*)}{g_{i0}(t^*)} = \frac{\lambda^*_{ir}\, g_{ir}(t^*)\, u_{ik}(t^*)}{g_{i0}(t^*)\, g_{ir}(t^*)} = \frac{\lambda_{ir}(\delta^*)\, u_{ik}(t^*)}{g_{ir}(t^*)}.        (38)

The fact that $\delta^*$ is feasible for the DP and $t^*$ is feasible for the GP implies that $g_0(t^*) \ge n\,\sqrt[n]{\nu(\delta^*)}$ because of the Primal-Dual Inequality. Moreover, the values of $\delta^*_{ik}$ ($i = 1, 2, \ldots, n$; $r = 0, 1, \ldots, l$; $k = T_{r-1}+1, \ldots, T_r$) are precisely those that force equality in the Arithmetic-Geometric Mean Inequalities that were used to obtain the Duality Inequality. Finally, equation (37) shows that either $g_{ir}(t^*) = 1$ or $\lambda^*_{ir} = 0$ ($i = 1, 2, \ldots, n$; $r = 1, 2, \ldots, l$), and equation (38) shows that $\lambda^*_{ir} = 0$ if and only if $\lambda_{ir}(\delta^*) = 0$ ($i = 1, 2, \ldots, n$; $r = 1, 2, \ldots, l$). This means that the values of $\delta^*_{ik}$ actually force equality in the Primal-Dual Inequality. This completes the proof.

4. Application

4.1. GP problem (Grain-box problem)

Problem 1a. It has been decided to shift grain from a warehouse to a factory in an open rectangular box of length $x_1$ meters, width $x_2$ meters and height $x_3$ meters. The bottom, sides and ends of the box cost $80, $10 and $20 per m^2 respectively. It costs $1 for each round trip of the box. Assuming that the box will have no salvage value, find the minimum cost of transporting 80 m^3 of grain.

As stated, this problem can be formulated as the unconstrained GP problem

    Minimize  \frac{80}{x_1 x_2 x_3} + 40 x_2 x_3 + 20 x_1 x_3 + 80 x_1 x_2,        (39)

where $x_1 > 0$, $x_2 > 0$, $x_3 > 0$. It is an unconstrained posynomial geometric programming problem. The optimal dimensions of the box are $x_1 = 1$ m, $x_2 = 1/2$ m, $x_3 = 2$ m, and the minimum total cost of this problem is $200.

Problem 1b. Suppose that we now consider the following variant of the above problem (a similar discussion is given by Duffin, Peterson and Zener [5] in their book). It is required that the sides and bottom of the box be made from scrap material, but only 4 m^2 of this scrap material is available. This variation of the problem leads to the following constrained posynomial GP problem:

    Minimize  \frac{80}{x_1 x_2 x_3} + 40 x_2 x_3
    subject to  2 x_1 x_3 + x_1 x_2 \le 4,        (40)

where $x_1 > 0$, $x_2 > 0$, $x_3 > 0$. Solving this constrained GP problem, we obtain the minimum total cost and the optimal dimensions of the box: $x_1 = 1.58$ m, $x_2 = 1.25$ m, $x_3 = 0.63$ m.
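Problem 1a has degree of difficulty 4 - (3 + 1) = 0, so the normality and orthogonality conditions determine the dual variables uniquely and the whole problem reduces to linear algebra. The following NumPy sketch (not part of the paper) carries out the computation:

```python
# Grain-box Problem 1a: degree of difficulty 0, so the dual is a
# square linear system and the primal follows from the log-linear
# recovery equations.
import numpy as np

C = np.array([80.0, 40.0, 20.0, 80.0])      # term coefficients
alpha = np.array([[-1.0, -1.0, -1.0],       # 80 / (x1 x2 x3): trip cost
                  [0.0,  1.0,  1.0],        # 40 x2 x3: two ends
                  [1.0,  0.0,  1.0],        # 20 x1 x3: two sides
                  [1.0,  1.0,  0.0]])       # 80 x1 x2: bottom

# Normality row plus one orthogonality row per variable: a 4x4 system.
M = np.vstack([np.ones(4), alpha.T])
delta = np.linalg.solve(M, np.array([1.0, 0.0, 0.0, 0.0]))

nu = float(np.prod((C / delta) ** delta))   # dual objective = minimum cost

# Primal recovery: each term equals delta_k * nu; take logs and solve.
x = np.exp(np.linalg.lstsq(alpha, np.log(delta * nu / C), rcond=None)[0])
```

The dual solution is delta = (2/5, 1/5, 1/5, 1/5), the minimum cost is nu = 200, and the recovered dimensions are 1 m by 1/2 m by 2 m, matching the figures quoted above.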

4.2. MGP problem (Multi-grain-box problem)

Problem 2a. Suppose that grain is to be shifted from a warehouse to a factory in a finite number (say $n$) of open rectangular boxes of lengths $x_{1i}$ meters, widths $x_{2i}$ meters and heights $x_{3i}$ meters ($i = 1, 2, \ldots, n$). The bottom, sides and ends of each box cost $a_i$, $b_i$ and $c_i$ per m^2 respectively. It costs $1 for each round trip of a box. Assuming that the boxes will have no salvage value, find the minimum cost of transporting $d_i$ m^3 of grain.

As stated, this problem can be formulated as the unconstrained MGP problem

    Minimize  g(x) = \sum_{i=1}^{n} \Big( \frac{d_i}{x_{1i} x_{2i} x_{3i}} + a_i x_{1i} x_{2i} + 2 b_i x_{1i} x_{3i} + 2 c_i x_{2i} x_{3i} \Big),        (41)

where $x_{1i} > 0$, $x_{2i} > 0$, $x_{3i} > 0$ ($i = 1, 2, \ldots, n$). In particular, here we assume that the $d_i$ m^3 of grain are transported by different open rectangular boxes whose bottom, side and end costs are given in Table 5. The problem is solved and the optimal results are shown in Table 6.

Table 5. Input data for Problem 2a
    ith Box    a_i ($/m^2)    b_i ($/m^2)    c_i ($/m^2)    d_i (m^3)
    i = 1
    i = 2

Table 6. Optimal results of Problem 2a
    ith Box    x_1i (meter)    x_2i (meter)    x_3i (meter)    g(x*) ($)
    i = 1
    i = 2

Problem 2b. Suppose that we now consider the following variant of the above problem. It is required that the sides and bottoms of the boxes be made from scrap material, but only $w$ m^2 of this scrap material is available. This variation of the problem leads to the following constrained modified geometric program:

    Minimize  f(x) = \sum_{i=1}^{n} \Big( \frac{d_i}{x_{1i} x_{2i} x_{3i}} + 2 c_i x_{2i} x_{3i} \Big)
    subject to  \sum_{i=1}^{n} \big( 2 x_{1i} x_{3i} + x_{1i} x_{2i} \big) \le w,        (42)

where $x_{1i} > 0$, $x_{2i} > 0$, $x_{3i} > 0$ ($i = 1, 2, \ldots, n$). In particular, here we assume that the $d_i$ m^3 of grain are transported by different open rectangular boxes. The end cost of each box is $c_i$ per m^2, and the amounts of grain transported by the boxes are $d_i$ m^3. The input data of this problem are given in Table 7. It is a constrained posynomial MGP problem. The optimal results of this problem are shown in Table 8.

Table 7. Input data for Problem 2b
    ith Box    c_i ($/m^2)    d_i (m^3)    w (m^2)
    i = 1
    i = 2

Table 8. Optimal results of Problem 2b
    ith Box    x_1i (meter)    x_2i (meter)    x_3i (meter)    g(x*) ($)
    i = 1
    i = 2

5. Conclusion

We have discussed unconstrained/constrained GP and MGP problems with negative or positive integral degree of difficulty. The LS and MM methods are described to solve GP/MGP problems with negative degree of difficulty. Here, the modified form of the geometric programming method has been demonstrated, and for that purpose the necessary theorems have been derived. This technique can be applied to different decision-making problems (in inventory [10] and other areas).

Acknowledgement

The authors wish to acknowledge the helpful comments and suggestions of the referees.

References

1. M. O. Abou-El-Ata and K. A. M. Kotb, Multi-item EOQ inventory model with varying holding cost under two restrictions: a geometric programming approach, Production Planning & Control 8(5) (1997).
2. H. Basirzadeh, A. V. Kayyad and Effati, An approach for solving nonlinear problems, J. Appl. Math. & Computing 9(2) (2002).
3. C. S. Beightler and D. T. Philips, Applied Geometric Programming, Wiley, New York, 1976.
4. G. S. Beightler, D. T. Phillips and D. J. Wilde, Foundations of Optimization, Prentice-Hall, Englewood Cliffs, NJ, 1979.

5. R. J. Duffin, E. L. Peterson and C. M. Zener, Geometric Programming: Theory and Applications, Wiley, New York.
6. J. Ecker, Geometric programming: methods, computations and applications, SIAM Rev. 22(3) (1980).
7. A. M. A. Hariri and M. O. Abou-El-Ata, Multi-item EOQ inventory model with varying holding cost under two restrictions: a geometric programming approach, Production Planning & Control 8(5) (1997).
8. Sahidul Islam and T. K. Roy, An economic production quantity model with flexibility and reliability consideration and demand dependent unit cost under a constraint: geometric programming approach, Proceedings of National Symposium, Department of Mathematics, University of Kalyani, India, 21-22 March, 2002.
9. C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice-Hall, Inc., Englewood Cliffs, NJ.
10. A. K. Pal and B. Mandal, An EOQ model for deteriorating inventory with alternating demand rate, Korean J. Comput. & Appl. Math. 4(2) (1997).
11. A. L. Peressini, F. E. Sullivan and J. J. Uhl, Jr., The Mathematics of Nonlinear Programming, Springer-Verlag, New York.
12. F. Scheid, Numerical Analysis, McGraw-Hill, New York.
13. S. B. Sinha, A. Biswas and M. P. Biswal, Geometric programming problems with negative degrees of difficulty, European Journal of Operational Research.
14. C. Zener, Engineering Design by Geometric Programming, Wiley.

Sahidul Islam received his B.Sc. (Hons.) from Rampurhat College under the University of Burdwan and his M.Sc. at Jadavpur University. In 2003, he received a Junior Research Fellowship from the Council of Scientific and Industrial Research at Bengal Engineering College (A Deemed University). His research interests focus on optimization problems in information and fuzzy systems and related OR models. Some papers have been published and accepted in proceedings of national seminars.

T. K. Roy is a senior lecturer in Mathematics at Bengal Engineering College (A Deemed University).
His areas of research are fuzzy set theory and applications of OR in information and fuzzy systems. He has published papers in various international and national journals, including European Journal of Operational Research, Computers and Operations Research, Production Planning and Control, and OPSEARCH. Some papers have also been published and accepted in proceedings of international and national seminars.

Department of Mathematics, Bengal Engineering College (A Deemed University), Howrah, West Bengal, India
sahidul@yahoo.ca
roy t k@yahoo.co.in


More information

Generalization Of The Secant Method For Nonlinear Equations

Generalization Of The Secant Method For Nonlinear Equations Applied Mathematics E-Notes, 8(2008), 115-123 c ISSN 1607-2510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generalization Of The Secant Method For Nonlinear Equations Avram Sidi

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

The Concept of Neutrosophic Less Than or Equal To: A New Insight in Unconstrained Geometric Programming

The Concept of Neutrosophic Less Than or Equal To: A New Insight in Unconstrained Geometric Programming 72 The Concept of Neutrosophic Less Than or Equal To: A New Insight in Unconstrained Geometric Programming Florentin Smarandache 1, Huda E. Khalid 2, Ahmed K. Essa 3, Mumtaz Ali 4 1 Department of Mathematics,

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

Research Article Geometric Programming Approach to an Interactive Fuzzy Inventory Problem

Research Article Geometric Programming Approach to an Interactive Fuzzy Inventory Problem Advances in Operations Research Volume 2011, Article ID 521351, 17 pages doi:10.1155/2011/521351 Research Article Geometric Programming Approach to an Interactive Fuzzy Inventory Problem Nirmal Kumar Mandal

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

NONDIFFERENTIABLE SECOND-ORDER MINIMAX MIXED INTEGER SYMMETRIC DUALITY

NONDIFFERENTIABLE SECOND-ORDER MINIMAX MIXED INTEGER SYMMETRIC DUALITY J. Korean Math. Soc. 48 (011), No. 1, pp. 13 1 DOI 10.4134/JKMS.011.48.1.013 NONDIFFERENTIABLE SECOND-ORDER MINIMAX MIXED INTEGER SYMMETRIC DUALITY Tilak Raj Gulati and Shiv Kumar Gupta Abstract. In this

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

i.e., into a monomial, using the Arithmetic-Geometric Mean Inequality, the result will be a posynomial approximation!

i.e., into a monomial, using the Arithmetic-Geometric Mean Inequality, the result will be a posynomial approximation! Dennis L. Bricker Dept of Mechanical & Industrial Engineering The University of Iowa i.e., 1 1 1 Minimize X X X subject to XX 4 X 1 0.5X 1 Minimize X X X X 1X X s.t. 4 1 1 1 1 4X X 1 1 1 1 0.5X X X 1 1

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Support Vector Machines: Maximum Margin Classifiers

Support Vector Machines: Maximum Margin Classifiers Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind

More information

Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control

Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical

More information

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006

Quiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006 Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in

More information

More on Lagrange multipliers

More on Lagrange multipliers More on Lagrange multipliers CE 377K April 21, 2015 REVIEW The standard form for a nonlinear optimization problem is min x f (x) s.t. g 1 (x) 0. g l (x) 0 h 1 (x) = 0. h m (x) = 0 The objective function

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

4y Springer NONLINEAR INTEGER PROGRAMMING

4y Springer NONLINEAR INTEGER PROGRAMMING NONLINEAR INTEGER PROGRAMMING DUAN LI Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong Shatin, N. T. Hong Kong XIAOLING SUN Department of Mathematics Shanghai

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington LARGE SCALE GEOMETRIC PROGRAMMING: AN APPLICATION IN CODING THEORY Yaw O. Chang and John K. Karlof Mathematical Sciences Department The University of North Carolina at Wilmington This research was partially

More information

ICS-E4030 Kernel Methods in Machine Learning

ICS-E4030 Kernel Methods in Machine Learning ICS-E4030 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 28. September, 2016 Juho Rousu 28. September, 2016 1 / 38 Convex optimization Convex optimisation This

More information

SHARP BOUNDS FOR PROBABILITIES WITH GIVEN SHAPE INFORMATION

SHARP BOUNDS FOR PROBABILITIES WITH GIVEN SHAPE INFORMATION R u t c o r Research R e p o r t SHARP BOUNDS FOR PROBABILITIES WITH GIVEN SHAPE INFORMATION Ersoy Subasi a Mine Subasi b András Prékopa c RRR 4-006, MARCH, 006 RUTCOR Rutgers Center for Operations Research

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Construction of `Wachspress type' rational basis functions over rectangles

Construction of `Wachspress type' rational basis functions over rectangles Proc. Indian Acad. Sci. (Math. Sci.), Vol. 110, No. 1, February 2000, pp. 69±77. # Printed in India Construction of `Wachspress type' rational basis functions over rectangles P L POWAR and S S RANA Department

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Application of Harmonic Convexity to Multiobjective Non-linear Programming Problems

Application of Harmonic Convexity to Multiobjective Non-linear Programming Problems International Journal of Computational Science and Mathematics ISSN 0974-3189 Volume 2, Number 3 (2010), pp 255--266 International Research Publication House http://wwwirphousecom Application of Harmonic

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Lagrangian Duality. Richard Lusby. Department of Management Engineering Technical University of Denmark

Lagrangian Duality. Richard Lusby. Department of Management Engineering Technical University of Denmark Lagrangian Duality Richard Lusby Department of Management Engineering Technical University of Denmark Today s Topics (jg Lagrange Multipliers Lagrangian Relaxation Lagrangian Duality R Lusby (42111) Lagrangian

More information

OPTIMALITY CONDITIONS AND DUALITY FOR SEMI-INFINITE PROGRAMMING INVOLVING SEMILOCALLY TYPE I-PREINVEX AND RELATED FUNCTIONS

OPTIMALITY CONDITIONS AND DUALITY FOR SEMI-INFINITE PROGRAMMING INVOLVING SEMILOCALLY TYPE I-PREINVEX AND RELATED FUNCTIONS Commun. Korean Math. Soc. 27 (2012), No. 2, pp. 411 423 http://dx.doi.org/10.4134/ckms.2012.27.2.411 OPTIMALITY CONDITIONS AND DUALITY FOR SEMI-INFINITE PROGRAMMING INVOLVING SEMILOCALLY TYPE I-PREINVEX

More information

1. f(β) 0 (that is, β is a feasible point for the constraints)

1. f(β) 0 (that is, β is a feasible point for the constraints) xvi 2. The lasso for linear models 2.10 Bibliographic notes Appendix Convex optimization with constraints In this Appendix we present an overview of convex optimization concepts that are particularly useful

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS

OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS OPTIMALITY OF RANDOMIZED TRUNK RESERVATION FOR A PROBLEM WITH MULTIPLE CONSTRAINTS Xiaofei Fan-Orzechowski Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

Optimization in Information Theory

Optimization in Information Theory Optimization in Information Theory Dawei Shen November 11, 2005 Abstract This tutorial introduces the application of optimization techniques in information theory. We revisit channel capacity problem from

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract

More information

Research Article A Deterministic Inventory Model of Deteriorating Items with Two Rates of Production, Shortages, and Variable Production Cycle

Research Article A Deterministic Inventory Model of Deteriorating Items with Two Rates of Production, Shortages, and Variable Production Cycle International Scholarly Research Network ISRN Applied Mathematics Volume 011, Article ID 657464, 16 pages doi:10.540/011/657464 Research Article A Deterministic Inventory Model of Deteriorating Items with

More information

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Karush-Kuhn-Tucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Geometric Programming for Communication Systems

Geometric Programming for Communication Systems Geometric Programming for Communication Systems Mung Chiang Electrical Engineering Department, Princeton University Tutorial Presentation August 2005 Outline Part I: Geometric Programming What s GP? Why

More information

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Constrained optimization

Constrained optimization Constrained optimization In general, the formulation of constrained optimization is as follows minj(w), subject to H i (w) = 0, i = 1,..., k. where J is the cost function and H i are the constraints. Lagrange

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA

INVEX FUNCTIONS AND CONSTRAINED LOCAL MINIMA BULL. AUSRAL. MAH. SOC. VOL. 24 (1981), 357-366. 9C3 INVEX FUNCIONS AND CONSRAINED LOCAL MINIMA B.D. CRAVEN If a certain weakening of convexity holds for the objective and all constraint functions in a

More information

Lagrangian Duality. Evelien van der Hurk. DTU Management Engineering

Lagrangian Duality. Evelien van der Hurk. DTU Management Engineering Lagrangian Duality Evelien van der Hurk DTU Management Engineering Topics Lagrange Multipliers Lagrangian Relaxation Lagrangian Duality 2 DTU Management Engineering 42111: Static and Dynamic Optimization

More information

Lecture 9 Sequential unconstrained minimization

Lecture 9 Sequential unconstrained minimization S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities

More information

arxiv: v1 [nlin.cd] 23 Oct 2009

arxiv: v1 [nlin.cd] 23 Oct 2009 Lyapunov exponent and natural invariant density determination of chaotic maps: An iterative maximum entropy ansatz arxiv:0910.4561v1 [nlin.cd] 23 Oct 2009 1. Introduction Parthapratim Biswas Department

More information

OPTIMISATION /09 EXAM PREPARATION GUIDELINES

OPTIMISATION /09 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and

More information

GYULA FARKAS WOULD ALSO FEEL PROUD

GYULA FARKAS WOULD ALSO FEEL PROUD GYULA FARKAS WOULD ALSO FEEL PROUD PABLO GUERRERO-GARCÍA a, ÁNGEL SANTOS-PALOMO a a Department of Applied Mathematics, University of Málaga, 29071 Málaga, Spain (Received 26 September 2003; In final form

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Le Song Machine Learning I CSE 6740, Fall 2013 Naïve Bayes classifier Still use Bayes decision rule for classification P y x = P x y P y P x But assume p x y = 1 is fully factorized

More information

Optimization with nonnegativity constraints

Optimization with nonnegativity constraints Optimization with nonnegativity constraints Arie Verhoeven averhoev@win.tue.nl CASA Seminar, May 30, 2007 Seminar: Inverse problems 1 Introduction Yves van Gennip February 21 2 Regularization strategies

More information

1. J. Abadie, Nonlinear Programming, North Holland Publishing Company, Amsterdam, (1967).

1. J. Abadie, Nonlinear Programming, North Holland Publishing Company, Amsterdam, (1967). 1. J. Abadie, Nonlinear Programming, North Holland Publishing Company, Amsterdam, (1967). 2. K. J. Arrow, L. Hurwicz and H. Uzawa, Studies in Linear and Nonlinear Programming, Stanford, California, (1958).

More information

Support Vector Machines

Support Vector Machines Wien, June, 2010 Paul Hofmarcher, Stefan Theussl, WU Wien Hofmarcher/Theussl SVM 1/21 Linear Separable Separating Hyperplanes Non-Linear Separable Soft-Margin Hyperplanes Hofmarcher/Theussl SVM 2/21 (SVM)

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Summary Notes on Maximization

Summary Notes on Maximization Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject

More information

GEOMETRIC PROGRAMMING APPROACHES OF RELIABILITY ALLOCATION

GEOMETRIC PROGRAMMING APPROACHES OF RELIABILITY ALLOCATION U.P.B. Sci. Bull., Series A, Vol. 79, Iss. 3, 2017 ISSN 1223-7027 GEOMETRIC PROGRAMMING APPROACHES OF RELIABILITY ALLOCATION Constantin Udrişte 1, Saad Abbas Abed 2 and Ionel Ţevy 3 One of the important

More information

New Class of duality models in discrete minmax fractional programming based on second-order univexities

New Class of duality models in discrete minmax fractional programming based on second-order univexities STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 5, September 017, pp 6 77. Published online in International Academic Press www.iapress.org) New Class of duality models

More information

Note on Newton Interpolation Formula

Note on Newton Interpolation Formula Int. Journal of Math. Analysis, Vol. 6, 2012, no. 50, 2459-2465 Note on Newton Interpolation Formula Ramesh Kumar Muthumalai Department of Mathematics D.G. Vaishnav College, Arumbaam Chennai-600106, Tamilnadu,

More information

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS

ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS ON LICQ AND THE UNIQUENESS OF LAGRANGE MULTIPLIERS GERD WACHSMUTH Abstract. Kyparisis proved in 1985 that a strict version of the Mangasarian- Fromovitz constraint qualification (MFCQ) is equivalent to

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg MVE165/MMG631 Overview of nonlinear programming Ann-Brith Strömberg 2015 05 21 Areas of applications, examples (Ch. 9.1) Structural optimization Design of aircraft, ships, bridges, etc Decide on the material

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Neutrosophic Goal Geometric Programming Problem based on Geometric Mean Method and its Application

Neutrosophic Goal Geometric Programming Problem based on Geometric Mean Method and its Application Neutrosophic Sets and Systems, Vol. 19, 2018 80 University of New Mexico Neutrosophic Goal Geometric Programming Problem based on Geometric Sahidul Islam, Tanmay Kundu Department of Mathematics, University

More information

Extended Monotropic Programming and Duality 1

Extended Monotropic Programming and Duality 1 March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where

More information

APPLYING SIGNED DISTANCE METHOD FOR FUZZY INVENTORY WITHOUT BACKORDER. Huey-Ming Lee 1 and Lily Lin 2 1 Department of Information Management

APPLYING SIGNED DISTANCE METHOD FOR FUZZY INVENTORY WITHOUT BACKORDER. Huey-Ming Lee 1 and Lily Lin 2 1 Department of Information Management International Journal of Innovative Computing, Information and Control ICIC International c 2011 ISSN 1349-4198 Volume 7, Number 6, June 2011 pp. 3523 3531 APPLYING SIGNED DISTANCE METHOD FOR FUZZY INVENTORY

More information

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79 Topic one: Production line profit maximization subject to a production rate constraint c 21 Chuan Shi Topic one: Line optimization : 22/79 Production line profit maximization The profit maximization problem

More information

Constraint qualifications for nonlinear programming

Constraint qualifications for nonlinear programming Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Linear Programming. H. R. Alvarez A., Ph. D. 1

Linear Programming. H. R. Alvarez A., Ph. D. 1 Linear Programming H. R. Alvarez A., Ph. D. 1 Introduction It is a mathematical technique that allows the selection of the best course of action defining a program of feasible actions. The objective of

More information

Homework Set #6 - Solutions

Homework Set #6 - Solutions EE 15 - Applications of Convex Optimization in Signal Processing and Communications Dr Andre Tkacenko JPL Third Term 11-1 Homework Set #6 - Solutions 1 a The feasible set is the interval [ 4] The unique

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information
