
Sensitivity analysis of linear programming in the presence of correlation among right-hand side parameters or objective function coefficients

Amir Shahin, Payam Hanafizadeh & Milan Hladík

Central European Journal of Operations Research (Cent Eur J Oper Res)


ORIGINAL PAPER

Sensitivity analysis of linear programming in the presence of correlation among right-hand side parameters or objective function coefficients

Amir Shahin · Payam Hanafizadeh · Milan Hladík

© Springer-Verlag Berlin Heidelberg 2014

Abstract In the literature, sensitivity analysis of linear programming (LP) has been widely studied. However, only some very simple and special cases were considered when right-hand side (RHS) parameters or objective function coefficients (OFC) correlate with each other. In the presence of correlation, when one parameter changes, other parameters vary, too. Here principal component analysis is used to convert the correlation of the LP homogeneous parameters into functional relations. Then, using the derivatives of the functional relations, it is possible to perform classical sensitivity analysis for the LP with correlation among RHS parameters or OFC. The validity of the devised method is corroborated by open-literature examples having correlation among homogeneous parameters.

Keywords Correlated parameters · Linear programming · Perturbation analysis · Principal component analysis · Sensitivity analysis

A. Shahin: Faculty of Industrial and Mechanical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
P. Hanafizadeh (corresponding author): Department of Industrial Management, Allameh Tabataba'i University, Tehran, Iran; hanafizadeh@gmail.com
M. Hladík: Department of Applied Mathematics, Charles University in Prague, Malostranske nam. 25, Prague, Czech Republic; hladik@kam.mff.cuni.cz

1 Introduction

In this paper we present a novel method for performing sensitivity analysis of LP problems when there is correlation among right-hand side (RHS) parameters or objective function coefficients (OFC). We derive a new algorithm for this sensitivity analysis whose only required information is the mean value vector (sometimes called the nominal values) and the covariance matrix of the perturbed parameters. The covariance matrix is assumed to be known; it can be obtained from historical data or expert opinion, for instance. Although real-world problems often involve homogeneous parameters that are correlated with each other, none of the methods of sensitivity analysis and parametric programming available in the literature considers correlation among the parameters of an LP problem.

As a motivation, imagine the stock market of some high-tech companies such as Apple, Samsung, and Sony. If, for instance, Samsung introduces a new cell phone, this can certainly affect its share price, and when the price of Samsung shares increases, the prices of Apple or Sony shares can change to some degree. If the LP problem for an investor is to buy some shares of each company, the OFC for this problem are the share prices of these companies. When the coefficient related to Samsung varies, the coefficients of Apple and Sony vary, too. This variation has a correlation basis, and to study sensitivity analysis in this situation one needs to take the correlation of the OFC into account. The same can happen with RHS parameters. For example, in the transportation problem the balance constraints state that the sum of demands must equal the sum of supplies. Now imagine that in a specific problem the demands of different sources are correlated with each other; performing sensitivity analysis for this problem then requires taking the correlation of the RHS parameters into account.

In the next section, the literature on sensitivity analysis since its creation is reviewed. In Sect. 3, the methodology of the new method for performing sensitivity analysis in the presence of correlation among homogeneous parameters is introduced. Section 4 develops the detailed formulae and is divided into two subsections: correlation among RHS parameters and correlation among OFC are discussed in the first and second subsections, respectively. After that, two examples from the literature are solved with the proposed methods in Sect. 5; in the first example the RHS parameters are correlated, while in the second the OFC are correlated. After performing sensitivity analysis by the proposed methods on the two examples, the examples are solved by the simplex method a second time and the results are compared. In Sect. 6, the limitations of this study are mentioned. Finally, the conclusion and possible future research directions can be found in Sect. 7.

2 Literature review

Linear programming (LP) is a tool for optimizing a linear objective function subject to linear constraints. The constraints can concern the amount of money in hand, technological limitations, available time, the number of people working somewhere, etc.

LP has many applications, such as portfolio optimization, network design, Internet traffic, call routing, vehicle routing, transportation, scheduling, and the diet problem, among many others. Dantzig invented the simplex algorithm for solving an LP problem in 1947 (Dantzig 1963); the simplex method finds an optimal solution for the LP problem it is given. After the simplex method, interior point methods (IPMs) were introduced for solving large-scale problems; they solve problems in polynomial time (Roos et al. 1997; Wright 1997).

Soon after the simplex algorithm was proposed by Dantzig, the question arose of what happens to the optimal solution (the optimal basis, in the case of the simplex method) if different parameters of the LP problem are perturbed. This question may come from a manager who wants to know how the optimal solution changes if demands, capacities, prices, etc. vary; there might also be measurement or forecasting errors that prompt such questions. Questions like this are studied in the area of sensitivity analysis or parametric programming (Murty 1983; Bazaraa et al. 2009).

Koltai and Terlaky (2000) categorized sensitivity analysis into three types, and later Ghaffari Hadigheh and Terlaky (2006b) gave them descriptive names: basis invariancy for the first type, and support set invariancy and optimal partition invariancy for the second and third types, respectively. Here we briefly discuss these invariancies. The first type finds a range for the perturbed parameter so that the optimal basis remains optimal; in other words, it finds an interval in which the optimal basis stays constant. The shortcoming of this type of sensitivity analysis is that it does not work well and gives misleading answers for degenerate optimal solutions (Murty 1983; Jansen et al. 1997; Roos et al. 1997; Illés et al. 2000; Bazaraa et al. 2009). The second type, support set invariancy, determines the values of the perturbed parameter such that the support set of the optimal solution does not change: if the variables (in a given primal and dual optimal solution) are positive they remain positive, and if they are zero they stay zero (Koltai and Terlaky 2000). The last type, optimal partition invariancy, as mentioned in Koltai and Terlaky (2000), concerns the rate of variation in the values of some of the model parameters such that the optimal objective function value stays the same. Since the simplex algorithm provides an optimal basis, the first type of sensitivity analysis is applicable to it. Contrary to the simplex method, IPMs do not provide an optimal basis but a nonbasic feasible solution; hence, the second and third types of sensitivity analysis are the invariancies useful for IPMs. For more on optimal partition invariancy and support set invariancy, see Greenberg (1994), Berkelaar et al. (1997), Greenberg (2000), Ghaffari Hadigheh and Terlaky (2006a), Dehghan et al. (2007), Ghaffari Hadigheh et al. (2007), Ghaffari Hadigheh A and Ghaffari Hadigheh H (2008), and Hladík (2010).

One of the main shortcomings of ordinary sensitivity analysis is that if two or more parameters of an LP model vary simultaneously,

it no longer works. Bradley et al. (1977) and Wendell (1982, 1984, 1985) tackled this drawback in two completely different ways: Bradley et al. (1977) introduced the 100 % rule to handle simultaneous perturbations of more than one parameter of an LP model within the ranges obtained from ordinary sensitivity analysis, while Wendell (1982, 1984, 1985) presented the tolerance approach. We briefly discuss these two areas in turn.

In 1977, Bradley et al. (1977) introduced the 100 % rule to deal with simultaneous variations of the RHS parameters or OFC within the ranges assigned by ordinary sensitivity analysis. If this rule is satisfied, then the optimal solution remains unchanged. The noteworthy point is that if the rule is not satisfied, it is not known for sure whether the current optimal solution changes or not. Some time after Bradley et al. (1977), Cai and Cai (1997) introduced further applications of the 100 % rule for a constraint, a basic variable, and a nonbasic variable in the technological coefficient matrix of an LP problem.

On the other hand, Wendell (1982, 1984, 1985) introduced a tolerance approach, known as the symmetric tolerance approach. In this method, parameters of the model can vary simultaneously and independently. It presents just one tolerance to the user, which can be applied to all the required parameters. Although the symmetric tolerance is easy to use, its range is usually small and often zero for medium- or large-sized problems (Ward and Wendell 1990; Wendell 1997); furthermore, the user loses a large amount of information about the model. The latter shortcoming was the main reason for Arsham and Oblak (1990), Wondolowski (1991), and Wendell (1992) to extend the symmetric tolerance to a non-symmetric one. Additionally, Ward and Wendell (1990) considered sensitivity analysis for the case when both RHS parameters and OFC vary simultaneously, or for any row/column of the technological coefficient matrix; variations of this matrix have also interested Gal (1995).

Later, Arsham (2007) tried to unify different sensitivity analyses. His approach is based on identifying constraints that are binding at a given optimal solution. He then puts the RHS parameters in a parametric system of equations and solves it; the solution is substituted into the remaining constraints, and in this way the linearly constrained sensitivity region is obtained. This region is suitable for diverse kinds of sensitivity analyses, and the method proposed by Arsham (2007) can handle degenerate problems, too. After Arsham, Hladík (2011) presented another method for the unification of tolerance analysis which is more easily programmable: Hladík (2011) computes a new tolerance quotient within which all RHS parameters or OFC can vary simultaneously and independently while the optimality invariancy is preserved. A notable point about this method is that it is applicable not only to the classical basis invariancy, but also to the newer invariancies such as support set or optimal partition. Furthermore, Hladík (2011, 2012) proves that determining the largest tolerances is NP-hard. Extensions of the tolerance approach to the multiobjective case can be found in Sitarz (2010) and Hladík and Sitarz (2013), among others. Moreover, the tolerance approach has been applied to different real-world LP problems since its early days. For some examples, see Labbé et al. (1991), Doustdargholi et al. (2009), and Singh (2010) for applications of the tolerance

approach in a facility location problem, a transportation problem, and data envelopment analysis, respectively.

Now recall the what-happens-if question that arose for managers very soon after the introduction of the simplex method. Answering such questions requires studying sensitivity analysis or parametric programming. Since sensitivity analysis was discussed above, we now consider parametric programming. In the mid-1950s, Gass and Saaty initiated parametric LP (Saaty and Gass 1954; Gass and Saaty 1955). In classical parametric programming, linear variations in one of the RHS parameters or OFC are studied when they depend on deterministic parameters; until now the concentration has been on algorithmic processes for parametric programming. For a study of parametric programming with IPMs for LP, see Roos et al. (1997). Additionally, Gal (1995) focuses on the conditions when a single row/column of the technological coefficient matrix depends on a specific parameter. Multiparametric programming refers to the case when more than one of the RHS parameters or OFC vary independently and simultaneously. Variations in RHS parameters or OFC can also be extended to the case when both vary simultaneously (Ward and Wendell 1990). Ghaffari Hadigheh and Terlaky (2006b) studied the special case when there are perturbations in the RHS parameters or OFC or both at the same time. Moreover, Greenberg (2000) discusses simultaneous perturbations of both RHS parameters and OFC when both the primal and dual LP problems appear in canonical form. Greenberg (1994), Ghaffari Hadigheh and Terlaky (2006a,b), and Dehghan et al. (2007) use different invariancies for the analysis of single-parametric or bi-parametric programming. Hladík (2010) has also used support set invariancy and optimal partition invariancy for multiparametric programming in LP problems; he extends the results of single-parametric programming to situations where there are multiple parameters in the RHS of the constraints and in the objective function. The method presented in Hladík (2010) makes it possible to study more complex perturbations of LP than simple classical sensitivity analysis allows.

In 2011, Hanafizadeh et al. (2011) introduced a new method for LP sensitivity analysis. Their sensitivity analysis considers local perturbation and is performed in the presence of a functional linear or nonlinear relation among RHS parameters or OFC. This is the first time in LP sensitivity analysis that RHS parameters or OFC depend on each other through a functional relation. In the last paragraph of that paper, under the title of conclusion and future research directions, one finds: "The research on sensitivity analysis could also benefit from the extension of the current method to multi-dimensional functional relations. Finally, the proposed approach can be extended to cases with probabilistic information on homogenous parameters (e.g., when the parameters are correlated then the change of a specific parameter causes the simultaneous change of others)." This is where the story of the current paper begins. Before presenting the main method and continuing with examples, we briefly review the pros and cons of different post-optimality analyses in Table 1.

Table 1 Pros and cons of different post-optimality analyses

Ordinary sensitivity analysis
  Pros: Presents one range for variations of each RHS parameter or OFC; considers changes in only one of the parameters. This sensitivity analysis is done by the simplex method.
  Cons: No simultaneous perturbation of RHS parameters or OFC is considered.

The 100 % rule
  Pros: Considers simultaneous perturbation of RHS parameters or OFC. If the rule is satisfied, then the current optimal basis remains unchanged.
  Cons: If the rule is not satisfied, it is not predictable whether the current optimal basis remains optimal or not.

Symmetric tolerance analysis
  Pros: Parameters of the model can vary simultaneously and independently. It presents only one parameter to the user, applicable to all RHS parameters or OFC. It is also simple and easy to use.
  Cons: The tolerance is usually small, and for medium- and large-scale problems it is often zero. It loses a lot of information about the model. Moreover, finding the largest tolerance is an NP-hard problem.

Non-symmetric tolerance analysis
  Pros: Considers an individual percentage change for every RHS parameter or OFC. Some tolerances may be large.
  Cons: Some tolerances may not be so large.

Parametric programming
  Pros: Useful if RHS parameters or OFC depend on only one deterministic parameter.
  Cons: No simultaneous perturbation of RHS parameters or OFC is considered.

Multiparametric programming
  Pros: Applicable if RHS parameters or OFC vary simultaneously and independently.
  Cons: Correlation or functional relations among RHS parameters or OFC are not considered.

Sensitivity analysis in the presence of a functional relation
  Pros: Considers perturbation among RHS parameters or OFC for a linear functional relation, or local perturbation for a nonlinear functional relation among RHS parameters or OFC.
  Cons: Correlation or multi-dimensional functional relations among RHS parameters or OFC are not considered, and no simultaneous variation of RHS parameters or OFC is considered.

Sensitivity analysis in the presence of a covariance matrix (the method studied here)
  Pros: When RHS parameters or OFC are correlated with each other and the only available information is the covariance matrix and the mean value vector, this method is applicable for sensitivity analysis.
  Cons: Correlation between the elements of the technological coefficient matrix is not considered. Furthermore, correlation among RHS parameters and OFC at the same time is not studied here.

3 Methodology

To perform sensitivity analysis in the presence of correlation among homogeneous parameters, namely RHS parameters or OFC, we first solve the LP problem with the simplex method and find an optimal solution and objective function value. We then use the covariance matrix as the input of a multivariate statistical method, principal component analysis (PCA), in order to convert the correlated parameters (RHS parameters or OFC) into independent ones defined by functional relations. Then, knowing the variation in one of the RHS parameters or OFC and using the derivatives of the functional relations, we can find the amount of variation in the other RHS parameters or OFC. Having the variations of all RHS parameters or OFC, we apply the 100 % rule to test whether the current optimal basis stays optimal. If the rule is not satisfied we cannot go further and cannot say whether the current optimal basis changes; this is the limitation of the 100 % rule, which gives no verdict on the current optimal basis when the rule fails. However, if the 100 % rule is satisfied, we know that the current optimal solution remains optimal even though all the RHS parameters or OFC have changed. The next step is to calculate the values of the objective function and the basic variables (for variations in RHS parameters) or the shadow prices (for variations in OFC). We do this through relations derived from sensitivity analysis in the presence of a functional relation (Hanafizadeh et al. 2011), which provides formulas for calculating the new values we are looking for.

4 Sensitivity analysis with correlation among homogeneous parameters

In this section a method for performing sensitivity analysis of LP with correlation among RHS parameters or OFC is presented. Note that only local perturbations are considered in this study: the interval in which the parameters change is very small, and the parameters fall within an ε-neighborhood of the estimated parameters. Moreover, a basic optimal non-degenerate solution is assumed to be available; if the optimal solution is non-basic, the presented method may not be applicable. Now consider the following LP problem:

$$\min\; c^T x \quad \text{s.t.} \quad Ax = b,\; x \ge 0, \tag{1}$$

where $x$ is a vector of $n$ variables, $A$ is an $m \times n$ constraint matrix, $c$ is the OFC vector with $n$ components, and $b$ is the RHS vector with $m$ parameters. We use small bold italic letters to denote vectors, capital bold letters to denote matrices, small bold italic letters with a superscript (e.g. $j$) to denote the $j$th column of a matrix, and small italic letters with subscripts to denote scalars. This section consists of two subsections: correlation among RHS parameters is discussed in the first, while correlation among OFC is studied in detail in the second.

4.1 Correlation among RHS parameters

The purpose of this subsection is to enable the reader to perform sensitivity analysis of an LP in the presence of correlation among RHS parameters when one of the RHS parameter values changes. By sensitivity analysis we mean answering the question of what happens if one of the parameters changes, and with what slope the change affects the objective function and the optimal solution. Sensitivity analysis calculates the slopes of the changes in the objective function and the optimal solution with respect to small changes in one of the parameters; the point is to calculate these slopes without solving the problem again, just by using sensitivity analysis relations (Bazaraa et al. 2009).

To solve problem (1) we use the average values of the vector $b$ to calculate the optimal values of the basic variables and the objective function. If problem (1) is solved based on the primary estimate of the average vector $\bar{b}$, the optimal value and the optimal solution are

$$z^* = c_B^T B^{-1} \bar{b}, \qquad x_B^* = B^{-1} \bar{b},$$

where $z^*$ is the optimal value of the objective function, and $c_B$ and $x_B^*$ denote the OFC vector of the basic variables and the optimal solution of the basic variables, respectively. Generally speaking, the subscript $B$ refers to the basic variables and the superscript $*$ to the optimal value. In addition, $B$ and $N$ are the constraint matrices of the basic and non-basic variables, respectively. If there is no relationship between the components of $b$, we have

$$\frac{dz^*}{db_i} = c_B^T B^{-1} e_i = v_i \quad \text{for } i = 1, 2, \ldots, m,$$

where $v_i$ is the value of the dual variable corresponding to the shadow price of the $i$th constraint. Moreover,

$$\frac{dx_{B_i}^*}{db_k} = r_{i,k} \quad \text{for } k = 1, 2, \ldots, m,$$

where $r_{i,k}$ is the $(i,k)$th entry of $B^{-1}$ in the optimal simplex tableau. The two relations above give the sensitivities of the objective function and of the value of a basic variable to changes in the parameter $b_i$, respectively.

For performing sensitivity analysis on LP problems in the presence of correlation among RHS parameters we follow the steps shown in Fig. 1. It is assumed that the $m$ parameters of the vector $b$ are correlated. Because of this correlation, a change in one parameter makes the other parameters change, too; therefore, the conventional sensitivity analysis methods are not applicable in the presence of correlation among RHS parameters. As mentioned in Hanafizadeh et al. (2011), when there is a functional relation between RHS parameters, one can perform sensitivity analysis under such conditions by the relations presented in that paper.
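Before turning to the correlated case, the classical quantities just defined are easy to compute. The following is a minimal sketch, not from the paper (which uses Lindo); the LP data and the basis index set are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for problem (1): min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])                 # nominal RHS vector (b-bar)
c = np.array([-3.0, -2.0, 0.0, 0.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print("z* =", res.fun, " x* =", res.x)
print("shadow prices v =", res.eqlin.marginals)   # dz*/db_i = v_i

# With the optimal basis known (here x1, x2), the slopes of the basic
# variables are the entries of B^{-1}: dx*_{B_i}/db_k = r_{i,k}.
basic = [0, 1]
B_inv = np.linalg.inv(A[:, basic])
print("r_{i,k} = B^{-1} =\n", B_inv)
```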

[Fig. 1 The flowchart of sensitivity analysis in the presence of correlation among RHS parameters. The steps: solve problem (1) using $\bar{b}$; compute the $\lambda_{ij}$ through PCA from the covariance matrix of $b$; given $\Delta b_1$, calculate the other $\Delta b_i$ using relation (8); test the 100 % rule. If it is satisfied, the current optimal solution remains optimal and $\Delta z^*$ and the $\Delta x_{B_i}^*$ are calculated using relations (11) and (12); otherwise, sensitivity analysis cannot be performed due to the constraints of the 100 % rule.]

Here, PCA is used to construct functional relations between the uncertain correlated parameters. In other words, sensitivity analysis with correlated parameters is turned into sensitivity analysis with a functional relation, and Hanafizadeh et al. (2011) explain how to perform such a sensitivity analysis.

Principal component analysis is a multivariate statistical method. When there are $m$ correlated parameters (here called $b$), PCA uses eigenvalues and defines $m$ relations between the $m$ correlated parameters such that the results of the relations are independent. Note that a covariance matrix is always positive semidefinite; when no parameter is an exact linear combination of the others it is positive definite, and its eigenvalues are then strictly positive (Johnson and Wichern 2007). By using PCA, $m$ independent parameters are obtained, called $u_i$ here. These parameters are as follows:

$$\begin{aligned} u_1 &= \lambda_{11} b_1 + \lambda_{12} b_2 + \cdots + \lambda_{1m} b_m \\ u_2 &= \lambda_{21} b_1 + \lambda_{22} b_2 + \cdots + \lambda_{2m} b_m \\ &\;\vdots \\ u_m &= \lambda_{m1} b_1 + \lambda_{m2} b_2 + \cdots + \lambda_{mm} b_m \end{aligned} \tag{2}$$

where the $\lambda_{ij}$ are the entries of the $j$th eigenvector, obtained through the covariance matrix of $b$ in PCA. The $j$th eigenvector is

$$\left( \lambda_{1j}, \lambda_{2j}, \ldots, \lambda_{mj} \right)^T.$$

In the relation above, the average values of the $b_i$ are employed to produce the independent parameters $u_i$. In other words, the $u_i$ correspond to the averages of the $b_i$; the only difference is that the $u_i$ are independent while the $b_i$ are correlated with each other. The independence of the $u_i$ comes from PCA, where it is proven (Johnson and Wichern 2007). Moreover, as can clearly be seen, there are $m$ variables and $m$ relations, so system (2) is square; this is because we aim to replace the $b_i$ with the $u_i$, and thus the numbers of $b_i$ and $u_i$ must be the same.

One application of PCA is reducing the size of the data at hand (Johnson and Wichern 2007); that is, PCA can be used to decrease the dimensions of a problem. However, we do not use PCA for this purpose. As Johnson and Wichern (2007) explain, when there are, say, $m$ components reproducing the total variability of a system, much of this variability can usually be captured by only $k$ components ($k < m$); one then says there are $k$ principal components and they can replace the initial $m$ variables of the original data set. Furthermore, principal components are special linear combinations of the $m$ random variables (Johnson and Wichern 2007); unlike the initial variables, the combinations are independent. We apply PCA to find new independent linear combinations corresponding to the correlated data, and then, instead of keeping only the first $k$ of the $m$ components, we use all of the derived components.

Suppose that one of the parameters of the vector $b$ in relation (2) changes; for example, the average of $b_k$ changes by $\Delta b_k$. First, the effect of this change on the other parameters of the vector $b$ should be calculated. Because of the correlation among the RHS parameters, when one parameter of $b$ varies, the other parameters automatically vary, too; this causes simultaneous variation of all the uncertain correlated RHS parameters. Additionally, PCA converts the correlated random parameters of the vector $b$ into the independent parameters of the vector $u$. When we substitute nominal values for the components of $b$, such as $\bar{b}$, in order to solve an LP problem, the parameters $u_i$ ($i = 1, 2, \ldots, m$) take values, too; so far, the $u_i$ have taken values corresponding to the nominal values of $b$. On the other hand, the equations of PCA play the role of functional relations, and since the optimal solution is calculated with respect to the nominal values of $b$, when one of the parameters of $b$ varies, the functional relations force at least one other parameter of $b$ to change, too. For instance, on a variation of $b_k$, the variation of another $b_i$ ($i \neq k$) must be calculated so that the functional relations (2) remain satisfied. Using relation (2) we aim to calculate $\Delta b_i$ ($i \neq k$). For simplicity, and without loss of generality, assume that $b_1$ varies. The variations of the other RHS parameters are then calculated as follows:

$$\Delta b_i = \frac{db_i}{db_1} \Delta b_1 \tag{3}$$

So we only need to calculate the $\frac{db_i}{db_1}$ for the above relation:

$$\frac{db_i}{db_1} = \frac{\partial b_i}{\partial u_1}\,\frac{\partial u_1}{\partial b_1} + \frac{\partial b_i}{\partial u_2}\,\frac{\partial u_2}{\partial b_1} + \cdots + \frac{\partial b_i}{\partial u_m}\,\frac{\partial u_m}{\partial b_1} \tag{4}$$

From relation (2) we have $u_k = \lambda_{k1} b_1 + \lambda_{k2} b_2 + \cdots + \lambda_{km} b_m$, and dividing by $\lambda_{ki}$,

$$\frac{1}{\lambda_{ki}} u_k = \frac{\lambda_{k1}}{\lambda_{ki}} b_1 + \frac{\lambda_{k2}}{\lambda_{ki}} b_2 + \cdots + \frac{\lambda_{k,i-1}}{\lambda_{ki}} b_{i-1} + b_i + \frac{\lambda_{k,i+1}}{\lambda_{ki}} b_{i+1} + \cdots + \frac{\lambda_{km}}{\lambda_{ki}} b_m \;\Rightarrow\; \frac{\partial b_i}{\partial u_k} = \frac{1}{\lambda_{ki}} \tag{5}$$

and also from relation (2),

$$u_k = \lambda_{k1} b_1 + \lambda_{k2} b_2 + \cdots + \lambda_{km} b_m \;\Rightarrow\; \frac{\partial u_k}{\partial b_1} = \lambda_{k1}. \tag{6}$$

Substituting (5) and (6) into (4), we find

$$\frac{db_i}{db_1} = \frac{\lambda_{11}}{\lambda_{1i}} + \frac{\lambda_{21}}{\lambda_{2i}} + \cdots + \frac{\lambda_{m1}}{\lambda_{mi}}. \tag{7}$$

Therefore, using (7) in (3), the following relation is obtained:

$$\Delta b_i = \left[ \frac{\lambda_{11}}{\lambda_{1i}} + \frac{\lambda_{21}}{\lambda_{2i}} + \cdots + \frac{\lambda_{m1}}{\lambda_{mi}} \right] \Delta b_1 \tag{8}$$

If in problem (1) the value of $b_1$ changes by $\Delta b_1$, then the variations of the other parameters ($\Delta b_i$ for all $i = 2, 3, \ldots, m$) can be calculated using relation (8). In an LP problem, when the RHS parameters vary, feasible solutions can become infeasible, and the optimal value of the objective function and the optimal values of the basic variables can change. The 100 % rule should now be used to find out whether the current basis of the optimal solution of problem (1) remains optimal; if the answer is yes, $\Delta z^*$ and the $\Delta x_{B_i}^*$ need to be calculated.
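The step from the covariance matrix to relation (8) is straightforward to mechanize. The following is a minimal sketch, not from the paper (which uses MATLAB), under the assumption that numpy's eigenvector columns play the role of the paper's $\lambda_{ij}$; the covariance matrix is an illustrative example.

```python
import numpy as np

# Illustrative covariance matrix of the RHS vector b (m = 3).
Sigma = np.array([[4.0, 1.2, 0.6],
                  [1.2, 3.0, 0.8],
                  [0.6, 0.8, 2.0]])

# PCA step: np.linalg.eigh returns eigenvectors as columns, matching the
# paper's convention that the j-th eigenvector is (lambda_1j, ..., lambda_mj)^T.
_, lam = np.linalg.eigh(Sigma)          # lam[i, j] stands for lambda_ij

m = Sigma.shape[0]
delta_b1 = 0.002                        # assumed perturbation of b_1

# Relation (7): db_i/db_1 = sum_k lambda_k1 / lambda_ki for i = 2..m
# (it implicitly assumes all lambda_ki are nonzero), then relation (8).
slopes = np.array([np.sum(lam[:, 0] / lam[:, i]) for i in range(1, m)])
delta_b_rest = slopes * delta_b1
print("db_i/db_1 (i = 2..m):", slopes)
print("Delta b_i (i = 2..m):", delta_b_rest)
```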

Let us now introduce a functional relation among the components of the RHS parameters, $G(b) = 0$. The domain of $G$ is an $\varepsilon$-neighborhood of $\bar{b}$, namely $N_\varepsilon(b^0) = \{ b : \|b - b^0\| < \varepsilon \}$, where $b^0$ is the initial estimate of the perturbed parameters, called the nominal value, and the $\varepsilon$-neighborhood of $\bar{b}$ is not empty. The range space of this functional relation is one-dimensional. We assume that the functional relation $G(b) = 0$ is continuous and differentiable with respect to $b$. Based on sensitivity analysis in the presence of a functional relation (Hanafizadeh et al. 2011), when there is a functional relation between the components of the vector $b$, say

$$G(b_1, \ldots, b_m) = 0,$$

the classical methods of sensitivity analysis do not provide correct results. This is because, with a functional relation between the RHS parameters, when one parameter varies the others vary, too, and simultaneous variation of several RHS parameters in the presence of a functional relation is not considered in the classical methods. Under such a functional relation, when the variation of the parameter $b_1$ is $\Delta b_1$, Hanafizadeh et al. (2011) derive these relations:

$$\Delta z^* = \frac{dz^*}{db_1} \Delta b_1 = \left[ v_1 + v_2 \frac{db_2}{db_1} + \cdots + v_m \frac{db_m}{db_1} \right] \Delta b_1 \tag{9}$$

$$\Delta x_{B_i}^* = \frac{dx_{B_i}^*}{db_1} \Delta b_1 = \left[ r_{i,1} + r_{i,2} \frac{db_2}{db_1} + \cdots + r_{i,m} \frac{db_m}{db_1} \right] \Delta b_1 \tag{10}$$

where $v_i$ is the value of the dual variable corresponding to the shadow price of the $i$th constraint, $r_{i,k}$ is the $(i,k)$th entry of $B^{-1}$ in the optimal simplex tableau, and the $\frac{db_i}{db_1}$ are found from $G(b_1, \ldots, b_m) = 0$. Here, however, relation (2) defines the functional relations between the correlated RHS parameters of the LP problem, so we must find the $\frac{db_i}{db_1}$ from relation (2); as mentioned earlier, relation (7) yields $\frac{db_i}{db_1}$ from the group of functional relations (2).

It should be noted that the rate of change of $x_{B_i}^*$ and $z^*$ with respect to the $b_i$ is not a smooth function. In other words, this rate is not continuously differentiable: neither relation (9) nor (10) is defined at break points, and they are valid only within smooth ranges. This is why the method considers only small variations of the parameters around their nominal values, i.e., on an $\varepsilon$-neighborhood of $\bar{b}$. Substituting relation (7) into relations (9) and (10) yields:

$$\Delta z^* = \left[ v_1 + v_2 \left( \frac{\lambda_{11}}{\lambda_{12}} + \frac{\lambda_{21}}{\lambda_{22}} + \cdots + \frac{\lambda_{m1}}{\lambda_{m2}} \right) + \cdots + v_m \left( \frac{\lambda_{11}}{\lambda_{1m}} + \frac{\lambda_{21}}{\lambda_{2m}} + \cdots + \frac{\lambda_{m1}}{\lambda_{mm}} \right) \right] \Delta b_1 \tag{11}$$

$$\Delta x_{B_i}^* = \left[ r_{i,1} + r_{i,2} \left( \frac{\lambda_{11}}{\lambda_{12}} + \frac{\lambda_{21}}{\lambda_{22}} + \cdots + \frac{\lambda_{m1}}{\lambda_{m2}} \right) + \cdots + r_{i,m} \left( \frac{\lambda_{11}}{\lambda_{1m}} + \frac{\lambda_{21}}{\lambda_{2m}} + \cdots + \frac{\lambda_{m1}}{\lambda_{mm}} \right) \right] \Delta b_1 \tag{12}$$
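Relations (11) and (12) simply combine the PCA slopes with the classical sensitivities. A minimal self-contained sketch follows; the shadow prices and $B^{-1}$ entries are assumed illustrative values, not outputs of a real model.

```python
import numpy as np

Sigma = np.array([[4.0, 1.2, 0.6],       # same illustrative covariance
                  [1.2, 3.0, 0.8],       # matrix as in the previous sketch
                  [0.6, 0.8, 2.0]])
_, lam = np.linalg.eigh(Sigma)
delta_b1 = 0.002

# Assumed shadow prices v_i and B^{-1} entries r_{i,k}; in practice these
# come from the optimal simplex tableau of problem (1).
v = np.array([-1.0, -1.0, 0.5])
R = np.array([[ 1.0, -0.2, 0.1],
              [-0.5,  0.8, 0.0],
              [ 0.2,  0.1, 0.9]])

m = len(v)
slope = np.ones(m)                       # db_1/db_1 = 1
slope[1:] = [np.sum(lam[:, 0] / lam[:, i]) for i in range(1, m)]

delta_z  = (v @ slope) * delta_b1        # relation (11)
delta_xB = (R @ slope) * delta_b1        # relation (12)
print("Delta z*   =", delta_z)
print("Delta x*_B =", delta_xB)
```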

4.2 Correlation among OFC

The aim of this subsection is to perform sensitivity analysis on an LP problem in the presence of correlation among OFC when one of the OFC values changes. The definition of sensitivity analysis was given in the previous subsection; the only difference is that when the OFC vary, the shadow prices must be calculated as well. To solve problem (1) we use the average values of the vector $c$ in order to calculate the optimal values of the basic variables and the objective function. If problem (1) is solved based on the primary estimate of the average vector $\bar{c}$, we get the optimal value and the optimal solution

$$z^* = c_B^T B^{-1} \bar{b}, \qquad x_B^* = B^{-1} \bar{b}, \qquad z_j - c_j = c_B y^j - c_j = \sum_i c_{B_i} y_{ij} - c_j,$$

where $z_j - c_j$ is the shadow price of the $j$th RHS parameter (resource). Shadow prices are found in the objective function row, under the columns of the slack variables of the RHS parameters, in the optimal tableau of the simplex method. The number of shadow prices in each problem equals the number of constraints. They are the economic values of the resources: each represents the variation in the optimal objective function value due to a unit variation in the corresponding RHS parameter. Put another way, the value of $z_j - c_j$ is the change in the optimal objective function value if $b_j$ changes by one unit. If there is no relationship between the components of the vector $c$, we have

$$\frac{dz^*}{dc_k} = \begin{cases} x_{B_i}^* & \text{if } c_k = c_{B_i} \\ 0 & \text{if } c_k \text{ corresponds to a nonbasic variable} \end{cases} \tag{13}$$

$$\frac{d(z_j - c_j)}{dc_k} = \begin{cases} 0 & \text{if } k \text{ corresponds to a nonbasic variable and } k \neq j \\ y_{k,j} & \text{if } k \text{ corresponds to a basic variable and } k \neq j \\ 0 & \text{if } k \text{ corresponds to a basic variable and } k = j \\ -1 & \text{if } k \text{ corresponds to a nonbasic variable and } k = j \end{cases} \tag{14}$$

where $y_{k,j}$ is the entry of $B^{-1} N$ in the optimal simplex tableau for the nonbasic variables, in the row of $c_k$ and under the column of $z_j - c_j$. The two equations above give the sensitivities of the objective function and of the shadow prices to the variation of one objective function coefficient (say $c_k$), respectively.
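As a tiny illustration of relation (13), here is a sketch with an assumed basis; the index set and values are placeholders, not taken from the paper.

```python
import numpy as np

# Assumed optimal data for a 4-variable LP: x1, x2 basic with values 2, 2.
basic = [0, 1]
x_B = np.array([2.0, 2.0])

def dz_dc(k):
    """Relation (13): dz*/dc_k = x*_{B_i} if c_k is basic, else 0."""
    return x_B[basic.index(k)] if k in basic else 0.0

print([dz_dc(k) for k in range(4)])   # -> [2.0, 2.0, 0.0, 0.0]
```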

[Fig. 2 The flowchart of sensitivity analysis in the presence of correlation among OFC. The steps: solve problem (1) using $\bar{c}$; compute the $\gamma_{ij}$ through PCA from the covariance matrix of $c$; given $\Delta c_1$, calculate the other $\Delta c_j$ using relation (21); test the 100 % rule. If it is satisfied, the current optimal solution remains optimal and $\Delta z^*$ and the $\Delta(z_j - c_j)$ are calculated using relations (24) and (25); otherwise, sensitivity analysis cannot be performed due to the constraints of the 100 % rule.]

For performing sensitivity analysis on LP problems in the presence of correlation among OFC, the steps in Fig. 2 are followed. In this part, we assume that the $n$ parameters of the vector $c$ are correlated. Thus, if one component of $c$ changes, the correlation of the OFC causes all other components to vary, too; this is why the ordinary sensitivity analysis methods do not work when such correlation among OFC exists. Just as in the previous subsection, when there is a functional relation between OFC, the relations derived in Hanafizadeh et al. (2011) enable us to perform sensitivity analysis. In this section, PCA is applied in order to build functional relations between the uncertain OFC. In other words, we first turn sensitivity analysis with correlated OFC into sensitivity analysis with functional relations; second, by applying the relations of sensitivity analysis with a functional relation taken from the literature (Hanafizadeh et al. 2011), we become able to perform such a sensitivity analysis. Similarly to the previous subsection, we use PCA to obtain $n$ independent relations between the $n$ correlated parameters of the OFC. We call the $n$ independent parameters $w_j$. These parameters are found through the following:

$$\begin{aligned} w_1 &= \gamma_{11} c_1 + \gamma_{12} c_2 + \cdots + \gamma_{1n} c_n \\ w_2 &= \gamma_{21} c_1 + \gamma_{22} c_2 + \cdots + \gamma_{2n} c_n \\ &\;\vdots \\ w_n &= \gamma_{n1} c_1 + \gamma_{n2} c_2 + \cdots + \gamma_{nn} c_n \end{aligned} \tag{15}$$

where the $\gamma_{ij}$ are the entries of the $j$th eigenvector, obtained through the covariance matrix of $c$ in PCA. The $j$th eigenvector is

$$\left( \gamma_{1j}, \gamma_{2j}, \ldots, \gamma_{nj} \right)^T.$$

In relation (15), the average vector of the $c_j$ is used to produce the $w_j$. As in Sect. 4.1, we emphasize that the independence of the PCA parameters (the $w_j$ here) is proven in Johnson and Wichern (2007). Additionally, relation (15) comprises $n$ relations in $n$ variables, which makes system (15) square; the reason is that our purpose is to replace the $c_j$ with the $w_j$, so the number of $c_j$ must equal the number of $w_j$. Recall briefly that one of the main goals of PCA is reducing the dimensions of a problem, whereas in this paper we use PCA neither for decreasing the size nor for identifying principal components; we use it to find linear combinations whose outcomes are independent of each other, while the primary variables $c_j$ are correlated. In this subsection we use PCA to find independent linear combinations corresponding to the correlated $c_j$, and then, instead of following the usual PCA procedure of choosing the first $k$ principal components, we keep all of the independent components.

Suppose that one of the components of the vector $c$ in relation (15) changes; for instance, the average of $c_k$ changes by $\Delta c_k$. First, the effect of this variation on the other components of $c$ should be found. Since there is correlation among the OFC, variation in one component of $c$ makes the other correlated components vary, too; this causes simultaneous variation of all the uncertain correlated OFC. In addition, exactly as in the previous subsection, PCA converts the correlated random parameters of the vector $c$ into the independent parameters of the vector $w$. When we substitute nominal values for the components of $c$, such as $\bar{c}$, to solve an LP problem, the parameters $w_j$ ($j = 1, 2, \ldots, n$) take values corresponding to the nominal values of $c$. On the other hand, the equations of PCA play the role of functional relations, and since the optimal solution is calculated with respect to the nominal values of $c$, a variation in one parameter of $c$ forces at least one other parameter of $c$ to change, too. For example, on a variation of $c_k$, the variations of the other $c_j$ ($j \neq k$) must be calculated so that the functional relations (15) remain satisfied. When $c_k$ varies, the other components $c_j$, $j \neq k$, are easily calculated by means of relation (15). For simplicity, and without loss of generality, we assume that $c_1$ is the component of $c$ that varies. Thus, one can calculate the variations of the other OFC as follows:

$$\Delta c_j = \frac{dc_j}{dc_1} \Delta c_1 \tag{16}$$

Thus, we just need to calculate the $\frac{dc_j}{dc_1}$ in relation (16):

$$\frac{dc_j}{dc_1} = \frac{\partial c_j}{\partial w_1}\,\frac{\partial w_1}{\partial c_1} + \frac{\partial c_j}{\partial w_2}\,\frac{\partial w_2}{\partial c_1} + \cdots + \frac{\partial c_j}{\partial w_n}\,\frac{\partial w_n}{\partial c_1} \tag{17}$$

From relation (15) we have $w_k = \gamma_{k1} c_1 + \gamma_{k2} c_2 + \cdots + \gamma_{kn} c_n$, and dividing by $\gamma_{kj}$,

$$\frac{1}{\gamma_{kj}} w_k = \frac{\gamma_{k1}}{\gamma_{kj}} c_1 + \frac{\gamma_{k2}}{\gamma_{kj}} c_2 + \cdots + \frac{\gamma_{k,j-1}}{\gamma_{kj}} c_{j-1} + c_j + \frac{\gamma_{k,j+1}}{\gamma_{kj}} c_{j+1} + \cdots + \frac{\gamma_{kn}}{\gamma_{kj}} c_n \;\Rightarrow\; \frac{\partial c_j}{\partial w_k} = \frac{1}{\gamma_{kj}} \tag{18}$$

and also from relation (15),

$$w_k = \gamma_{k1} c_1 + \gamma_{k2} c_2 + \cdots + \gamma_{kn} c_n \;\Rightarrow\; \frac{\partial w_k}{\partial c_1} = \gamma_{k1}. \tag{19}$$

Substituting relations (18) and (19) into (17), we have

$$\frac{dc_j}{dc_1} = \frac{\gamma_{11}}{\gamma_{1j}} + \frac{\gamma_{21}}{\gamma_{2j}} + \cdots + \frac{\gamma_{n1}}{\gamma_{nj}}. \tag{20}$$

Using relation (20) in (16) then gives relation (21):

$$\Delta c_j = \left[ \frac{\gamma_{11}}{\gamma_{1j}} + \frac{\gamma_{21}}{\gamma_{2j}} + \cdots + \frac{\gamma_{n1}}{\gamma_{nj}} \right] \Delta c_1 \tag{21}$$

In problem (1), when the value of $c_1$ changes by $\Delta c_1$, one can find the variations of the other parameters of the vector $c$ ($\Delta c_j$ for all $j = 2, 3, \ldots, n$) through relation (21). In an LP problem, if the OFC vary, the current basis may lose optimality, and the optimal value of the objective function and the shadow prices can change. The 100 % rule is now used to determine whether the current basis of the optimal solution of problem (1) stays optimal under the variations of the OFC; if it stays optimal, then $\Delta z^*$ and the $\Delta(z_j - c_j)$ are to be calculated. In the same manner as the previous subsection, we introduce a functional relation among the components of the OFC, called $H(c) = 0$. The domain of $H$ is an $\varepsilon$-neighborhood of $\bar{c}$, namely $N_\varepsilon(c^0) = \{ c : \|c - c^0\| < \varepsilon \}$, where $c^0$ is the initial estimate of the perturbed parameters, called the nominal value, and the $\varepsilon$-neighborhood of $\bar{c}$ is not empty. The range space of this functional relation is one-dimensional. It is assumed that the functional relation $H(c) = 0$ is continuous and differentiable with respect to $c$.

Based on sensitivity analysis in the presence of a functional relation (Hanafizadeh et al. 2011), when there is a functional relation between the components of the vector $c$, such as

$$H(c_1, \ldots, c_n) = 0,$$

the classical sensitivity analysis methods do not work. The reason is that, in the presence of a functional relation between the OFC, if one coefficient varies, the others vary, too, and simultaneous variation of several OFC in the presence of a functional relation has not been studied in classical sensitivity analysis. For this functional relation, when the variation of the parameter $c_1$ is $\Delta c_1$, the following relations hold (Hanafizadeh et al. 2011):

$$\Delta z^* = \frac{dz^*}{dc_1} \Delta c_1 = \left[ \frac{\partial z^*}{\partial c_1} + \frac{\partial z^*}{\partial c_2} \frac{dc_2}{dc_1} + \cdots + \frac{\partial z^*}{\partial c_n} \frac{dc_n}{dc_1} \right] \Delta c_1 \tag{22}$$

$$\Delta (z_j - c_j) = \frac{d(z_j - c_j)}{dc_1} \Delta c_1 = \left[ \frac{\partial (z_j - c_j)}{\partial c_1} + \frac{\partial (z_j - c_j)}{\partial c_2} \frac{dc_2}{dc_1} + \cdots + \frac{\partial (z_j - c_j)}{\partial c_n} \frac{dc_n}{dc_1} \right] \Delta c_1 \tag{23}$$

In the two relations above, the $\frac{dc_j}{dc_1}$ are found from $H(c_1, \ldots, c_n) = 0$. Here, relation (15) defines the functional relations between the correlated OFC of the LP problem, so it is possible to calculate the $\frac{dc_j}{dc_1}$ from that relation; relation (20) has already provided the $\frac{dc_j}{dc_1}$ derived from relation (15), and we can use it here. As in the previous subsection, the rate of change of $z^*$ and $(z_j - c_j)$ with respect to the $c_j$ is not a smooth function; this rate is not continuously differentiable, relations (22) and (23) are not defined at break points, and they are valid only within smooth ranges. This is why the proposed method applies only to small variations of the parameters around their nominal values, i.e., on an $\varepsilon$-neighborhood of $\bar{c}$. Substituting relation (20) into relations (22) and (23), the following two relations are obtained:

$$\Delta z^* = \left[ \frac{\partial z^*}{\partial c_1} + \frac{\partial z^*}{\partial c_2} \left( \frac{\gamma_{11}}{\gamma_{12}} + \frac{\gamma_{21}}{\gamma_{22}} + \cdots + \frac{\gamma_{n1}}{\gamma_{n2}} \right) + \cdots + \frac{\partial z^*}{\partial c_n} \left( \frac{\gamma_{11}}{\gamma_{1n}} + \frac{\gamma_{21}}{\gamma_{2n}} + \cdots + \frac{\gamma_{n1}}{\gamma_{nn}} \right) \right] \Delta c_1 \tag{24}$$

$$\Delta (z_j - c_j) = \left[ \frac{\partial (z_j - c_j)}{\partial c_1} + \frac{\partial (z_j - c_j)}{\partial c_2} \left( \frac{\gamma_{11}}{\gamma_{12}} + \frac{\gamma_{21}}{\gamma_{22}} + \cdots + \frac{\gamma_{n1}}{\gamma_{n2}} \right) + \cdots + \frac{\partial (z_j - c_j)}{\partial c_n} \left( \frac{\gamma_{11}}{\gamma_{1n}} + \frac{\gamma_{21}}{\gamma_{2n}} + \cdots + \frac{\gamma_{n1}}{\gamma_{nn}} \right) \right] \Delta c_1 \tag{25}$$
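The OFC side mirrors the RHS computation. A minimal sketch of relations (20), (21) and (24) follows; the covariance matrix and the partial derivatives of $z^*$ are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Illustrative covariance matrix of the OFC vector c (n = 3).
Sigma_c = np.array([[0.9, 0.3, 0.1],
                    [0.3, 0.7, 0.2],
                    [0.1, 0.2, 0.5]])
_, gam = np.linalg.eigh(Sigma_c)       # gam[i, j] stands for gamma_ij
n = Sigma_c.shape[0]
delta_c1 = 0.001                       # assumed perturbation of c_1

# Relation (20): dc_j/dc_1 = sum_k gamma_k1 / gamma_kj (dc_1/dc_1 = 1).
slope = np.ones(n)
slope[1:] = [np.sum(gam[:, 0] / gam[:, j]) for j in range(1, n)]
delta_c = slope * delta_c1             # relation (21)

# Relation (24): the partials dz*/dc_j equal x*_{B_i} for basic cost
# coefficients and 0 otherwise (relation (13)); assumed values here.
dz_dc = np.array([2.0, 3.0, 0.0])
delta_z = (dz_dc @ slope) * delta_c1
print("Delta c  =", delta_c)
print("Delta z* =", delta_z)
```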

5 Numerical examples

To exemplify the proposed methods, two numerical examples are solved in this section. The results of our methods are then compared with the results of applying the desired parameter changes to the original LP problem and re-solving it by the simplex method (performed with Lindo).

5.1 Example 1

This example is taken from the LP exercises on the Purdue University website (engineering.purdue.edu/~engelb/abe565/week5.htm#blending, Example Blending Problem). A steel company is hired to make a new kind of steel with the properties given in Table 2. For producing this new type of steel, the materials listed in Table 3 are available to the company. A 1-ton (2,000 lb) batch must be blended so that the properties of Table 2 are satisfied. The question is to find the best blend of materials to use in the production of the new steel. Each decision variable is the amount of an available material to be used in the new steel mixture: let $x_1$ represent Iron 1, $x_2$ Iron 2, $x_3$ Fe-Sil 1, $x_4$ Fe-Sil 2, $x_5$ Alloy 1, $x_6$ Alloy 2, $x_7$ Alloy 3, $x_8$ Carbide, $x_9$ Steel 1, $x_{10}$ Steel 2, and $x_{11}$ Steel 3. The objective function is:

Minimize $z = 0.03 x_1 + 0.0645 x_2 + 0.065 x_3 + 0.061 x_4 + 0.10 x_5 + 0.13 x_6 + 0.119 x_7 + 0.08 x_8 + 0.021 x_9 + 0.02 x_{10} + 0.0195 x_{11}$

Table 2 Steel properties

Property            At least (%)   Not more than (%)
Carbon content      3.0            3.5
Chrome content      0.3            0.45
Manganese content   1.35           1.65
Silicon content     2.7            3.0

Table 3 Available material for producing the new type of steel

Material   Cost $/lb   Carbon (%)   Chrome (%)   Manganese (%)   Silicon (%)   Amount available
Iron 1     0.03        4.0          0            0.9             2.25          Unlimited
Iron 2     0.0645      0            10           4.5             15            Unlimited
Fe-Sil 1   0.065       0            0            0               45            Unlimited
Fe-Sil 2   0.061       0            0            0               42            Unlimited
Alloy 1    0.10        0            0            60              18            Unlimited
Alloy 2    0.13        0            20           9               30            Unlimited
Alloy 3    0.119       0            8            33              25            Unlimited
Carbide    0.08        15           0            0               30            20 lb
Steel 1    0.021       0.4          0            0.9             0             200 lb
Steel 2    0.02        0.1          0            0.3             0             200 lb
Steel 3    0.0195      0.1          0            0.3             0             200 lb

Upper and lower content limits (quality limits) make up the constraints. Material availability also contributes constraints, and the total weight of the materials used must be 2,000 lb. Therefore, the constraints are the following:

$0.04 x_1 + 0.15 x_8 + 0.004 x_9 + 0.001 x_{10} + 0.001 x_{11} \le (2{,}000)(0.035) = 70$
$0.10 x_2 + 0.20 x_6 + 0.08 x_7 \le (2{,}000)(0.0045) = 9$
$0.009 x_1 + 0.045 x_2 + 0.60 x_5 + 0.09 x_6 + 0.33 x_7 + 0.009 x_9 + 0.003 x_{10} + 0.003 x_{11} \le (2{,}000)(0.0165) = 33$
$0.0225 x_1 + 0.15 x_2 + 0.45 x_3 + 0.42 x_4 + 0.18 x_5 + 0.30 x_6 + 0.25 x_7 + 0.30 x_8 \le (2{,}000)(0.03) = 60$
$0.04 x_1 + 0.15 x_8 + 0.004 x_9 + 0.001 x_{10} + 0.001 x_{11} \ge (2{,}000)(0.03) = 60$
$0.10 x_2 + 0.20 x_6 + 0.08 x_7 \ge (2{,}000)(0.003) = 6$
$0.009 x_1 + 0.045 x_2 + 0.60 x_5 + 0.09 x_6 + 0.33 x_7 + 0.009 x_9 + 0.003 x_{10} + 0.003 x_{11} \ge (2{,}000)(0.0135) = 27$
$0.0225 x_1 + 0.15 x_2 + 0.45 x_3 + 0.42 x_4 + 0.18 x_5 + 0.30 x_6 + 0.25 x_7 + 0.30 x_8 \ge (2{,}000)(0.027) = 54$
$x_8 \le 20$
$x_9 \le 200$
$x_{10} \le 200$
$x_{11} \le 200$
$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 + x_9 + x_{10} + x_{11} = 2{,}000$
$x_1, x_2, \ldots, x_{11} \ge 0$

Assume that the 13 RHS parameters are correlated with each other. First, 13 arrays (100 × 1) of random numbers are created with MATLAB.
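As a cross-check, the blending model above can be rebuilt and solved directly. The paper uses Lindo, so the following scipy sketch is only an assumed equivalent; the twelve inequality rows and the one equality row are ordered to match the 13 RHS parameters.

```python
import numpy as np
from scipy.optimize import linprog

cost = [0.03, 0.0645, 0.065, 0.061, 0.10, 0.13, 0.119, 0.08,
        0.021, 0.02, 0.0195]

carbon    = [0.04, 0, 0, 0, 0, 0, 0, 0.15, 0.004, 0.001, 0.001]
chrome    = [0, 0.10, 0, 0, 0, 0.20, 0.08, 0, 0, 0, 0]
manganese = [0.009, 0.045, 0, 0, 0.60, 0.09, 0.33, 0, 0.009, 0.003, 0.003]
silicon   = [0.0225, 0.15, 0.45, 0.42, 0.18, 0.30, 0.25, 0.30, 0, 0, 0]

# Availability rows for x8..x11 (unit row vectors).
avail = [[1.0 if i == k else 0.0 for i in range(11)] for k in (7, 8, 9, 10)]

# Quality upper bounds, quality lower bounds (negated for <= form),
# then availability: 12 inequalities plus the batch-weight equality.
A_ub = [carbon, chrome, manganese, silicon,
        [-a for a in carbon], [-a for a in chrome],
        [-a for a in manganese], [-a for a in silicon]] + avail
b_ub = [70, 9, 33, 60, -60, -6, -27, -54, 20, 200, 200, 200]
A_eq = [[1.0] * 11]
b_eq = [2000.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("z* =", round(res.fun, 2))
print("x* =", np.round(res.x, 2))
```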

The first series of random numbers should have a mean of 70, the second a mean of 9, the third a mean of 33, and so on (based on the values of the RHS parameters). Since random numbers are used, the sample means are not exactly the targets; the resulting mean values $\bar{b}_1, \ldots, \bar{b}_{13}$ are close to the nominal RHS values (for example, $\bar{b}_{13} = 2{,}000.3$; the remaining numeric values are lost in extraction). If the problem with the new RHS parameters is solved by Lindo, we obtain the optimal value $z^*$; the optimal solution and reduced costs are shown in Table 4, and the ranges of the RHS parameters in which the basis is unchanged are shown in Table 5.

Table 4 The optimal solution and reduced costs of Example 1 (columns: Variable, Value, Reduced cost, for $x_1$ through $x_{11}$; numeric entries lost in extraction)

Table 5 Allowable increase and decrease for RHS parameters in Example 1 (columns: Row, Current RHS parameter, Allowable increase, Allowable decrease; several entries are Infinity; numeric entries lost in extraction)

After solving the problem, the slack or surplus variables and the dual prices are also found; they are shown in Table 6.

Table 6 Slack or surplus variables and dual prices for the optimal solution of Example 1 (columns: Row, Slack or surplus, Dual price; numeric entries lost in extraction)

Using the correlated data (generated by MATLAB in this example), or the covariance matrix, as the input of PCA, one can calculate the $\lambda_{ij}$, which are given in Table 7. Assume that $\Delta b_1 = 0.002$; using relation (8), the variations $\Delta b_2, \ldots, \Delta b_{13}$ are obtained (numeric values lost in extraction). All of these changes lie in the accepted range for which the basis is unchanged (see Table 5). Now the 100 % rule should be tested; if it is satisfied, the next step is to calculate the new values of $\Delta z^*$ and the $\Delta x_{B_i}^*$:

$$\sum_{i=1}^{13} \frac{\Delta b_i}{\Delta b_i^{\max}} \le 1$$

Substituting the values of $\Delta b_i$ and $\Delta b_i^{\max}$ into the relation above shows that the 100 % rule is satisfied; therefore, the current optimal basis remains unchanged. Relations (11) and (12) can then be used to calculate $\Delta z^*$ and the $\Delta x_{B_i}^*$ for the basic variables $x_{B_1} = x_1$, $x_{B_2} = x_2$, $x_{B_3} = x_4$, $x_{B_4} = x_5$, $x_{B_5} = x_9$, $x_{B_6} = x_{10}$, and $x_{B_7} = x_{11}$ (numeric results lost in extraction).
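The 100 % rule test in this step is a one-liner once the allowable ranges of Table 5 are in hand. A sketch with illustrative numbers follows, since the table's actual values are not available here.

```python
import numpy as np

# Proposed RHS changes and the allowable moves from ordinary sensitivity
# analysis (Table 5); all numbers here are illustrative stand-ins.
delta_b   = np.array([0.002, -0.0008, 0.0011])
allow_inc = np.array([0.5, 0.3, 0.4])
allow_dec = np.array([0.6, 0.2, 0.5])

# Each change is measured against the allowable move in its own direction;
# the 100 % rule holds when the ratios sum to at most 1.
ratios = np.where(delta_b >= 0, delta_b / allow_inc, -delta_b / allow_dec)
print("sum =", ratios.sum(), "-> 100% rule satisfied:", ratios.sum() <= 1.0)
```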


More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

56:270 Final Exam - May

56:270  Final Exam - May @ @ 56:270 Linear Programming @ @ Final Exam - May 4, 1989 @ @ @ @ @ @ @ @ @ @ @ @ @ @ Select any 7 of the 9 problems below: (1.) ANALYSIS OF MPSX OUTPUT: Please refer to the attached materials on the

More information

Answer the following questions: Q1: Choose the correct answer ( 20 Points ):

Answer the following questions: Q1: Choose the correct answer ( 20 Points ): Benha University Final Exam. (ختلفات) Class: 2 rd Year Students Subject: Operations Research Faculty of Computers & Informatics Date: - / 5 / 2017 Time: 3 hours Examiner: Dr. El-Sayed Badr Answer the following

More information

CHAPTER 2. The Simplex Method

CHAPTER 2. The Simplex Method CHAPTER 2 The Simplex Method In this chapter we present the simplex method as it applies to linear programming problems in standard form. 1. An Example We first illustrate how the simplex method works

More information

AM 121: Intro to Optimization Models and Methods

AM 121: Intro to Optimization Models and Methods AM 121: Intro to Optimization Models and Methods Fall 2017 Lecture 2: Intro to LP, Linear algebra review. Yiling Chen SEAS Lecture 2: Lesson Plan What is an LP? Graphical and algebraic correspondence Problems

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

MATH 445/545 Homework 2: Due March 3rd, 2016

MATH 445/545 Homework 2: Due March 3rd, 2016 MATH 445/545 Homework 2: Due March 3rd, 216 Answer the following questions. Please include the question with the solution (write or type them out doing this will help you digest the problem). I do not

More information

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

MAT016: Optimization

MAT016: Optimization MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The

More information

Lecture 11: Post-Optimal Analysis. September 23, 2009

Lecture 11: Post-Optimal Analysis. September 23, 2009 Lecture : Post-Optimal Analysis September 23, 2009 Today Lecture Dual-Simplex Algorithm Post-Optimal Analysis Chapters 4.4 and 4.5. IE 30/GE 330 Lecture Dual Simplex Method The dual simplex method will

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

April 2003 Mathematics 340 Name Page 2 of 12 pages

April 2003 Mathematics 340 Name Page 2 of 12 pages April 2003 Mathematics 340 Name Page 2 of 12 pages Marks [8] 1. Consider the following tableau for a standard primal linear programming problem. z x 1 x 2 x 3 s 1 s 2 rhs 1 0 p 0 5 3 14 = z 0 1 q 0 1 0

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

The Simplex Method. Lecture 5 Standard and Canonical Forms and Setting up the Tableau. Lecture 5 Slide 1. FOMGT 353 Introduction to Management Science

The Simplex Method. Lecture 5 Standard and Canonical Forms and Setting up the Tableau. Lecture 5 Slide 1. FOMGT 353 Introduction to Management Science The Simplex Method Lecture 5 Standard and Canonical Forms and Setting up the Tableau Lecture 5 Slide 1 The Simplex Method Formulate Constrained Maximization or Minimization Problem Convert to Standard

More information

Another max flow application: baseball

Another max flow application: baseball CS124 Lecture 16 Spring 2018 Another max flow application: baseball Suppose there are n baseball teams, and team 1 is our favorite. It is the middle of baseball season, and some games have been played

More information

Matrices: 2.1 Operations with Matrices

Matrices: 2.1 Operations with Matrices Goals In this chapter and section we study matrix operations: Define matrix addition Define multiplication of matrix by a scalar, to be called scalar multiplication. Define multiplication of two matrices,

More information

Sensitivity Analysis and Duality in LP

Sensitivity Analysis and Duality in LP Sensitivity Analysis and Duality in LP Xiaoxi Li EMS & IAS, Wuhan University Oct. 13th, 2016 (week vi) Operations Research (Li, X.) Sensitivity Analysis and Duality in LP Oct. 13th, 2016 (week vi) 1 /

More information

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method...

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method... Contents Introduction to Linear Programming Problem. 2. General Linear Programming problems.............. 2.2 Formulation of LP problems.................... 8.3 Compact form and Standard form of a general

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker 56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker Answer all of Part One and two (of the four) problems of Part Two Problem: 1 2 3 4 5 6 7 8 TOTAL Possible: 16 12 20 10

More information

New Artificial-Free Phase 1 Simplex Method

New Artificial-Free Phase 1 Simplex Method International Journal of Basic & Applied Sciences IJBAS-IJENS Vol:09 No:10 69 New Artificial-Free Phase 1 Simplex Method Nasiruddin Khan, Syed Inayatullah*, Muhammad Imtiaz and Fozia Hanif Khan Department

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Teaching Duality in Linear Programming - the Multiplier Approach

Teaching Duality in Linear Programming - the Multiplier Approach Teaching Duality in Linear Programming - the Multiplier Approach Jens Clausen DIKU, Department of Computer Science University of Copenhagen Universitetsparken 1 DK 2100 Copenhagen Ø June 3, 1998 Abstract

More information

CSC Design and Analysis of Algorithms. LP Shader Electronics Example

CSC Design and Analysis of Algorithms. LP Shader Electronics Example CSC 80- Design and Analysis of Algorithms Lecture (LP) LP Shader Electronics Example The Shader Electronics Company produces two products:.eclipse, a portable touchscreen digital player; it takes hours

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible

More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

(b) For the change in c 1, use the row corresponding to x 1. The new Row 0 is therefore: 5 + 6

(b) For the change in c 1, use the row corresponding to x 1. The new Row 0 is therefore: 5 + 6 Chapter Review Solutions. Write the LP in normal form, and the optimal tableau is given in the text (to the right): x x x rhs y y 8 y 5 x x x s s s rhs / 5/ 7/ 9 / / 5/ / / / (a) For the dual, just go

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

Introduction to linear programming using LEGO.

Introduction to linear programming using LEGO. Introduction to linear programming using LEGO. 1 The manufacturing problem. A manufacturer produces two pieces of furniture, tables and chairs. The production of the furniture requires the use of two different

More information

SENSITIVITY ANALYSIS IN LINEAR PROGRAMING: SOME CASES AND LECTURE NOTES

SENSITIVITY ANALYSIS IN LINEAR PROGRAMING: SOME CASES AND LECTURE NOTES SENSITIVITY ANALYSIS IN LINEAR PROGRAMING: SOME CASES AND LECTURE NOTES Samih Antoine Azar, Haigazian University CASE DESCRIPTION This paper presents case studies and lecture notes on a specific constituent

More information

Introduction to Operations Research

Introduction to Operations Research Introduction to Operations Research (Week 4: Linear Programming: More on Simplex and Post-Optimality) José Rui Figueira Instituto Superior Técnico Universidade de Lisboa (figueira@tecnico.ulisboa.pt) March

More information

MS-E2140. Lecture 1. (course book chapters )

MS-E2140. Lecture 1. (course book chapters ) Linear Programming MS-E2140 Motivations and background Lecture 1 (course book chapters 1.1-1.4) Linear programming problems and examples Problem manipulations and standard form problems Graphical representation

More information

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5, Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x

More information

x 4 = 40 +2x 5 +6x x 6 x 1 = 10 2x x 6 x 3 = 20 +x 5 x x 6 z = 540 3x 5 x 2 3x 6 x 4 x 5 x 6 x x

x 4 = 40 +2x 5 +6x x 6 x 1 = 10 2x x 6 x 3 = 20 +x 5 x x 6 z = 540 3x 5 x 2 3x 6 x 4 x 5 x 6 x x MATH 4 A Sensitivity Analysis Example from lectures The following examples have been sometimes given in lectures and so the fractions are rather unpleasant for testing purposes. Note that each question

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Review Questions, Final Exam

Review Questions, Final Exam Review Questions, Final Exam A few general questions 1. What does the Representation Theorem say (in linear programming)? 2. What is the Fundamental Theorem of Linear Programming? 3. What is the main idea

More information

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered: LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP:

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Dual Basic Solutions Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Observation 5.7. AbasisB yields min c T x max p T b s.t. A x = b s.t. p T A apple c T x 0 aprimalbasicsolutiongivenbyx

More information

SAMPLE QUESTIONS. b = (30, 20, 40, 10, 50) T, c = (650, 1000, 1350, 1600, 1900) T.

SAMPLE QUESTIONS. b = (30, 20, 40, 10, 50) T, c = (650, 1000, 1350, 1600, 1900) T. SAMPLE QUESTIONS. (a) We first set up some constant vectors for our constraints. Let b = (30, 0, 40, 0, 0) T, c = (60, 000, 30, 600, 900) T. Then we set up variables x ij, where i, j and i + j 6. By using

More information

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics Dr. Said Bourazza Department of Mathematics Jazan University 1 P a g e Contents: Chapter 0: Modelization 3 Chapter1: Graphical Methods 7 Chapter2: Simplex method 13 Chapter3: Duality 36 Chapter4: Transportation

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

3E4: Modelling Choice

3E4: Modelling Choice 3E4: Modelling Choice Lecture 6 Goal Programming Multiple Objective Optimisation Portfolio Optimisation Announcements Supervision 2 To be held by the end of next week Present your solutions to all Lecture

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The standard Linear Programming (LP) Problem Graphical method of solving LP problem

More information

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7

Mathematical Foundations -1- Constrained Optimization. Constrained Optimization. An intuitive approach 2. First Order Conditions (FOC) 7 Mathematical Foundations -- Constrained Optimization Constrained Optimization An intuitive approach First Order Conditions (FOC) 7 Constraint qualifications 9 Formal statement of the FOC for a maximum

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for

More information

Dr. Maddah ENMG 500 Engineering Management I 10/21/07

Dr. Maddah ENMG 500 Engineering Management I 10/21/07 Dr. Maddah ENMG 500 Engineering Management I 10/21/07 Computational Procedure of the Simplex Method The optimal solution of a general LP problem is obtained in the following steps: Step 1. Express the

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

UNIVERSITY OF KWA-ZULU NATAL

UNIVERSITY OF KWA-ZULU NATAL UNIVERSITY OF KWA-ZULU NATAL EXAMINATIONS: June 006 Solutions Subject, course and code: Mathematics 34 MATH34P Multiple Choice Answers. B. B 3. E 4. E 5. C 6. A 7. A 8. C 9. A 0. D. C. A 3. D 4. E 5. B

More information

9.1 Linear Programs in canonical form

9.1 Linear Programs in canonical form 9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems

More information

The Simplex Algorithm and Goal Programming

The Simplex Algorithm and Goal Programming The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is

More information

56:171 Fall 2002 Operations Research Quizzes with Solutions

56:171 Fall 2002 Operations Research Quizzes with Solutions 56:7 Fall Operations Research Quizzes with Solutions Instructor: D. L. Bricker University of Iowa Dept. of Mechanical & Industrial Engineering Note: In most cases, each quiz is available in several versions!

More information

SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: TOTAL GRAND

SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: TOTAL GRAND 1 56:270 LINEAR PROGRAMMING FINAL EXAMINATION - MAY 17, 1985 SELECT TWO PROBLEMS (OF A POSSIBLE FOUR) FROM PART ONE, AND FOUR PROBLEMS (OF A POSSIBLE FIVE) FROM PART TWO. PART ONE: 1 2 3 4 TOTAL GRAND

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

Multicommodity Flows and Column Generation

Multicommodity Flows and Column Generation Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07

More information

Parametric LP Analysis

Parametric LP Analysis Rose-Hulman Institute of Technology Rose-Hulman Scholar Mathematical Sciences Technical Reports (MSTR) Mathematics 3-10-2010 Parametric LP Analysis Allen Holder Rose-Hulman Institute of Technology, holder@rose-hulman.edu

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

Some Notes on Linear Algebra

Some Notes on Linear Algebra Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

More information

56:171 Operations Research Midterm Exam--15 October 2002

56:171 Operations Research Midterm Exam--15 October 2002 Name 56:171 Operations Research Midterm Exam--15 October 2002 Possible Score 1. True/False 25 _ 2. LP sensitivity analysis 25 _ 3. Transportation problem 15 _ 4. LP tableaux 15 _ Total 80 _ Part I: True(+)

More information

February 17, Simplex Method Continued

February 17, Simplex Method Continued 15.053 February 17, 2005 Simplex Method Continued 1 Today s Lecture Review of the simplex algorithm. Formalizing the approach Alternative Optimal Solutions Obtaining an initial bfs Is the simplex algorithm

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information