THE COMPROMISE DECISION SUPPORT PROBLEM AND THE ADAPTIVE LINEAR PROGRAMMING ALGORITHM
Farrokh Mistree 1*, Owen F. Hughes 2, Bert Bras 3

ABSTRACT

In this chapter we present the Adaptive Linear Programming algorithm for solving a wide variety of practical multiobjective engineering design problems. This paper is an extension of the work on an optimization algorithm suitable for the design of large, highly constrained complex systems presented by Mistree, Hughes and Phuoc in 1981. Since then, new contributions to this work and algorithm have been made in order to solve various multiobjective optimization problems. The compromise Decision Support Problem formulation and the Adaptive Linear Programming algorithm have evolved as a result. They effectively deal with multiobjective optimization problems involving bounds, linear and nonlinear constraints and goals, and consisting of boolean and continuous variables. Goal Programming and Sequential Linear Programming form the basis for the compromise Decision Support Problem and the Adaptive Linear Programming algorithm for multiobjective optimization, respectively.

Keywords: Multiobjective Optimization, Decision Support Problems, Mathematical Programming, Computer-based Design Synthesis.

1 Professor, Department of Mechanical Engineering, Systems Design Laboratory, University of Houston, Houston, Texas, USA.
2 Professor, Department of Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061, USA.
3 Research Associate, Systems Design Laboratory, University of Houston, Houston, Texas, USA.
* To whom correspondence should be addressed.

Mistree, F., Hughes, O. F. and Bras, B. A., "The Compromise Decision Support Problem and the Adaptive Linear Programming Algorithm," in Structural Optimization: Status and Promise, pages , (M. P. Kamat, ed.), Washington, D.C.: AIAA, 1993.
1 OUR FRAME OF REFERENCE

A comprehensive approach called the Decision Support Problem (DSP) Technique [1-3] is being developed and implemented at the University of Houston to provide support for human judgment in designing an artifact that can be manufactured and maintained. Decision Support Problems provide a means for modeling decisions encountered in design, manufacture and maintenance. Multiple objectives that are quantified using analysis-based "hard" and insight-based "soft" information can be modeled in the DSPs. For real-world, practical systems, not all of the information will be available for modeling systems comprehensively and correctly in the early stages of the project. Therefore, the solution to the problem, even if it is obtained using optimization techniques, cannot be the optimum with respect to the real world. However, this solution can be used to support a designer's quest for a superior solution. In a computer-assisted environment, this support is provided in the form of optimal solutions for Decision Support Problems. Formulation and solution of DSPs provide a means for making the following types of decisions:

- Selection - the indication of a preference, based on multiple attributes, for one among several feasible alternatives.
- Compromise - the improvement of a feasible alternative through modification.
- Hierarchical - decisions that involve interaction between sub-decisions.
- Conditional - decisions in which the risk and uncertainty of the outcome are taken into account.

Each of these decisions can be modeled as a compromise DSP which, in the context of this chapter, is a multiobjective, nonlinear optimization problem.
Although the work in nonlinear mathematical programming is extensive, interest in developing the means for modeling and solving problems that involve multiple and conflicting objectives in general, and hierarchical problems in particular, is relatively sparse. An excellent review article appears in this book [4]. In this chapter, we describe an algorithm that has been specifically developed and successfully implemented for solving selection, compromise, hierarchical and conditional DSPs. Applications of these DSPs include the design of ships, damage-tolerant structural and mechanical systems, the design of aircraft, mechanisms, thermal energy systems, design using composite materials and data compression. A detailed set of references to these applications is presented in [5]. DSPs have been developed for hierarchical design: coupled selection-compromise, compromise-compromise and selection-selection. These constructs have been used to study interaction between design and manufacture [6] and between various events in the conceptual phase of the design process [7]. Our algorithm is suitable for solving single and multiple objective optimization problems involving continuous variables and differentiable constraints. Its strength, however, lies in the solution of different types of DSPs. We now give a brief history of the development of our algorithm and the development of the DSPs. In 1981 Mistree, Hughes and Phuoc [8] presented an algorithm for use in designing large, highly constrained complex systems. In that paper an algorithm called SLIP2 (Sequential Linear Programming 2nd Generation) and its use in MAESTRO (Method for Analysis Evaluation and STRuctural Optimization) was explained.¹ It was briefly mentioned that a stand-alone version of the SLIP2 algorithm had been developed and its capability of solving standard linear programming, nonlinear and a subset of goal programming problems demonstrated.
¹ MAESTRO is a commercially successful ship structural optimization program that is currently being used by the US Navy and Coast Guard, the Royal Australian Navy, the Royal Dutch Navy, Lloyds of London and a number of other organizations around the world.

Since then this stand-alone version has been
significantly improved and is now a major component of the DSIDES (Decision Support In the Design of Engineering Systems) system [9]. The SLIP2 algorithm was extended to solve multilevel, hierarchical problems and was then called SLIPML (Sequential Linear Programming Multi-Level). Refinements to SLIPML have resulted in the Adaptive Linear Programming (ALP) algorithm which we describe in this chapter. The ALP algorithm is incorporated with its multilevel, multigoal features in DSIDES, and without them in MAESTRO. In this chapter, we build on what was presented in [8]. The compromise DSP is derived from goal programming. It is a hybrid formulation in that it incorporates concepts from both traditional mathematical programming and goal programming, and makes use of some new ones. Therefore, in this chapter we first introduce goal programming, then the compromise DSP formulation, followed by the Adaptive Linear Programming algorithm, including some information about its computer implementation and applications.

2 AN INTRODUCTION TO GOAL PROGRAMMING

One of the multiobjective mathematical programming techniques is goal programming [10-12]. Goal programming itself is a development of the 1950s, but it has only been since the mid-1970s that goal programming has received substantial and widespread attention - but unfortunately not from engineers involved in the design of artifacts. The term "goal programming" is used by its developers [11] to indicate the search for an "optimal" program (i.e., a set of policies to be implemented) for a mathematical model that is composed solely of goals. This does not represent a limitation; on the contrary, any mathematical programming model (e.g., linear programming) may find an equivalent representation in goal programming.
Further, not only does goal programming provide an alternative representation, it often provides a representation that is more effective in capturing the nature of real-world problems. How this is achieved will be discussed in the following sections.

2.1 The Difference Between Objectives and Goals

In goal programming a distinction is made between an objective and a goal:

Objective: In mathematical programming, an objective is a function that we seek to optimize via changes in the problem variables. The most common forms of objectives are those in which we seek to maximize or minimize. For example,

Minimize Z = A(X)

Goal: An objective with a right-hand side. This right-hand side (G) is the target value or aspiration level associated with the goal. For example,

A(X) = G

If we state that we wish to minimize the stress at a point in a beam, then that stress minimum represents an objective. If instead we state that we wish to achieve a particular value for the factor of safety at a point in the beam, say T = 2, then we have stated a goal. Goals in goal programming are classified as "rigid" (i.e., hard or inflexible) or "soft" (i.e., flexible). Hard goals must be satisfied whereas soft goals are to be achieved to the extent possible. The question which arises now is how to make effective use of goals and objectives in modeling real-world problems.
2.2 Development of the Baseline Model for Multiobjective Optimization

The baseline model is the initial, unified mathematical model of a problem. A more detailed description and examples of its use are given by Ignizio [10]. The advantage of the baseline model representation is that it is independent of the approach used to solve the problem. The general form of the baseline model is as follows:

Find:     The vector of problem variables X
Satisfy:  The goals A_t(X) = G_t for all t  {1}
Maximize: A_r(X) for all r                  {2}
Minimize: A_s(X) for all s                  {3}

Note that in this model we have t goals (some rigid and some flexible), r maximizing objectives and s minimizing objectives. If some variables are nonnegative, then for those variables only we add the following restriction to the baseline model:

X_i ≥ 0

The next step in the solution procedure is to convert the baseline model into a conventional single objective model or some particular multiobjective model that can be solved using an appropriate optimization algorithm.

2.3 The Conversion of the Baseline Model to a Goal Programming Model

The baseline model is converted to a goal programming model that can be solved as follows:

Step 1: Transform all objectives (i.e., equations {2} and {3}) into goals by establishing associated aspiration levels, based on the belief that a real-world decision maker can usually cite (initial) estimates of his or her aspiration levels. Hence, maximize A_r(X) becomes A_r(X) ≥ G_r for all r, and minimize A_s(X) becomes A_s(X) ≤ G_s for all s, where G_r and G_s are the respective aspiration levels of the two objective types.

Step 2: Rank-order each goal according to its perceived importance. Hence, the set of hard goals (i.e., constraints in traditional mathematical programming) is always assigned the top priority or rank.
Step 3: Convert all the goals into equations through the addition of deviation variables.

As can be seen, the formulation is not restricted to a certain class of problems and is thus very general. Deviation variables and their relation with the goals are the key to goal programming and therefore will be discussed in more detail.
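As a concrete sketch of Steps 1 and 3 (ours, not code from the paper), the snippet below attaches designer-cited aspiration levels to a maximizing and a minimizing objective and then forms the deviation variables; the goal functions, targets and candidate design are assumptions chosen purely for illustration.

```python
# Hypothetical illustration of Steps 1 and 3: objectives become goals by
# attaching aspiration levels; deviation variables then turn each goal into
# an equality A(X) + d- - d+ = G. All names and numbers are ours.

def to_goal(A, sense, G):
    """Step 1: 'maximize A(X)' -> A(X) >= G; 'minimize A(X)' -> A(X) <= G."""
    return {"A": A, "relation": ">=" if sense == "max" else "<=", "G": G}

stiffness = lambda x: 3.0 * x[0] + 2.0 * x[1]   # objective to maximize
mass = lambda x: x[0] + x[1]                    # objective to minimize

goals = [to_goal(stiffness, "max", G=12.0),
         to_goal(mass, "min", G=4.0)]

# Step 3 for a candidate design: each goal becomes an equality via deviations.
x = (2.0, 1.5)
for g in goals:
    A = g["A"](x)
    d_minus, d_plus = max(g["G"] - A, 0.0), max(A - g["G"], 0.0)
    print(g["relation"], g["G"], "attained", A, "deviations", d_minus, d_plus)
```

Note that the aspiration levels are only initial estimates; in goal programming they are refined by the decision maker, not treated as rigid limits.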
2.4 Deviation Variables and Goals

In a goal we can distinguish the aspiration level, G_i, of the decision maker and the actual attainment, A_i(X), of the goal. Three conditions need to be considered:

1. A_i(X) ≤ G_i; we wish to achieve a value of A_i(X) that is equal to or less than G_i.
2. A_i(X) ≥ G_i; we wish to achieve a value of A_i(X) that is equal to or greater than G_i.
3. A_i(X) = G_i; we would like the value of A_i(X) to equal G_i.

We will now introduce the concept of a deviation variable. Consider the third condition, namely, we would like the value of A_i(X) to equal G_i. The deviation variable is defined as d = G_i - A_i(X). The deviation variable d can be negative or positive. In effect, a deviation variable represents the distance (deviation) between the aspiration level and the actual attainment of the goal. Considerable simplification of the solution algorithm is effected if one can assert that all the variables in the problem being solved are positive. Hence, the deviation variable d is replaced by two variables:

d = d_i⁻ - d_i⁺  where  d_i⁻ · d_i⁺ = 0 and d_i⁻, d_i⁺ ≥ 0.

The preceding ensures that the deviation variables never take on negative values. The product constraint ensures that one of the deviation variables will always be zero. The system goal becomes:

A_i(X) + d_i⁻ - d_i⁺ = G_i;  i = 1,2,...,m  {4}

subject to

d_i⁻, d_i⁺ ≥ 0 and d_i⁻ · d_i⁺ = 0  {5}

If the problem is solved using an algorithm that provides a vertex solution as a matter of course, then the product constraint is automatically satisfied, making its inclusion in the formulation redundant. Since our algorithm, which will be discussed in the second part of this chapter, uses solution schemes which provide a vertex solution, we will assume that this constraint is satisfied.
For completeness we include this constraint in the mathematical forms of the compromise Decision Support Problem given later in this chapter; for brevity we will omit this constraint from all subsequent formulations. Note that a goal {4} is always expressed as an equality. When considering equation {4} the following will be true:

- if A_i < G_i then d_i⁻ > 0 and d_i⁺ = 0,
- if A_i > G_i then d_i⁻ = 0 and d_i⁺ > 0, and
- if A_i = G_i then d_i⁻ = 0 and d_i⁺ = 0.
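As a quick numeric check of equations {4} and {5} (a sketch of ours, not code from the paper), the split of the signed deviation into d_i⁻ and d_i⁺ can be computed directly; the helper name and the numbers are assumptions:

```python
# Split the signed deviation d = G - A(X) into two nonnegative variables
# (equation {4}); the product constraint {5} then holds by construction.

def deviations(attainment, target):
    d = target - attainment
    return max(d, 0.0), max(-d, 0.0)   # (d_minus, d_plus)

d_minus, d_plus = deviations(attainment=8.0, target=10.0)
print(d_minus, d_plus)        # underachieved by 2: prints 2.0 0.0
assert d_minus * d_plus == 0.0
```

By construction at most one of the two variables is nonzero, which is exactly why a vertex-solution algorithm never needs the product constraint explicitly.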
How do we model the three conditions listed earlier using equation {4}?

1. To satisfy A_i(X) ≤ G_i, we must ensure that the positive deviation d_i⁺ is zero. The negative deviation d_i⁻ will measure how far the performance of the actual design is from the goal.
2. To satisfy A_i(X) ≥ G_i, the negative deviation d_i⁻ must be made equal to zero. In this case, the degree of overachievement is indicated by the positive deviation d_i⁺.
3. To satisfy A_i(X) = G_i, both deviations, d_i⁻ and d_i⁺, must be zero.

At this point we have established what we want to minimize. In the next section we introduce a means for minimization of the objective in goal programming.

2.5 The Lexicographic Minimum and the Achievement Function

The objective of a traditional single objective optimization problem requires the maximization or minimization of an objective function. The objective is a function of the problem variables. In a goal programming formulation, each of the objectives is converted into a goal (equation {4}) with its corresponding deviation variables. The resulting formulation is similar to a single objective optimization problem but with the following differences:

- The objective is always to minimize a function.
- The objective function is expressed using deviation variables only.

The objective in the goal programming formulation is called the achievement function. As indicated earlier, the deviation variables are associated with goals and hence their range of values depends on the goal itself. Goals are not equally important to a decision maker. Hence, to effect a solution on the basis of preference, the goals may be rank-ordered into priority levels. As should be obvious from the preceding discussion, we should seek a solution which minimizes all unwanted deviations. There are various methods of measuring the effectiveness of the minimization of these unwanted deviations.
The lexicographic minimum concept is the most suitable approach in our opinion. The lexicographic minimum is defined as follows (see Ignizio [10,12]):

LEXICOGRAPHIC MINIMUM: Given an ordered array f = (f_1, f_2,..., f_n) of nonnegative elements f_k, the solution given by f(1) is preferred to f(2) iff f_k(1) < f_k(2) and f_i(1) = f_i(2) for i = 1,...,k-1; that is, all higher-order elements are equal. If no other solution is preferred to f, then f is the lexicographic minimum.

As an example, consider two solutions, f(r) and f(s), where

f(r) = (0, 10, 400, 56)
f(s) = (0, 11, 12, 20)
In this example, note that f(r) is preferred to f(s). The value 10 corresponding to f(r) is smaller than the value 11 corresponding to f(s). Once a preference is established, the remaining elements are assumed to be equivalent. Hence, the deviation function for the preemptive formulation is written as

Z = [ f_1(d_i⁻, d_i⁺),..., f_k(d_i⁻, d_i⁺) ].

For a four-goal problem, the deviation function may look like

Z(d⁻, d⁺) = [ (d_1⁻ + d_2⁻), (d_3⁻), (d_4⁺) ]

In this case, three priority levels are considered. The deviation variables d_1⁻ and d_2⁻ have to be minimized preemptively before variable d_3⁻ is considered, and so on. These priorities represent rank; that is, the preference of one goal over another. The first priority level normally contains the rigid goals or, as in a standard LP formulation, the constraints. The constraints are represented in goal programming by rigid goals. In goal programming the term "feasibility" is therefore not used. Instead, a solution is implementable, i.e., the values of all deviation variables at the first priority level are zero, or non-implementable, which means that the target values for one or more rigid goals have not been achieved. No conclusions can be drawn with respect to the amount by which one goal is preferred or is more important than another. This approach, using priority levels, is therefore suitable when there is little information available. For a simple problem with only two system variables, a graphical solution can easily be found by satisfying the goals in a logical manner. The numerical solution of a preemptive formulation requires the use of a special optimization algorithm developed to solve these types of problems. One such algorithm, the Multiplex algorithm, has been developed by Ignizio [13] and has been incorporated into DSIDES.
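The lexicographic comparison of Section 2.5 maps directly onto ordinary tuple comparison; this small sketch (ours, not from the paper) reproduces the f(r)/f(s) example:

```python
# Python tuples compare lexicographically: the first differing element
# decides, and the remaining (lower-priority) elements are then ignored -
# exactly the lexicographic-minimum rule of Section 2.5.

f_r = (0, 10, 400, 56)
f_s = (0, 11, 12, 20)

preferred = min([f_r, f_s])
print(preferred)   # → (0, 10, 400, 56): f_r wins at the second element,
                   # even though 400 > 12 at the third
```

This is why a preemptive solver can stop comparing two candidate solutions as soon as one priority level differs.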
Goal programming and the baseline model thus provide a means for modeling and solving multiobjective optimization problems.

3 THE COMPROMISE DECISION SUPPORT PROBLEM

3.1 The Characteristics of Engineering Design Problems

The next logical step to obtain a means for modeling engineering problems would be to convert the baseline model into a goal programming model for engineering applications. This goal programming model could then serve as our Decision Support Problem. The conversion most certainly can be done, but at a price that we believe should not be paid. We believe that for engineering design applications the generality of the goal programming model is not required, for the following reasons:

Observation: Engineering design problems always (or almost always) include rigid constraints. These, in the main, model the "physics of the problem". If these are violated the result would not be valid. It would not be correct for a designer/engineer to consider a physical limitation as an aspiration level.

Analysis: In a goal programming model, at the time of solution, there can be no constraints, only goals. Hence, in order to be able to solve a goal programming model we are required to convert all constraints into goals.

The Cost: This conversion requires the addition of two deviation variables for every constraint that is converted to a goal, and these additional variables result in an increase
in the size of the problem. However, the major cost lies in the fact that the model no longer represents a real-world engineering problem, since the physical limitations are "disguised" as goals.

Our Position: Since constraints are a fact of life in engineering, we prefer to treat them separately from goals when modeling engineering problems.

Observation: Bounds on the system variables are always present in practical design problems. Once again, these bounds arise from the physical characteristics of the design problems (i.e., limitations on the performance of engineering materials). Variables in all practical design problems are bounded at both the lower and the upper end.

Analysis: In goal programming the bounds are generally not of the same importance as in engineering problems. Variables are mostly not bounded at the upper end. Some algorithms require that each bound is converted into a goal.

The Cost: The model of an engineering problem is not clear and uniform without bounds. A conversion of bounds into goals involves the addition of four deviation variables per system variable, resulting in a relatively large increase in the size of the problem to be solved.

Our Position: Since bounds are a fact of life in engineering problems, we prefer to treat them explicitly and differently from goals.

We therefore conclude that, although most of the characteristics of goal programming are desirable in modeling and solving engineering design problems, its generality reduces its effectiveness for use in engineering; hence the compromise Decision Support Problem.
We have the following correspondences between terms used in goal programming and those used in the compromise DSP:

GOAL PROGRAMMING             COMPROMISE DSP
Vector of problem variables  Vector of system variables
Rigid or hard goal           System constraint
Flexible or soft goal        System goal
Achievement function         Deviation function

In engineering applications, we prefer the term deviation function instead of achievement function. We consider this term to be more appropriate, since the function provides us with a measure of the deviation from the goals.

3.2 The Compromise DSP

A compromise DSP is a hybrid formulation in that it incorporates concepts from both traditional mathematical programming and goal programming, and makes use of some new ones. It is similar to goal programming in that the multiple objectives are formulated as system goals (involving both system and deviation variables) and the deviation function is solely a function of the goal deviation variables. This is in contrast to traditional mathematical programming, where multiple objectives are modeled as a weighted function of the system variables only. The concept of system constraints, however, is retained from the traditional constrained optimization formulation. Special emphasis is placed on the bounds on the system variables, unlike in traditional mathematical programming and goal programming. In effect the traditional formulation is a subset of the compromise DSP - an
indication of the generality of the compromise formulation. The compromise DSP is stated in words as follows:

Given
  An alternative that is to be improved through modification.
  Assumptions used to model the domain of interest.
  The system parameters.
  All other relevant information.
  n        number of system variables
  p+q      number of system constraints
  p        equality constraints
  q        inequality constraints
  m        number of system goals
  g_i(X)   system constraint function, g_i(X) = C_i(X) - D_i(X)
  f_k(d_i) function of deviation variables to be minimized at priority level k for the preemptive case
  W_i      weight for the Archimedean case

Find
  The values of the independent system variables (they describe the physical attributes of an artifact):
    X_j,  j = 1,..., n
  The values of the deviation variables (they indicate the extent to which the goals are achieved):
    d_i⁻, d_i⁺,  i = 1,..., m

Satisfy
  The system constraints that must be satisfied for the solution to be feasible. There is no restriction placed on linearity or convexity:
    g_i(X) = 0;  i = 1,..., p
    g_i(X) ≥ 0;  i = p+1,..., p+q
  The system goals that must achieve a specified target value as far as possible. There is no restriction placed on linearity or convexity:
    A_i(X) + d_i⁻ - d_i⁺ = G_i;  i = 1,..., m
  The lower and upper bounds on the system variables:
    X_j,min ≤ X_j ≤ X_j,max;  j = 1,..., n
    d_i⁻, d_i⁺ ≥ 0 and d_i⁻ · d_i⁺ = 0

Minimize
  The deviation function, which is a measure of the deviation of the system performance from that implied by the set of goals and their associated priority levels or relative weights:
    Case a: Preemptive (lexicographic minimum)
      Z = [ f_1(d_i⁻, d_i⁺),..., f_k(d_i⁻, d_i⁺) ]
    Case b: Archimedean
      Z = Σ (i = 1 to m) W_i (d_i⁻ + d_i⁺);  Σ W_i = 1;  W_i ≥ 0

The system descriptors for the compromise DSP are shown in italics in the word formulation; we now discuss these in somewhat more detail.
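To make the word formulation concrete, here is a small self-contained sketch (ours, not the ALP implementation) of a two-variable compromise DSP in Archimedean form: one system goal, one system constraint, bounds, and a weighted deviation function. It simply scans a grid of candidate designs, whereas ALP solves a sequence of linearized problems; the goal function, target, weights, constraint and bounds are all assumed for illustration.

```python
# A tiny two-variable Archimedean compromise DSP evaluated by brute force.

def deviation_function(x, target=10.0, w_minus=0.5, w_plus=0.5):
    attainment = x[0] + 2.0 * x[1]             # system goal A(X)
    d_minus = max(target - attainment, 0.0)    # A(X) + d- - d+ = G
    d_plus = max(attainment - target, 0.0)
    return w_minus * d_minus + w_plus * d_plus # Z = sum of W(d- + d+)

def feasible(x):
    return x[0] + x[1] <= 6.0                  # system constraint

# Bounds 0.5 <= X_j <= 5.0: lower bounds nonzero and positive, as the text
# requires for physical attributes.
grid = [(0.5 + 0.09 * i, 0.5 + 0.09 * j)
        for i in range(51) for j in range(51)]

best = min((x for x in grid if feasible(x)), key=deviation_function)
print(best, deviation_function(best))
```

Designs outside the constraint are discarded outright (constraints are rigid), while the goal is only approached as closely as the feasible design space allows, which is precisely the compromise the formulation models.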
A graphical representation of a two-variable compromise DSP is shown in Figure 1. The difference between a system variable and a deviation variable is that the former represents a distance in the i-th dimension from the origin of the design space, whereas the latter has as its origin the surface of the system goal. The value of the i-th deviation variable is determined by the degree to which the i-th goal is achieved. It depends upon the value of A_i(X) alone (since G_i is fixed by the designer), which in turn is dependent upon the system variables X. The set of deviation variables can be all continuous, all boolean, or some can be boolean and others continuous. Obviously, both the deviation variables associated with a particular system goal will be of the same type.

FIGURE 1 -- THE COMPROMISE DSP

3.3 System Descriptors of the Compromise DSP

Compromise DSPs have a minimum of two system variables:
X = (X_1, X_2,..., X_n).

In general, a set of n design variables is represented by X. The vector of variables includes continuous variables and also boolean (1 if TRUE, 0 if FALSE) variables. System variables are, by their nature, independent of the other descriptors and can be changed as required by a designer to alter the state of the system. System variables that define the physical attributes (for example, dimensions, mass, etc.) of an artifact are always nonzero and positive.

A system constraint models the relationship between the demands placed on the system, D(X), and the capabilities of the system, C(X), to meet the demand:

C_i(X) = D_i(X);  i = 1, 2,..., m.

A system constraint may also model limiting values, for example,

L_i(X) = R_i(X);  i = 1, 2,..., m.

In all cases, the set of system constraints must be satisfied for the feasibility of the design. Mathematically, system constraints are functions of the system variables only. They are rigid and no violations are allowed. The set of system constraints may be all linear, all nonlinear, or consist of both linear and nonlinear functions. In engineering problems the system constraints are usually inequalities; however, occasions requiring equality constraints may arise (equality functions can also be part of the set of system constraints). The region of feasibility defined by the system constraints is called the feasible design space.

In the compromise DSP the goals are identical to those of the goal programming formulation (see Section 2.4). A set of system goals is used to model the aspirations a designer has for the design. It relates the goal (aspiration level), G_i, of the designer to the actual attainment, A_i(X), of the goal (see equation {4}). The deviation variables too are defined in the same manner as in goal programming. Equation {5} holds true for the compromise DSP also.
Range of values for deviation variables: The objective in the compromise DSP formulation is called the deviation function. As in the case of goal programming, the objective is always to minimize a function that is expressed using deviation variables only. As indicated earlier, the deviation variables are associated with system goals and their range of values depends on the goal itself. Goals are not equally important to a designer. Hence, to effect a solution on the basis of a designer's preference, the goals are rank-ordered into priority levels. Within a priority level it is imperative that the deviation variables are of the same order of magnitude. This is achieved by normalizing the goals. If this is not done, the deviation variable with the larger numerical value will dominate the solution process without regard to the designer-established preference for the set of goals. A solution to the order-of-magnitude problem is to normalize the achievement A_i(X) with respect to the target value G_i before the deviation variables are introduced. The following rules are used to formulate the system goals in a way that ensures that all the deviation variables will range within the same values (0 and 1 in this case).
a. To maximize the achievement, A_i(X), choose a target value G_i greater than or equal to the maximum expected value of A_i(X), so that the ratio A_i(X)/G_i is always less than or equal to 1. For example, if A_i(X) is the reference stress then G_i could be the yield stress. Consider the following:

A_i(X) ≤ G_i  ==>  A_i(X)/G_i ≤ 1

Transform the expression into a system goal by adding and subtracting the corresponding deviation variables (which in this case will range between zero and one):

A_i(X)/G_i + d_i⁻ - d_i⁺ = 1  {6}

In this case, the deviation variable d_i⁺ will always be zero, as indicated earlier. We then minimize the underachievement deviation d_i⁻ to ensure that the performance of the design will be as close as possible to the desired goal.

b. The following steps are required to minimize A_i(X):

i) Choose a target value G_i less than or equal to the minimum expected value of A_i(X). In this case, the ratio G_i/A_i(X) will be less than or equal to one:

A_i(X) ≥ G_i  ==>  G_i/A_i(X) ≤ 1

Transform the expression into a system goal (note the inversion of G and A). The deviation variables will then vary between 0 and 1:

G_i/A_i(X) + d_i⁻ - d_i⁺ = 1  {7}

The deviation variable d_i⁺ will be zero, as indicated earlier. Minimizing the underachievement deviation d_i⁻ will ensure that the performance of the design is as close as possible to the desired goal.

ii) If the target value G_i is taken as zero, get an estimate of the maximum value that the achievement A_i(X) can attain within the bounds set for the system variables, A_imax(X). Then formulate the following system goal:

A_i(X)/A_imax(X) + d_i⁻ - d_i⁺ = 0  {8}

The deviation variables will now vary between 0 and 1. In this case, the underachievement deviation d_i⁻ will always be zero.
Then minimize the overachievement deviation d_i⁺ to ensure that the performance of the design will be as close as possible to the desired value of zero.
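Rules a and b.i can be checked numerically; this sketch (our helper names and numbers, not from the paper) shows that the normalized deviations stay between 0 and 1:

```python
# Normalized goals per rules a and b.i: dividing by the target (or inverting
# the ratio for minimization) keeps both deviation variables in [0, 1].

def goal_max(attainment, target):
    """Rule a: A(X)/G + d- - d+ = 1, with G >= max expected A(X)."""
    ratio = attainment / target
    return max(1.0 - ratio, 0.0), max(ratio - 1.0, 0.0)   # (d-, d+)

def goal_min(attainment, target):
    """Rule b.i: G/A(X) + d- - d+ = 1, with G <= min expected A(X)."""
    ratio = target / attainment
    return max(1.0 - ratio, 0.0), max(ratio - 1.0, 0.0)

# Maximizing a safety factor toward an aspiration of 2.0:
print(goal_max(attainment=1.5, target=2.0))   # → (0.25, 0.0)
# Minimizing a mass of 4.0 toward an aspiration of 2.0:
print(goal_min(attainment=4.0, target=2.0))   # → (0.5, 0.0)
```

With every goal scaled this way, deviations from different goals are directly comparable within a priority level, which is the point of the normalization.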
c. If it is desired that A_i(X) = G_i, and

i) if the target value G_i is approached from below by A_i(X), use {6} and minimize the sum (d_i⁻ + d_i⁺);
ii) if the target value G_i is approached from above by A_i(X), use {7} and minimize the sum (d_i⁻ + d_i⁺);
iii) if the target value G_i is equal to zero, use {8} and minimize the sum (d_i⁻ + d_i⁺).

Bounds are specific limits placed on the magnitude of each of the system and deviation variables. Each variable has associated with it a lower and an upper bound. Bounds are important for modeling real-world problems because they provide a means to include the experience-based judgment of a designer in the mathematical formulation. Unfortunately, in most engineering design textbooks that encourage the notion of using optimization techniques in design, there has been a tendency to ignore bounds. Bounds on the system variables take the form

L ≤ X ≤ U

where L and U represent the set of lower and upper bounds, respectively. The bounds on the system variables demarcate the region in which a search is to be made for a feasible solution. In engineering design, the lower bounds are always nonzero and positive, reflecting physical limitations.² Deviation variables are by definition nonnegative and hence a lower bound of zero is always assigned to them. Upper bounds on the deviation variables are not required if the system goals are normalized as has already been described. We will assume that the system goals will always be normalized and therefore the upper bounds on the deviation variables are not included in the formulations.

The deviation function: The deviation function corresponds to the achievement function of goal programming. In the compromise DSP formulation the aim is to minimize the difference between that which is desired and that which can be achieved.
This is done by minimizing the deviation function, Z(d^-, d^+), which is always written in terms of the deviation variables. A designer sets an aspiration level for each of the goals. It may be impossible to obtain a design that satisfies all of the aspiration levels; hence, a compromise solution has to be accepted by the designer. It is desirable, however, to obtain a design whose performance matches the aspiration levels as closely as possible. This, in essence, is the objective of a compromise solution. The difference between the goals and the achievement is expressed by a combination of the appropriate deviation variables in Z(d^-, d^+). The deviation function thus provides an indication of the extent to which specific goals are achieved.

2 Aside: How about stress? Compression (-) and tension (+). The signs are used as indicators; the magnitude of the stress is always positive. The lower bound on stress? For it to be meaningful in design, the lower bound is almost always nonzero.
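As a concrete illustration of deviation variables and a weighted (Archimedean) deviation function, consider the following minimal sketch. The function and variable names are ours, not from the chapter, and this is not the ALP implementation:

```python
def deviation_variables(achievement, target):
    """Split the gap between a target G_i and the achievement A_i(X)
    into nonnegative deviation variables d_i^- (underachievement) and
    d_i^+ (overachievement); at most one of them is nonzero."""
    gap = target - achievement
    return max(gap, 0.0), max(-gap, 0.0)

def archimedean_deviation(deviations, weights):
    """Z(d^-, d^+) = sum_i (W_i^- d_i^- + W_i^+ d_i^+), with
    nonnegative weights that sum to one."""
    flat = [w for pair in weights for w in pair]
    assert all(w >= 0.0 for w in flat) and abs(sum(flat) - 1.0) < 1e-9
    return sum(wm * dm + wp * dp
               for (dm, dp), (wm, wp) in zip(deviations, weights))

# Two goals with targets 10 and 5, achievements 8 and 6:
d1 = deviation_variables(8.0, 10.0)   # (2.0, 0.0): underachieved by 2
d2 = deviation_variables(6.0, 5.0)    # (0.0, 1.0): overachieved by 1
z = archimedean_deviation([d1, d2], weights=[(0.4, 0.1), (0.2, 0.3)])
print(z)  # 0.4*2.0 + 0.3*1.0, approximately 1.1
```

Note that exactly one deviation variable per goal is nonzero, which is the complementarity property the compromise DSP relies on.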
All goals may not be equally important to a designer, and the formulations are classified as Archimedean or preemptive, based on the manner in which importance is assigned to satisficing 3 the goals. The most general form of the deviation function for m goals in the Archimedean formulation is

Z(d^-, d^+) = Σ_{i=1}^{m} (W_i^- d_i^- + W_i^+ d_i^+)

where the weights W_1, W_2, ..., W_m reflect the level of desire to achieve each of the goals. In this formulation, the weights W_i satisfy the conditions

Σ_{i=1}^{m} W_i = 1   and   W_i ≥ 0 for all i.

It may be difficult to come up with truly credible weights. A systematic approach for determining reasonable weights is to use the schemes presented in [15,16]. In the preemptive approach, the difficulty of finding weights is circumvented by rank-ordering the goals. This is probably easier in an industrial environment and in the earlier stages of design. The measure of achievement is obtained in terms of the lexicographic minimization of an ordered set of goal deviations; within each set of goals at a particular rank, weights may be used. Goals are ranked lexicographically, and an attempt is made to achieve a more important goal before other goals are considered (see Section 2.5). The deviation variables to be included in the deviation function are summarized in Table 1.

3 Satisficing - not the best but good enough. Use of this term in the context of optimization is attributed to Herbert Simon [14].

Desire          Set G_i   Goals                                  Minimize        Set
Maximize A_i    High      Nonnormalized                          d_i^-           d_i^+ = 0
                          Normalized as per Section 3.3.a        d_i^-           d_i^+ = 0
Minimize A_i    Low       Nonnormalized                          d_i^+           d_i^- = 0
                          Normalized as per Section 3.3.b(i)     d_i^-           d_i^+ = 0
                          Normalized as per Section 3.3.b(ii)    d_i^+           d_i^- = 0
A_i = G_i                 Normalization scheme may vary          d_i^- + d_i^+

TABLE 1 -- SYSTEM GOAL FORMULATIONS

3.4 The Relationship between the Compromise DSP, Goal Programming and Mathematical Programming

We consider the preceding formulation of a compromise DSP to be a hybrid formulation: it incorporates concepts from both traditional mathematical programming and goal programming, and it makes use of some new ones as well. What distinguishes the compromise DSP formulation from goal programming is that it is tailored to handle common engineering design situations in which physical limitations manifest themselves as system constraints (mostly inequalities) and bounds. It is similar to goal programming in
that the multiple objectives are formulated as system goals (involving both system and deviation variables) and the deviation function is solely a function of the goal deviation variables. The system constraints and bounds are handled separately from the system goals, in contrast to the goal programming formulation, in which only goals are used. The terms compromise Decision Support Problem and mathematical programming (for example, [17,18]) are synonymous to the extent that they refer to system constraints that must be satisfied for feasibility. They differ in the way the goodness of the solution is modeled and evaluated. In the compromise DSP the goodness is modeled by the system goals (which are a function of both the system and the deviation variables), and a measure of the goodness is provided by the deviation function, which is modeled using deviation variables only. This is in contrast to traditional mathematical programming, where multiple objectives are modeled as a weighted function of the system variables only. In the compromise DSP special emphasis is placed on the system variable bounds, unlike in traditional mathematical programming and goal programming. In effect, then, the compromise DSP is a hybrid formulation: the traditional mathematical programming formulation is a subset of the compromise DSP (an indication of the generality of the compromise formulation), and the compromise DSP is a subset of goal programming.

Does the solution to a compromise DSP belong to the Pareto set of the original multiobjective problem? Our intention is to obtain a satisficing solution, not an optimal one. No attempt is made to find a design that absolutely minimizes a vector of objective functions; that is, the problem

Find x* such that x* = arg min { f_1(x), f_2(x), f_3(x), ..., f_m(x) },   x ∈ X ⊂ R^n

has no parallel in our approach.
Our intention in solving the compromise DSP is to satisfice a set of goals. In our formulation the satisficing of goals solves the mathematical problem at hand; optimizing the numerical value of a goal function is not an issue. The goals G_i in equation {4} are to be selected judiciously by a designer. For the solution X* of the compromise DSP to be a Pareto solution, conditions such as Lemma 1.6 in reference [19] must be satisfied by our choice of goals G_i. We do not consider this lack of Pareto optimality a drawback. After all, the decision to select a practical design from a Pareto set is in a way preempted by the designer's selection of the goals G_i.

In the preceding, we have presented a means for modeling real-world problems and focused on the compromise Decision Support Problem. In the remainder of this paper we present a comprehensive optimization algorithm, called Adaptive Linear Programming (ALP), that can be used to solve a wide variety of complex practical design problems.

4 ADAPTIVE LINEAR PROGRAMMING FOR SOLVING COMPROMISE DECISION SUPPORT PROBLEMS

4.1 Background of the Algorithm
Solutions to compromise DSP templates can be found using different optimization methods, and the choice of method depends, to a certain extent, on the problem. Solution algorithms fall into two categories: those that solve the exact problem approximately, and those that solve an approximation of the problem exactly. Gradient-based methods, pattern search methods, and penalty function methods fall into the first category, whereas methods involving sequential linearization fall into the second. We chose the sequential linear programming approach in 1981 because it had, in our opinion, the highest potential for being used to develop a single algorithm for solving a range of DSPs in engineering design. More recently, Azarm et al. [20] report that this is one of the most widely used approaches. We believe three important features contribute to the success of the ALP algorithm, namely: the use of second-order terms in linearization; the normalization of the constraints and goals and their transformation into generally well-behaved convex functions in the region of interest; and an intelligent constraint suppression and accumulation scheme. These features are described in detail in [8] and briefly in the following paragraphs. The first- and second-order algorithms need the derivatives (with respect to the design variables) of the constraints and goals in addition to the values of these quantities.
The SLIPML and ALP algorithms are modified second-order algorithms: the diagonal second-order terms used are exact, and the off-diagonal terms are approximated using parabolas. This is one of the principal deviations from other SLP algorithms developed from the well-known work of Stewart and Griffith [21], and it is the first principal feature of the algorithm.

FIGURE 2 -- IMPLEMENTATION OF THE ALP ALGORITHM FOR SOLVING COMPROMISE DSPs

The derivatives are determined numerically using the central difference formula. After solving the linear problem, this solution can be used to improve the second-order approximation using the
ALP algorithm. A block diagram of the implementation of the ALP algorithm is shown in Figure 2. A user specifies the input to the software implementation of the algorithm in the form of a DSP template. This template consists of data and user-provided Fortran routines. The data are used to define the problem size, the names of the variables and constraints, the bounds on the variables, the linear constraints, and the convergence criteria. The Fortran routines are used to evaluate the nonlinear constraints and goals, to input data required by the constraint evaluation routines and the design-analysis routines, and to output results in a format desired by the user. Access is provided to a design-analysis program library from the analysis/synthesis cycle and also within the synthesis cycle. In the design of major systems it is desirable to use the design-analysis interface associated with the analysis/synthesis cycles (e.g., structural design requiring the use of a finite element program). It has been found necessary to use both interfaces for solving large, analysis-intensive problems [22, Table 6]. Once the nonlinear compromise DSP is formulated, it is approximated by linearization. At each stage the solution of the linear programming problem is obtained by a Revised Dual Simplex or a Multiplex algorithm [13]. The choice between these algorithms depends on the form of the deviation function. The deviation function given in the mathematical form of the template can be implemented in two ways:
1. In the preemptive form, as a lexicographic minimum of the goal deviation variables. In this case we use the Multiplex algorithm.
2. In an Archimedean form, as a weighted function of the goal deviation variables. This reduces the formulation of the template to a traditional single-objective optimization problem, and we use the Revised Dual Simplex or the Multiplex algorithm.
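The central difference derivative evaluation mentioned above can be sketched as follows. This is a generic illustration with our own names and step size, not the ALP implementation:

```python
def central_difference(g, x, j, h=1e-4):
    """First and diagonal second derivative of g with respect to the
    j-th design variable, from three evaluations of g around x."""
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    g0, gp, gm = g(list(x)), g(xp), g(xm)
    first = (gp - gm) / (2.0 * h)            # central difference
    second = (gp - 2.0 * g0 + gm) / (h * h)  # diagonal term only
    return first, second

# g(X) = X1^2 + 3*X2 at X = (2, 1): dg/dX1 = 4, d2g/dX1^2 = 2
g = lambda x: x[0] ** 2 + 3.0 * x[1]
first, second = central_difference(g, [2.0, 1.0], j=0)
print(first, second)  # approximately 4 and 2
```

Only three function evaluations per variable are needed, which matters when each evaluation of g involves a full design analysis.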
The choice of the formulation depends on the maturity of the template. In the solution process, checks are made to determine whether or not to continue. When the preemptive form of the deviation function is used, the check is for stationarity of the system variables from one synthesis iteration to the next. Once a solution has been obtained, a post-solution analysis can be performed. For linear DSPs the scheme described in [23] is used. Unlike the sequential linearization algorithms whose problems are pointed out in [24], the ALP algorithm has no difficulty dealing with the nonconvex constraints that invariably occur in real-world engineering design.

The function g_i(X) is normalized and is therefore nondimensional. This simplifies the task of solving a compromise DSP with constraints expressed in different physical units. If C_i(X) and D_i(X) represent the capability of and the demand placed on a system in mode i, then the system constraint is

C_i(X) ≥ D_i(X)   or   C_i(X) - D_i(X) ≥ 0.

In normalized, dimensionless form the preceding equation becomes

(C_i(X) - D_i(X)) / (C_i(X) + D_i(X)) ≥ 0

and hence

g_i(X) = (C_i(X) - D_i(X)) / (C_i(X) + D_i(X)).

If

r_i(X) = C_i(X)/D_i(X)  for a system constraint, and
r_i(X) = A_i(X)/G_i     for a system goal,

then

g_i(X) = (r_i(X) - 1) / (r_i(X) + 1).

In a compromise DSP, a nonlinear system constraint is represented as

(r_i(X) - 1) / (r_i(X) + 1) ≥ 0     {9a}

and a nonlinear system goal as

(r_i(X) - 1) / (r_i(X) + 1) + d_i^- - d_i^+ = T_i     {9b}

where T_i is the target value to be achieved (see equation {7}). The preceding is the second important feature of the algorithm. An additional advantage of the algorithm is the ability to obtain valuable sensitivity information. The latter is particularly important for establishing the validity of DSPs that make use of both hard and soft information, and for exploring the vicinity of the solution point. This sensitivity information, however, is valid only for the linear problem. For nonlinear problems, and for those with both linear and nonlinear constraints, the sensitivity information is valid only for the final design and only for small changes in the variables.

4.2 Approximation of a Nonlinear Compromise Decision Support Problem

In the ALP algorithm, the system constraints are modeled as shown in equation {9a}. In this subsection we introduce our approach to linearizing nonlinear system constraints of the form g(X) ≥ 0, unless otherwise stated. The process for the approximation of system goals (see equation {9b}) is identical, but it involves some extra steps because of the deviation variables. First, the deviation variables are temporarily removed from the nonlinear system goal formulation, changing it into a constraint. Next, the goal is approximated like a constraint. Finally, the deviation variables are added back, converting the approximation into a goal again. The concept of the approximation remains the same, and therefore in the following we describe our approach for system constraints only.
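The normalized constraint and goal forms of equations {9a} and {9b} can be sketched as follows. This is a minimal illustration with our own names; the choice T_i = 0 in the goal is ours, not from the chapter:

```python
def normalized_constraint(capability, demand):
    """g_i(X) = (C_i - D_i)/(C_i + D_i): dimensionless, and
    nonnegative exactly when the capability meets the demand.
    Equivalent to (r - 1)/(r + 1) with r = C_i/D_i, as in {9a}."""
    return (capability - demand) / (capability + demand)

def normalized_goal(achievement, target):
    """Equation {9b} with an assumed T_i = 0: returns g_i(X) and the
    deviation pair (d_i^-, d_i^+) that restores the equality."""
    r = achievement / target
    g = (r - 1.0) / (r + 1.0)
    return g, (max(-g, 0.0), max(g, 0.0))  # (d_i^-, d_i^+)

# A stress-like constraint: capability 250 units against a demand of 200
print(normalized_constraint(250.0, 200.0))  # 50/450, positive: satisfied
```

Because g_i is bounded between -1 and +1 whatever the physical units, constraints on stress, weight, and geometry can be mixed in one template without scaling problems.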
The traditional Stewart and Griffith method uses the first-order Taylor series expansion of the constraint function g(X) about a point X^0:

g(X) = g(X^0) + (X_1 - X_1^0)(∂g/∂X_1)_0 + (X_2 - X_2^0)(∂g/∂X_2)_0 + ... + (X_n - X_n^0)(∂g/∂X_n)_0     {10}

The disadvantage of using a first-order approximation is that if all the derivatives (∂g/∂X_j)_0 are small, or if the constraint function g(X) has a high degree of curvature, then the tangent plane is a poor representation of g(X), and the resulting linearized form of g(X) (the line labelled "1st Order" in Figure 3) is far removed from the actual g(X). Better results can be obtained by retaining the second-order terms of the Taylor series expansion, i.e.,
g(X) = g(X^0) + Σ_{j=1}^{n} (X_j - X_j^0)(∂g/∂X_j)_0 + (1/2) Σ_{j=1}^{n} Σ_{k=1}^{n} (X_j - X_j^0)(X_k - X_k^0)(∂²g/∂X_j∂X_k)_0     {11}

FIGURE 3 -- FIRST AND SECOND-ORDER CONSTRAINT LINEARIZATION [8]

In equation {11}, for n variables, n first-order and n x n second-order derivatives need to be evaluated. This requires a great deal of computation, particularly for large, complex engineering systems. It has been found quite adequate to retain only the diagonal second-order terms [8], since these are usually larger than the mixed second-order derivatives 4. Equation {11} then reduces to

g(X) = g(X^0) + Σ_{j=1}^{n} (X_j - X_j^0)(∂g/∂X_j)_0 + (1/2) Σ_{j=1}^{n} (X_j - X_j^0)²(∂²g/∂X_j²)_0     {12}

In this case, for n variables, n first-order and only n second-order derivatives need to be evaluated. This quadratic representation of g(X) uses a parabola (the dashed lines in Figure 3) to find points B* and C*, which are approximations to the true "zero intercept" points B and C, that is, the points of intersection with the g(X) = 0 plane. This approximation means that the linearized constraint passes through the points B* and C* instead of the true (but unknown) points B and C. For each design variable, the symbol X_j^* denotes the value of X_j at this approximate "zero intercept" point. Likewise, the plane (or "hyperplane" when there are more than two design variables) AB*C* is used to represent the function g(X). In each direction, the gradient of this hyperplane is the slope of the secant line (AB* in Figure 3).

4 This observation is offered as a heuristic.
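The diagonal second-order approximation of equation {12} can be sketched as follows (an illustrative snippet with our own names, not the ALP code):

```python
def diagonal_second_order(g0, grad, diag_hess, x0, x):
    """Equation {12}: approximate g(X) from the value, gradient, and
    only the diagonal second derivatives of g at the point X0, so that
    just n (rather than n*n) second-order derivatives are required."""
    return g0 + sum(gj * (xj - x0j) + 0.5 * hj * (xj - x0j) ** 2
                    for gj, hj, xj, x0j in zip(grad, diag_hess, x, x0))

# g(X) = X1^2 + X2^2 about X0 = (1, 1): g0 = 2, grad = (2, 2),
# diagonal Hessian = (2, 2). With no mixed terms, {12} is exact:
print(diagonal_second_order(2.0, [2.0, 2.0], [2.0, 2.0],
                            [1.0, 1.0], [2.0, 3.0]))  # 13.0
```

For functions with significant cross-curvature the mixed terms dropped here do introduce error, which is why the chapter offers the adequacy of the diagonal terms as a heuristic.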
The gradient, therefore, is

(∂g/∂X_j)_0^* = -g(X^0) / (X_j^* - X_j^0)     {13}

From equation {12}, for the j-th variable direction, the quadratic to be solved to determine (X_j - X_j^0) is

g(X^0) + (X_j - X_j^0)(∂g/∂X_j)_0 + (1/2)(X_j - X_j^0)²(∂²g/∂X_j²)_0 = 0     {14}

When the quadratic has real roots, the correct root is the one that gives the intercept X_j^* closest to X_j^0. If the constraint/goal is normalized as in equations {9a} and {9b} (which gives the constraint/goal a convex shape), then the positive sign will always be selected in the quadratic formula; this is a key simplification. For completeness, however, a test is made in the algorithm to find the correct root.

(X_j^* - X_j^0) = [ -(∂g/∂X_j)_0 ± √( (∂g/∂X_j)_0² - 2 g(X^0)(∂²g/∂X_j²)_0 ) ] / (∂²g/∂X_j²)_0     {15}

From equations {13} and {15}, the gradient is derived as

(∂g/∂X_j)_0^* = -g(X^0)(∂²g/∂X_j²)_0 / [ -(∂g/∂X_j)_0 + √( (∂g/∂X_j)_0² - 2 g(X^0)(∂²g/∂X_j²)_0 ) ]     {16a}

or

(∂g/∂X_j)_0^* = -g(X^0)(∂²g/∂X_j²)_0 / [ -(∂g/∂X_j)_0 - √( (∂g/∂X_j)_0² - 2 g(X^0)(∂²g/∂X_j²)_0 ) ]     {16b}

depending on which equation ({16a} or {16b}) gives the smallest absolute value for the gradient, and therefore the best approximation (i.e., the smallest absolute value for X_j^* -
Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of
More informationσ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =
Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,
More informationGENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION
Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124
More informationVolume Author/Editor: Gregory K. Ingram, John F. Kain, and J. Royce Ginn. Volume URL:
This PDF is a selection from an out-of-print volume from the National Bureau of Economic Research Volume Title: The Detroit Prototype of the NBER Urban Simulation Model Volume Author/Editor: Gregory K.
More informationLecture V. Numerical Optimization
Lecture V Numerical Optimization Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Numerical Optimization p. 1 /19 Isomorphism I We describe minimization problems: to maximize
More informationMath-2A Lesson 13-3 (Analyzing Functions, Systems of Equations and Inequalities) Which functions are symmetric about the y-axis?
Math-A Lesson 13-3 (Analyzing Functions, Systems of Equations and Inequalities) Which functions are symmetric about the y-axis? f ( x) x x x x x x 3 3 ( x) x We call functions that are symmetric about
More informationNumerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems
1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of
More informationA Robust Controller for Scalar Autonomous Optimal Control Problems
A Robust Controller for Scalar Autonomous Optimal Control Problems S. H. Lam 1 Department of Mechanical and Aerospace Engineering Princeton University, Princeton, NJ 08544 lam@princeton.edu Abstract Is
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationHybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5].
Hybrid particle swarm algorithm for solving nonlinear constraint optimization problems BINGQIN QIAO, XIAOMING CHANG Computers and Software College Taiyuan University of Technology Department of Economic
More informationOptimization Methods
Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More informationChapter 1A -- Real Numbers. iff. Math Symbols: Sets of Numbers
Fry Texas A&M University! Fall 2016! Math 150 Notes! Section 1A! Page 1 Chapter 1A -- Real Numbers Math Symbols: iff or Example: Let A = {2, 4, 6, 8, 10, 12, 14, 16,...} and let B = {3, 6, 9, 12, 15, 18,
More informationDiscriminative Direction for Kernel Classifiers
Discriminative Direction for Kernel Classifiers Polina Golland Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 polina@ai.mit.edu Abstract In many scientific and engineering
More informationStochastic Optimization Methods
Stochastic Optimization Methods Kurt Marti Stochastic Optimization Methods With 14 Figures 4y Springer Univ. Professor Dr. sc. math. Kurt Marti Federal Armed Forces University Munich Aero-Space Engineering
More informationSupport Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Support Vector Machines CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 A Linearly Separable Problem Consider the binary classification
More informationApproximation Metrics for Discrete and Continuous Systems
University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science May 2007 Approximation Metrics for Discrete Continuous Systems Antoine Girard University
More informationSimplex Method for LP (II)
Simplex Method for LP (II) Xiaoxi Li Wuhan University Sept. 27, 2017 (week 4) Operations Research (Li, X.) Simplex Method for LP (II) Sept. 27, 2017 (week 4) 1 / 31 Organization of this lecture Contents:
More informationMultiple Criteria Optimization: Some Introductory Topics
Multiple Criteria Optimization: Some Introductory Topics Ralph E. Steuer Department of Banking & Finance University of Georgia Athens, Georgia 30602-6253 USA Finland 2010 1 rsteuer@uga.edu Finland 2010
More informationTowards a General Theory of Non-Cooperative Computation
Towards a General Theory of Non-Cooperative Computation (Extended Abstract) Robert McGrew, Ryan Porter, and Yoav Shoham Stanford University {bmcgrew,rwporter,shoham}@cs.stanford.edu Abstract We generalize
More informationCopyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems
Chapter One Introduction 1.1 Large-Scale Interconnected Dynamical Systems Modern complex dynamical systems 1 are highly interconnected and mutually interdependent, both physically and through a multitude
More informationLecture Notes on Inductive Definitions
Lecture Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 August 28, 2003 These supplementary notes review the notion of an inductive definition and give
More informationLINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:
LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides
More informationA Parallel Evolutionary Approach to Multi-objective Optimization
A Parallel Evolutionary Approach to Multi-objective Optimization Xiang Feng Francis C.M. Lau Department of Computer Science, The University of Hong Kong, Hong Kong Abstract Evolutionary algorithms have
More informationA grid model for the design, coordination and dimensional optimization in architecture
A grid model for the design, coordination and dimensional optimization in architecture D.Léonard 1 and O. Malcurat 2 C.R.A.I. (Centre de Recherche en Architecture et Ingénierie) School of Architecture
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More informationNonlinear Programming (NLP)
Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume
More informationPractical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software
Practical Algebra A Step-by-step Approach Brought to you by Softmath, producers of Algebrator Software 2 Algebra e-book Table of Contents Chapter 1 Algebraic expressions 5 1 Collecting... like terms 5
More informationA booklet Mathematical Formulae and Statistical Tables might be needed for some questions.
Paper Reference(s) 6663/01 Edexcel GCE Core Mathematics C1 Advanced Subsidiary Quadratics Calculators may NOT be used for these questions. Information for Candidates A booklet Mathematical Formulae and
More informationFundamental Theorems of Optimization
Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave
More information1 Overview. 2 Learning from Experts. 2.1 Defining a meaningful benchmark. AM 221: Advanced Optimization Spring 2016
AM 1: Advanced Optimization Spring 016 Prof. Yaron Singer Lecture 11 March 3rd 1 Overview In this lecture we will introduce the notion of online convex optimization. This is an extremely useful framework
More informationFundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15
Fundamentals of Operations Research Prof. G. Srinivasan Indian Institute of Technology Madras Lecture No. # 15 Transportation Problem - Other Issues Assignment Problem - Introduction In the last lecture
More informationFurther Applications of the Gabbay-Rodrigues Iteration Schema in Argumentation and Revision Theories
Further Applications of the Gabbay-Rodrigues Iteration Schema in Argumentation and Revision Theories D. M. Gabbay and O. Rodrigues 2 Department of Informatics, King s College London, Bar Ilan University,
More informationOn equivalent characterizations of convexity of functions
International Journal of Mathematical Education in Science and Technology, Vol. 44, No. 3, 2013, 410 417 (author s reprint) On equivalent characterizations of convexity of functions Eleftherios Gkioulekas
More informationDate: July 5, Contents
2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........
More informationDevelopment of a Cartographic Expert System
Development of a Cartographic Expert System Research Team Lysandros Tsoulos, Associate Professor, NTUA Constantinos Stefanakis, Dipl. Eng, M.App.Sci., PhD 1. Introduction Cartographic design and production
More informationDESIGN AND ANALYSIS OF ALGORITHMS. Unit 6 Chapter 17 TRACTABLE AND NON-TRACTABLE PROBLEMS
DESIGN AND ANALYSIS OF ALGORITHMS Unit 6 Chapter 17 TRACTABLE AND NON-TRACTABLE PROBLEMS http://milanvachhani.blogspot.in COMPLEXITY FOR THE IMPATIENT You are a senior software engineer in a large software
More informationUnit 6 Chapter 17 TRACTABLE AND NON-TRACTABLE PROBLEMS
DESIGN AND ANALYSIS OF ALGORITHMS Unit 6 Chapter 17 TRACTABLE AND NON-TRACTABLE PROBLEMS http://milanvachhani.blogspot.in COMPLEXITY FOR THE IMPATIENT You are a senior software engineer in a large software
More informationNear-Potential Games: Geometry and Dynamics
Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics
More informationApplications of Differentiation
MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Module9 7 Introduction Applications of to Matrices Differentiation y = x(x 1)(x 2) d 2
More informationCharacterization of Semantics for Argument Systems
Characterization of Semantics for Argument Systems Philippe Besnard and Sylvie Doutre IRIT Université Paul Sabatier 118, route de Narbonne 31062 Toulouse Cedex 4 France besnard, doutre}@irit.fr Abstract
More informationRESPONSE SURFACE METHODS FOR STOCHASTIC STRUCTURAL OPTIMIZATION
Meccanica dei Materiali e delle Strutture Vol. VI (2016), no.1, pp. 99-106 ISSN: 2035-679X Dipartimento di Ingegneria Civile, Ambientale, Aerospaziale, Dei Materiali DICAM RESPONSE SURFACE METHODS FOR
More information