
UNIVERSITY OF CALGARY

A Method for Stationary Analysis and Control during Transience in Multi-State Stochastic Manufacturing Systems

by

Alireza Fazlirad

A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

GRADUATE PROGRAM IN MECHANICAL AND MANUFACTURING ENGINEERING

CALGARY, ALBERTA

APRIL, 2015

© ALIREZA FAZLIRAD 2015

Abstract

Ever-increasing complexity and swift changes in market and consumer requirements have become the defining traits of modern manufacturing. In response, the study of manufacturing requires models and control strategies that reflect this complexity and accommodate market changes and consumer needs. As a result, performance analysis and control of complex stochastic manufacturing systems, and the investigation of their transient behavior, identified as a key research area, require further in-depth study. This thesis investigates the design and operation of manufacturing systems by adopting a Markov chain model for stochastic multi-state manufacturing systems. Different applications of the model, including a supply and demand system, are introduced, and a methodology is developed and verified that determines the solution for its steady-state performance. Methods proposed to date for stochastic or deterministic optimal control of manufacturing systems are not conducive to the analysis or control of transient behavior. Applying model predictive control techniques directly to the Markov chain supply and demand model is shown to be a viable alternative for control and for the study of transient behavior. By representing the initiation of production as probabilities within the Markov chain, the system is shown to be controllable to specific expected performance levels during transient operation. Taking advantage of the features of Markov chains, detailed analysis shows that control improves the transient behavior of the system.

Acknowledgements

I would like to thank my supervisor Dr. Theodor Freiheit for his support over the years. Without his insight and input, this thesis would not have been possible. I would like to thank Dr. Qiao Sun for her support, advice and encouragement. I would like to thank my cousin and friend Amir for the material, mental and emotional support he selflessly gave me throughout. Last but not least, I would like to thank my family, whose love was the light that shone through the darker times.

Dedication

To my father Hossein, whose memory lives with me every day of my life, and to my mother Hamideh, to whom I owe everything good about me.

Table of Contents

Abstract
Acknowledgements
Dedication
Table of Contents
List of Tables
List of Figures and Illustrations
List of Symbols, Abbreviations and Nomenclature

CHAPTER ONE: INTRODUCTION
  Scope and objectives
  Manufacturing systems performance
  Manufacturing systems control
  Research Questions
  Thesis outline

CHAPTER TWO: LITERATURE REVIEW
  Models of manufacturing systems with performance estimation
  Two machine models
  Multi-state two machine models
  Serial lines
  Complex systems
  Manufacturing systems control
  Transient analysis

CHAPTER THREE: GENERALIZED SOLUTION METHODOLOGY FOR STEADY STATE PROBABILITIES OF TWO-MACHINE, ONE BUFFER PROBLEMS WITH MULTIPLE SEQUENTIAL FAILURES
  Model development
  An introduction to two machine models of manufacturing systems performance
  A simple two-machine Markov chain model
  An introduction to machines with multiple failure modes
  Two machine model with multiple parallel failures
  Two machine model with multiple sequential failures
  Model and notation
  Transition equations
  Solution methodology
  Initial solution to internal equations
  Solution to the characteristic polynomial equation
  Solution to the boundary equations and coefficients
  Performance measures
  Solution algorithm
  Numerical results
  System with four upstream and five downstream poles
  System with five upstream and four downstream poles
  System with five identical upstream and downstream failed states
  Concluding remarks

CHAPTER FOUR: GENERALIZED SOLUTION METHODOLOGY FOR STEADY STATE PROBABILITIES OF TWO-MACHINE, ONE BUFFER PROBLEMS WITH MULTIPLE GENERAL FAILURES
  Model development
  Steady-state equations
  Solution methodology
  Initial solution to internal equations
  Solution to the simultaneous eigenvalue equations
  Approximate Poisson distribution
  Wait/repair state with a Poisson demand
  Different operational mean times with Poisson demand
  Mixed products with Poisson demand
  General transitions
  Poles and points of singularity
  Real roots and intersections
  Complex roots
  Tolerances and avoiding duplicate roots
  Solution to the boundary equations and Cs
  Performance measures
  Complete solution algorithm
  Identical machines
  Numerical results
  General transitions
  Approximate Poisson distribution
  Upstream Poisson rate 2, downstream Poisson rate
  Upstream Poisson rate 4, downstream Poisson rate
  Upstream Poisson rate 2.6, downstream Poisson rate
  Wait and repair states with Poisson demand
  Different operational mean times with Poisson demand
  Mixed products with Poisson demand
  Mixed products upstream and downstream
  Multiple sequential failures solution from general failures
  Comparison of computer run time between exact and analytical solutions
  Effect of buffer capacity on buffer level and service rate
  Concluding remarks

CHAPTER FIVE: APPLICATION OF MODEL PREDICTIVE CONTROL TO A MARKOV-CHAIN BASED STOCHASTIC TRANSIENT MANUFACTURING MODEL
  Basic model predictive control formulation
  Applying model predictive control to a Markov chain-based manufacturing model
  The Markov chain model
  Expected number of visits to Markov chain states
  Basic model predictive control formulation
  Output equations
  Cost function
  Constraints
  Solution process
  Numerical results
  Control to set points
  Distribution of outputs based on starting states
  Transient properties
  Settling time
  Demand loss
  Cumulative service rate error
  Concluding remarks

CHAPTER SIX: CONCLUSION
  Multiple sequential failures system contribution
  Multiple general failure systems contribution and limitations
  Transient control of a Markov chain model contribution and limitations
  Future work

REFERENCES

APPENDIX A: MARKOV CHAIN TRANSITION EQUATIONS, MULTIPLE SEQUENTIAL FAILURES
APPENDIX B: DERIVATION OF THE CHARACTERISTIC POLYNOMIAL, MULTIPLE SEQUENTIAL FAILURES
APPENDIX C: EQUATIONS FOR COEFFICIENTS, MULTIPLE SEQUENTIAL FAILURES
APPENDIX D: MARKOV CHAIN TRANSITION EQUATIONS, MULTIPLE GENERAL FAILURES
APPENDIX E: MATRIX REPRESENTATION OF INTERNAL EQUATIONS, MULTIPLE SEQUENTIAL FAILURES
APPENDIX F: DERIVATION OF SIMULTANEOUS EIGENVALUE EQUATIONS, MULTIPLE SEQUENTIAL FAILURES
APPENDIX G: DERIVATION OF COEFFICIENTS OF LINEAR EQUATIONS FOR BOUNDARY PROBABILITIES
APPENDIX H: MARKOV CHAIN EQUATIONS, WAIT REPAIR MODEL WITH POISSON DEMAND
APPENDIX I: DERIVATION OF PREDICTIVE STATE-SPACE EQUATIONS

List of Tables

Table 3-1- Transient states
Table 3-2- Location and existence of real roots for a system with multiple sequential failure modes
Table 3-3- Transition probabilities for a system with multiple sequential failures with four upstream and four downstream failed states
Table 3-4- Poles and roots of characteristic polynomial
Table 3-5- Comparison of analytical and numerical performance measures
Table 3-6- Transition probabilities for a system with four upstream and five downstream failed states
Table 3-7- Roots and poles of characteristic polynomial
Table 3-8- Comparison of analytical and numerical performance measures for the system of Table
Table 3-9- System probabilities for a system with five upstream and four downstream failed states
Table Roots and poles of characteristic polynomial
Table Analytical and numerically found performance measures for the system of Table
Table Transition probabilities for a system with five identical failed states
Table Roots and poles of characteristic polynomial
Table Analytical and direct solution performance measures for the system of Table
Table 4-1- Transient states
Table 4-2- Transition probabilities and properties for a system with general transitions
Table 4-3- Roots for the system of Table
Table 4-4- Analytical and direct comparison of performance measures for system of Table
Table 4-5- Comparison of results from analytical solution and simple discrete event simulation
Table 4-6- Some system properties for a general transitions system with a negative downstream pole
Table 4-7- Real and complex roots of eigenvalue equations
Table 4-8- Comparison of performance measures found analytically and directly from the Markov chain
Table 4-9- System parameters for a system with identical machines and general transitions
Table Roots of simultaneous eigenvalue equations
Table Comparison of performance measures
Table Transition probabilities for a system with Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate
Table System properties for Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate
Table Roots of simultaneous eigenvalue equations
Table Comparison of analytical and direct solutions to performance measures
Table System properties for a system with upstream Poisson rate 4, downstream Poisson rate
Table Real and complex roots of simultaneous eigenvalue equations
Table Comparison of steady-state performance measures
Table System parameters for upstream Poisson rate 2.6, downstream Poisson rate
Table Roots of simultaneous eigenvalue equations
Table Steady-state performance measures from analytical and direct Markov chain solutions
Table Properties for a manufacturing system with a wait/repair upstream machine and downstream Poisson demand
Table Roots of simultaneous eigenvalue equations for a system with wait/repair upstream and Poisson downstream
Table Steady-state performance measures
Table Properties of a Markov chain system with an upstream machine with different operational mean times and a Poisson demand downstream
Table Roots of simultaneous eigenvalue equations
Table Performance measures found from analytical and direct Markov chain solution methods
Table Roots of simultaneous eigenvalue equations
Table Properties of system with mixed products upstream and Poisson downstream
Table Performance measures found from analytical and direct Markov chain solution methods
Table Characteristics of example system with mixed products upstream & downstream
Table Real and complex roots of simultaneous eigenvalue equations
Table Comparison of performance measures
Table Comparison of performance measures from methodologies of general failures and sequential failures
Table System properties of an up/down machine with Poisson demand and varying buffer capacity
Table 5-1- Input values and parameters for buffer and service rate control problems
Table 5-2- Input values and system parameters, comparison of controlled and uncontrolled settling times when controlling buffer and service rate
Table 5-3- System and control settings exploring the effect of buffer size on settling times
Table 5-4- System and control settings for the study of effect of efficiency on buffer settling times
Table 5-5- Comparison of controlled and equivalent uncontrolled cumulative service rate error when controlling buffer
Table A-1- Lower boundary equations
Table A-2- Internal equations
Table A-3- Upper boundary transition equations
Table D-1- Lower boundary equations
Table D-2- Internal transition equations
Table D-3- Upper boundary equations
Table H-1- Transient states
Table H-2- Lower boundary transition equations
Table H-3- Internal equations
Table H-4- Upper boundary transition equations

List of Figures and Illustrations

Figure 1-1- The basic two-machine model and its Markov chain representation of states
Figure 3-1- A schematic model of the two machine one buffer manufacturing system
Figure 3-2- A simple two-state Markov chain model of a machine
Figure 3-3- Markov chain model of a two machine system with multiple parallel failure states
Figure 3-4- Markov chain of machine states and transition probabilities for upstream and downstream machines in a two machine system with multiple sequential failure modes
Figure 3-5- Example plot of the characteristic polynomial with four upstream failed states and four downstream failed states
Figure 3-6- Example plot of the characteristic polynomial with five upstream failed states and five downstream failed states
Figure 3-7- Algorithm flowchart solving roots of characteristic polynomial
Figure 3-8- Steady-state probabilities calculated from the analytical method
Figure 3-9- Steady-state probabilities calculated from a direct numerical Markov chain solution
Figure Percentage of absolute difference in probabilities calculated from analytical and numerical methods
Figure Pareto chart of maximum differences between analytical method and direct Markov chain solution for the system of Table
Figure Steady-state probabilities calculated analytically for the system of Table
Figure Pareto chart of the difference between the analytical method and direct numerical Markov chain solution
Figure Steady-state probabilities calculated analytically for the system of Table
Figure Steady-state probabilities calculated numerically for the problem of Table
Figure 4-1- Markov chain model of a two-machine system with multiple general failures
Figure 4-2- Upstream and downstream Markov chains representing Poisson distributions
Figure 4-3- Markov chain of system with wait and repair states and Poisson approximated demand
Figure 4-4- Upstream machine with different operational mean times and Poisson downstream states
Figure 4-5- System with mixed products upstream, Poisson demand downstream
Figure 4-6- Real upstream and inverse downstream eigenvalues as a function of A for a system with general transitions. Upstream pole is smaller than downstream pole
Figure 4-7- Real upstream and inverse downstream eigenvalues as a function of A for a system with general transitions. Upstream pole is larger than downstream pole
Figure 4-8- Eigenvalues for a system with mixed products upstream and back and forth transitions downstream. Upstream pole is smaller than downstream pole
Figure 4-9- Eigenvalues for a system with mixed products upstream and back and forth transitions downstream. Upstream pole is larger than the negative downstream pole
Figure Upstream and inverse downstream eigenvalues for a system with Poisson-approximating upstream and downstream machines
Figure Simplified algorithm flowchart for real roots and intersections
Figure Simplified flowchart of complex roots solution process
Figure Steady-state probabilities for the system of Table
Figure Pareto chart comparison of direct and analytical steady-state probabilities in Table
Figure Steady-state probabilities found solving the system of Table
Figure Pareto chart of difference between analytical and directly found steady-state probabilities
Figure Analytical steady-state probabilities for a system with identical machines and general transitions
Figure Upstream and inverse downstream eigenvalues for a problem with Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate
Figure Pareto chart of difference between analytical and direct methods
Figure Upstream and inverse downstream eigenvalues for a system with upstream Poisson rate 4, downstream Poisson rate
Figure Pareto chart of difference between the analytical and direct Markov chain solution methods
Figure Contour plot of a function of real and imaginary components for 5th upstream and 3rd inverse downstream eigenvalues (upstream Poisson rate 2.6, downstream Poisson rate 2.8)
Figure Contour plot of a function of real and imaginary components for 6th upstream and 5th inverse downstream eigenvalues (upstream Poisson rate 3.8, downstream Poisson rate 5). Root is slightly off the circle
Figure Contour plot of a function of real and imaginary components for 2nd upstream and 5th inverse downstream eigenvalues (upstream Poisson rate 2.6, downstream Poisson rate 5.2). Root is inside the circle
Figure Pareto chart of the difference between analytical and direct Markov chain solutions for a system with upstream Poisson rate 2.6 and downstream Poisson rate
Figure Pareto chart of errors between the analytical and direct Markov chain methods
Figure Pareto chart of maximum errors between direct and analytical solutions for machine states at all buffer levels
Figure Markov chain model for a system with both machines representing mixed products production
Figure Maximum errors between direct and analytical solutions for machine states at all buffer levels
Figure Comparison of computer run times for exact and analytical solutions as a function of buffer sizes
Figure Average service rate as a function of inventory capacity for a system with a simple up and down machine upstream and Poisson demand downstream
Figure Average inventory content as a function of inventory capacity
Figure 5-1- A simple illustration of model predictive control concept [48]
Figure 5-2- Markov chain model of a system with wait and repair machine upstream and Poisson demand downstream
Figure 5-3- Simplified flowchart of predictive control calculation process
Figure 5-4- Controlling system of Table 5-1 for an inventory level of
Figure 5-5- Development of control variables when controlling expected inventory
Figure 5-6- Controlling service rate to
Figure 5-7- Development of control variables when controlling service rate
Figure 5-8- Buffer level progress toward set point with three different starting states
Figure 5-9- Distribution of buffer levels based on starting states at time step t = 15. Target buffer level is
Figure Service rate progress toward set point at three different starting states
Figure Distribution of service rates based on starting states at time step t = 15. Target service rate is
Figure Comparison of controlled and uncontrolled settling times when controlling buffer levels
Figure Comparison of controlled and uncontrolled settling times when controlling service rates
Figure Effects of buffer size and set point on settling time
Figure Controlled buffer settling time against upstream efficiency at several failure probabilities
Figure Buffer settling time against upstream initial efficiency at several repair probabilities
Figure A comparison of demand loss across a range of demand rates between a controlled system and equivalent uncontrolled systems
Figure Average demand loss at different demand rates compared between a controlled system and equivalent uncontrolled systems
Figure Comparison of controlled and uncontrolled demand loss at several demand rates when controlling service rate
Figure Controlled and uncontrolled average demand loss when controlling service
Figure Effect of the ratio of buffer set point to size on demand loss when controlling buffer
Figure Controlled and uncontrolled cumulative service rate errors when controlling service rate

List of Symbols, Abbreviations and Nomenclature

n: Buffer (inventory) content
N: Buffer capacity
α_1: State of upstream machine
α_2: State of downstream machine
u_i: Non-operational state of upstream machine
d_j: Non-operational state of downstream machine
s: Number of upstream non-operational states
t: Number of downstream non-operational states
p_i^u: Probability of transition from the operational state to the upstream machine's ith non-operational state
p_j^d: Probability of transition from the operational state to the downstream machine's jth non-operational state
p_ij^u: Probability of transition between non-operational states i and j for the upstream machine
p_ij^d: Probability of transition between non-operational states i and j for the downstream machine
r_i^u: Probability of transition from the upstream machine's ith non-operational state to its operational state
r_j^d: Probability of transition from the downstream machine's jth non-operational state to its operational state
P^u: Row vector of p_i^u
P^d: Row vector of p_j^d
R^u: Column vector of r_i^u
R^d: Column vector of r_j^d
Z^u: Matrix of p_ij^u
Z^d: Matrix of p_ij^d
P_YY(n): Matrix of steady state probabilities with n parts in the buffer and both machines operational
P_ΔΔ(n): Matrix of steady state probabilities with n parts in the buffer and both machines non-operational
P_YΔ(n): Row vector of stationary probabilities of states with n parts in the buffer, upstream machine operational, downstream machine non-operational
P_ΔY(n): Column vector of stationary probabilities of states with n parts in the buffer, upstream machine non-operational, downstream machine operational

X, U_i, D_j, C: Constants for steady state probability calculation
Υ: Upstream machine matrix
Δ: Downstream machine matrix
u_1: Probability of entering wait state
u_2: Probability of remaining in wait state
p_u: Probability of entering repair state
r_u: Probability of returning from repair state to operational state
p_1^d: Probability of transition into first downstream non-operational state
p_ij^d: Probability of transition between non-operational downstream states
r_j^d: Probability of transition from non-operational downstream states to operational downstream state
k: Time step
X: Matrix of expected number of visits to all Markov chain states
X: Matrix of total number of visits to all Markov chain states
U: Control variables arranged in a matrix
u: Control variables in a vector
z: Output vector (inventory level or service rate)
Y: Measured outputs
C_z: Matrix of coefficients for state to output transformation
T: Markov chain transition matrix
H_p: Prediction horizon
H_u: Control horizon
B_0: Constant component of transition matrix decomposition
B_1: Coefficients of control variables in the transition matrix decomposition
…: Predictive values
U: Matrix of predictive change in control variables
B: Transition matrix with the last known control variable
I: Transition matrix replicated and tiled to size
C_zz: C_z matrix tiled and replicated
Z(k): Vector-matrix of predictive outputs
ΔU(k): Vector-matrix of predictive control matrices

E(k): System response if no change is made to control variables
r: Reference trajectory
Q: Weighing matrix for output
R: Weighing matrix for controls
V(k): Cost function
Z_d: Matrix of p_ij^d
F, F_1, f: Shorthand for constraint matrices and vectors
λ: Poisson parameter
μ: Demand rate
Δ: Demand loss
Δ̄: Average demand loss

Chapter One: Introduction

The importance of manufacturing in the modern era needs little emphasis. As manufacturing systems become increasingly complex with advancing technology, and the competition for on-time delivery of high-quality products intensifies, new models must be developed for complex systems, and new paradigms put forward for controlling these systems. This points to a need for a better understanding of manufacturing systems and for new control structures that respond to constantly changing customer requirements. This thesis is concerned with two related areas in the study of manufacturing. The first is production systems performance analysis, which seeks to quantify and analyze the performance of a production system, usually considering stochastic breakdowns. The second is the control of production, which seeks to maintain control over the stream of products to satisfy consumer demand while maintaining a certain level of performance.

1.1 Scope and objectives

This thesis seeks to unify two areas of manufacturing systems, that of performance analysis and that of production control, using a single model based on a Markov chain that addresses transients in production. To investigate the potential of this unification, a two-machine, multi-state Markov chain model is studied that can be interpreted as a supply and demand system. This overall goal leads to two specific objectives. The first objective is to find a solution to this multi-state Markov chain model such that manufacturing system performance measures can be characterized in a manner that is suitable for manufacturing systems analysis. In order to characterize and frame the transient response, a steady-state solution is also required. This solution is developed for two multiple-failure-state models.

The first is a simpler model which introduces the solution concept and methodology. The second extends that concept to a final, more complex model that can be used in control and transient analysis. The second objective is to extend the idea of model predictive control to a probabilistically evolving Markov chain built on the same two-machine multi-state Markov chain for which a steady-state solution was determined. This approach results in the identification of production rates to meet demand and provides a convenient model that retains information on system evolution such that transients can be better analyzed. The following two sections elaborate on these objectives.

1.1.1 Manufacturing systems performance

One part of the work in this thesis is focused on performance analysis of buffered two machine systems modelled by multi-state discrete time Markov chains. It considers two unreliable machines (subject to random breakdowns) with known processing times connected through a finite capacity buffer. For the purpose of performance analysis, it is usually assumed that parts arrive from an unlimited supply point upstream of the first machine, are processed by the upstream (first) machine, are transferred to the buffer, removed from the buffer and further processed by the second (downstream) machine, and exit the line to an unlimited storage point downstream of the line. This two-machine system is shown in Figure 1-1 along with a representation of the type of Markov chain structure used to model its behavior in this thesis.

Figure 1-1- The basic two-machine model and its Markov chain representation of states

To characterize the random breakdowns, the machines are assumed to have states. Machine states are usually assumed to be of the operational or failed ("up" or "down") type. In the simple case, each machine has two states: one operational and one failed. In more complicated cases, multiple states may be present. When the upstream machine is operational, it puts a part into the buffer. When the downstream machine is operational, it takes a part out of the buffer. An operational machine may become inoperable by entering a failed state. Transition between these machine states is a stochastic occurrence. If a machine goes from operational to failed, it is said that a failure has occurred; if it goes from failed to operational, it is said that a repair has occurred. The objective is to find performance levels in steady state. Performance levels include production rate and buffer level.

Since this system is modelled stochastically, production rate is the average number of parts produced by the system and buffer level is the average number of parts that are in the buffer. Before a solution for performance levels can be developed, several assumptions are required to make the problem tractable. These include assumptions on processing times, on the timing of state changes and the removal and addition of parts from the buffer, and on the specifics of failures. Markov chains have been found to be suitable for the analysis of two machine systems for two main reasons. First, Markov chains can represent systems that transition stochastically into different states. Second, Markov chains have the basic property that the system state probability depends only on the immediate past, and future state changes are independent of the history of system development. This is very useful in modelling how machines fail and get repaired. If the machines are modeled as Markov chains, long-term performance measures can be derived from the Markov chain system's steady-state probability distribution. Standard numerical techniques for the solution of Markov chain steady-state probabilities are not recommended here for two reasons. First, as the buffer becomes larger, the state-space of the Markov chain quickly becomes unmanageably large, resulting in protracted computational time. Second, in order to analyze manufacturing systems with more than two machines, algorithms have been developed that use the two-machine solution recursively. It is therefore widely accepted that solutions for performance analysis of two-machine systems must be highly efficient computationally and hence independent of buffer size. The objective is thus to develop an analytical solution for the steady-state Markov chain probabilities that is independent of buffer size, especially for alternative system configuration models.
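To make the link between a steady-state distribution and a performance measure concrete, the short sketch below builds the two-state chain of a single isolated machine and recovers its long-run availability both numerically and from the well-known closed form r/(p + r). The failure and repair probabilities are arbitrary illustrative values, not parameters taken from this thesis.

```python
import numpy as np

# Two-state chain of a single unreliable machine: failure probability p
# (up -> down) and repair probability r (down -> up). Illustrative values only.
p, r = 0.05, 0.20

# Row-stochastic transition matrix over the states [up, down].
P = np.array([[1.0 - p, p],
              [r, 1.0 - r]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary probabilities [up, down]:", pi)
print("long-run availability (numerical): ", pi[0])
print("closed form r / (p + r):           ", r / (p + r))
```

The systems studied in this thesis couple two such machines through a finite buffer, which is precisely what makes the joint state space, and hence a direct numerical solution of this kind, grow with the buffer capacity.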

1.1.2 Manufacturing systems control

The literature on manufacturing systems control is mostly limited to optimal control of the rate of production to meet demand. The great majority of models offered in this area belong to the stochastic optimal control literature and do not take into account the performance of the manufacturing systems being controlled. Furthermore, no information is made available on the evolution of the system under control, and therefore the transients of the system cannot be analyzed. Recognizing the importance of these issues, calls have been made to integrate the issues of control and performance analysis [1] and to study the transient behavior of manufacturing systems [2] as it relates to issues of performance. Therefore, the second part of this thesis examines both of these issues using the buffered two machine system of Section 1.1.1 as a foundation. In order to make the results intuitive, this buffered two machine system must be modified to represent a production-inventory-demand system for which a control policy must be developed that is capable of analyzing transient behavior. In order to characterize performance under control, control is applied directly to the Markov chain, interpreting certain transition probabilities within the Markov model as control variables. A major difference between the control effort in this thesis and the production control literature is that a control state is introduced in the Markov chain and the probabilities of transition in that state are interpreted as rates of production. This enables the analysis of performance as control is applied. In addition, the Markov chain supplies information on system evolution through time, which enables the study of transients using the same model as control is applied. Since the control model is built on the two-machine multi-state Markov chain, outputs are expected performance measures generated from Markov chain probabilities.

The control system therefore needs to be applicable to discrete time systems and to generate control decisions from the predicted future behavior of the system, based on expectations generated from the model. Model predictive control was found to be the method that could satisfy both of these requirements.

1.2 Research Questions

Based on the preceding discussion, this thesis addresses the following research questions:

- Can a solution independent of buffer size for performance analysis of stochastic buffered two-machine systems modelled by multi-state Markov chains be determined?
- Can a stochastic buffered two-machine system modelled by multi-state Markov chains be used to represent a production-inventory-demand system?
- How can control be directly applied to the Markov chain model of a production-inventory-demand system?
- Can transient behavior be identified, characterized, and analyzed in a multi-state Markov chain manufacturing system when control is applied?

1.3 Thesis outline

The remainder of this thesis is organized as follows. Chapter 2 introduces the literature on manufacturing systems, reviewing work most closely related to this thesis. It is divided into three sections. In the first section, two machine models of manufacturing systems are reviewed in detail. It is shown that the multi-state Markov chain models studied in this thesis are not addressed in the literature. A brief review is also conducted on the most important works extending the two machine performance solution to longer and more complicated systems, emphasizing the need for efficient solutions that are independent of buffer size and capable of being used in iterative algorithms.

In the second section, the literature on control of manufacturing systems is reviewed, emphasizing that there is a lack of work that addresses the issue of how a system performs under control. In the third section, the literature on transient analysis is reviewed, pointing to a general lack of research in the area and an absolute lack of research directed at transient analysis under a control strategy. In Chapter 3, a method is developed for the steady-state performance analysis of a buffered unreliable two-machine system modeled by a Markov chain that has multiple failed states that transition sequentially. It is shown that an analytical solution, independent of buffer size, can be developed which relies on a characteristic polynomial with real and complex roots. A solution method is offered to determine the roots of the polynomial, and examples are shown to demonstrate the application of the solution methodology. In Chapter 4, the same two machine system is studied with a more complicated Markov chain transition structure where all failed states can transition to and from one another. Extending the methods of Chapter 3, it is shown that an analytical solution can again be developed, this time involving the solution of two simultaneous eigenvalue equations that have real and complex roots. A solution to these eigenvalue equations is developed that can be made more involved if the problem requires it. Having found the solution to performance measures, numerical examples are provided to demonstrate how the model can represent practical manufacturing systems. Among these examples, a model is developed that represents a manufacturing plant as the upstream machine and a Poisson approximation of demand as the downstream machine. In Chapter 5, the model developed in Chapter 4 is represented as a production-inventory-demand system with a state that represents periods of controlled non-production, with transition probabilities that can be interpreted as production rates.

A control strategy is developed based on model predictive control that finds these transition probabilities (in effect controlling production rate) to meet demand at an optimal cost. It is shown that by applying control directly to the Markov chain, not only are target levels achieved, but information can be extracted from the system as it evolves under control, allowing transient behavior to be studied. It is shown that control action results in an improvement of transient behavior when compared with an uncontrolled system. Chapter 6 concludes and summarizes this work and offers some guidelines on continuing the research reported.

Chapter Two: Literature Review

This chapter reviews the manufacturing systems literature that is most closely related to the work reported in this thesis. For better classification, this review is divided into three sections. The first section addresses work on manufacturing system models capable of performance analysis, with a focus on Markov chain models. The second section reviews research on manufacturing system control, mostly in the stochastic optimal control area, and the third section looks at previous work on the study of transients in manufacturing systems. This review is limited to analytical and stochastic models; simulation and other numerical methods are not included.

2.1 Models of manufacturing systems with performance estimation

Estimation of performance is essential to the design, operation, and improvement of manufacturing systems. Perhaps the most important measure of performance in a manufacturing system is throughput (also called production rate). Efforts to estimate manufacturing line throughput date from the 1960s. However, exact analytical results exist only for the most basic of manufacturing lines, a system with two machines and a limited capacity buffer, and only a fraction of these two machine models have a closed-form expression for throughput. This section first outlines some of the two machine models of manufacturing analysis to which this thesis pertains. The review is limited to analytical and mostly stochastic models. A brief review follows of a small selection of methods that extend the two machine performance analysis to longer lines with multiple machines or to more complex arrangements of machines.

2.1.1 Two machine models

In order to review two machine manufacturing models, some categorization is required. In terms of the reliability models assumed, two machine models with throughput analysis have been categorized into discrete or continuous models [3]. Discrete machines (mostly also called synchronous) have the same cycle time, and machine states change at the beginning or end of the time cycle. They are, for the most part, represented by a geometric reliability model where the downtimes and uptimes are geometric random variables. Continuous models (also called asynchronous) have different cycle times and are represented by an exponential reliability model, where machine downtimes and uptimes are exponential random variables. Derivation of these uptimes and downtimes has been carried out in [4] and [5]. In the case of synchronous systems, processing times are known in advance, while asynchronous systems have known service rates. A third type of reliability model is proposed in [6], where machines are assumed to have an independent probability of being up or down at every time step, comparable to a Bernoulli random variable. Another classification is in the structure of machine breakdowns. Two machine models have been categorized as having operation-dependent failures or time-dependent failures. Operation-dependent failures occur only when the machine is processing a part and depend on the number of operations performed since its last repair. Time-dependent failures depend on the time elapsed since the last repair; therefore, a machine with such failure characteristics can break down even when it is not processing a part. Buzacott [7] is generally credited with having started the line of research into two machine performance analysis by developing a two machine model with the assumption of a geometric distribution for the times between failures and the time to repair, as well as time-dependent failures.

His model is based on the use of Markov chains describing the state of the machines as a random variable which, at each time step (scaled to processing time), is dependent on the state at the previous time step. The steady state production rate is then found as a closed-form function of failure and repair probabilities. In extending Buzacott's work in [7] to develop an operation-dependent failures model, Buzacott and Shanthikumar [8] modified the time-dependent failures assumption to prevent failure of machines when blocked or starved. Other assumptions remain the same. A closed-form expression for throughput can again be found in terms of failure and repair probabilities. Ignall and Silver [9] use the method of Buzacott [7] to obtain an approximate output rate for a two stage production line with one machine in each stage and offer the possibility of more than one machine per stage. Shanthikumar and Tien [10] include the possibility that parts can be scrapped when the stage (machine) processing those parts fails, by allowing probabilities for scrapping of the part or completion of the processing if the machine becomes operational again. This is an operation-dependent failures model that finds an exact analytical production rate. Jafari and Shanthikumar [11] address the same problem of possible scrapping of parts when a machine fails. However, instead of the usual geometric distribution of up- and down-times, they use a phase-type distribution of the up- and down-times of the two machines that can be used to approximate any type of distribution. Gershwin's discrete time, operation-dependent failure model [4], on which most of the subsequent work in this thesis is based, is a Markov chain representation of machine and buffer states. This model relies on steady-state system probabilities from which throughput and other performance measures can be extracted. Gershwin proposes an analytical technique to solve the Markov chain equations of steady-state probabilities. This is due mainly to the fact that with large buffers, a numerical solution of the Markov chain equations becomes computationally inefficient.

Throughput and buffer level, as well as starvation and blockage probabilities, to be defined in the following chapters, are found from this solution. This model has become the foundation for many subsequent works, with extensions to continuous machines, multiple machine states, and longer lines with several machines. Gershwin [4] also proposes a continuous model with operation-dependent failures where the mean time between failures and the mean time to repair, as well as machine service times, are all exponential random variables. The model is developed as a continuous-time, discrete-state Markov process. The solution is again obtained from balance equations derived from the Markov process and an analytical assumption on the form of the steady-state probability distribution. From the steady-state probabilities, performance measures are extracted. Continuous two machine models have also been proposed in non-Markovian structures. One example is the time-dependent failures model with Bernoulli machines, which offers a closed-form expression for throughput [12]. A survey of the different two machine models with performance analysis, and a comparison of the throughput found from each, is reported in [3]. Gebennini et al. [13] offer a new perspective on the two-machine one buffer system by introducing a restart policy aiming to reduce the blocking frequency of the first machine. This restart policy consists of forcing the first machine to remain idle (it cannot process parts) each time the buffer gets full, until it empties again. The basic solution process is the same as in [4]: developing an analytical solution to a Markov chain system. A more recent effort by Gebennini and Gershwin [14] focuses on the modeling of quality in the two machine models by introducing a waste model which represents machine states as quality states.

2.1.2 Multi-state two machine models

The models outlined so far consider only two modes (or states) for the machines: failed and operational. An important extension of these two-machine models is the introduction of multiple failure modes, where the machines are allowed to fail in different ways, representing practical situations that cannot be modeled with simple up and down systems. The importance of analyzing the performance of systems with multiple states is further emphasized by Li et al. [2], who detail future research topics that are of interest to automotive and other manufacturing industries given the level of complexity necessary for the descriptive models required in these industries. One way these multiple states are configured is through what is termed in this thesis parallel failed states. This configuration assumes one operational state and several failed states, with Markov chain transitions possible only between the operational state and each of the failed states. This model is analyzed by Tolio et al. [15]. It is a discrete-time model and each failure mode is characterized by failure and repair probabilities, generating specific mean times to repair and mean times to failure that are geometrically distributed. The solution is based on Gershwin's discrete single up and down model [4]. The authors explore the transition equations and make an assumption on the form of the steady-state probability distribution with unknown parameters that can be solved for by back substitution. Using the same approach and a very similar solution methodology, Levantesi, Matta and Tolio [16] analyzed the multiple parallel failed states problem in continuous time. The solution process is a direct extension of that for the simple up and down machine states. To generalize the work of Tolio et al. [15], an attempt has been reported in the literature [17] to analyze a two machine model with multiple states with general Markov chain transitions. This is a discrete system with multiple up and multiple down states where transition is possible between any two states.

The authors offer a method to model and characterize the solution methodology, but do not offer a solution. More recent efforts in the analysis of multi-state two machine systems have focused on approximating the movement of parts in the system as a continuous flow of fluids. This gives rise to a continuous time, discrete and continuous state Markov process, the steady-state distribution of which is found and used to derive performance measures. Gershwin and Tan, [18] and [19], analyze a continuous-flow system with two stages and one buffer with different processing rates associated with each stage. They use a solution methodology that is conceptually similar to [4] to analyze systems, including an example where each stage has a number of identical machines in parallel or in series. Tolio [20] and Tolio and Ratta [21] introduce the idea of generalized thresholds, i.e., buffer levels above or below which the system can behave differently. This model provides a method to analyze a wide range of two-machine systems, including batching machines, by introducing the notion of control by means of thresholds, which fits the proposal in [1] of unifying control and performance analysis of manufacturing systems.

2.1.3 Serial lines

One major application of the two machine models described so far is in the analysis of serial production lines (also called flow lines or transfer lines) with multiple machines. Since exact solutions for performance cannot be found, algorithms have been developed that use performance measures of the two-machine models to approximate performance measures for lines with multiple machines. These algorithmic methods fall within two general categories: aggregation and decomposition [2]. The concept of aggregation is detailed in [22] and [5], and summarized in [2].

It sequentially replaces every two machines of the line with a single aggregated machine that has the same throughput as the two machine system. The idea of decomposition is to break a multiple-machine serial line down into a series of two-machine systems. Described in detail by Dallery and Gershwin in [23] and Gershwin [4], decomposition is an iterative method that relies on isolating every two machine, one-buffer system in the line and creating a system of linear equations, the solution of which finds the line's performance measures.

2.1.4 Complex systems

More complex machine arrangements, including assembly lines, parallel systems, closed loops, and split and merge systems, have been analyzed using the two-machine models discussed earlier and variations of the two aggregation and decomposition algorithms. A survey of research in each area has been carried out by Li et al. [2] and earlier by Dallery and Gershwin [23]. Performance analysis of most manufacturing system models relies on the solution of a two-machine system, which is invoked in iterative procedures. As such, the two-machine models are often referred to as the building blocks of manufacturing systems analysis. It is therefore crucial that these models are solved in a way that is independent of the size of the buffer, to ensure computational efficiency and accuracy.

2.2 Manufacturing systems control

This section reviews the literature on control of manufacturing systems. Since there is a vast body of work in production planning that is generally categorized as production control, this review is limited to works that are most closely related to the topic addressed in this thesis: controlling the optimal rate of production of a manufacturing system to meet demand. As outlined below, these works are mostly in the optimal control domain, and the majority are stochastic optimal control problems.

In its most basic form, the problem of optimal control of production seeks to find a rate of production that minimizes cost while satisfying demand. Many variations exist in the assumptions and models for production, cost, and demand. Problem formulations may consider combinations of stochastic or deterministic production and/or demand, and costs associated with production, holding inventory, backlogged demand, or some combination thereof. The production system may consist of a single machine or multiple machines, where the machines may have single or multiple failed and operational states. The production system may also produce a single product or a number of products. A survey of the most important optimal and hierarchical controls in stochastic manufacturing systems appears in [24]. This survey classifies the form of the objective function, solution methodologies, the structure of the system being controlled, e.g., single versus multiple machines and/or failure modes, and methods to reduce the complexity of the structure, e.g., aggregation-disaggregation or replacing random processes in a manufacturing system by their averages and/or other moments. In all cases, optimization of a cost function was addressed through modification of the manufacturing system's production rate. The earliest formulation of the production planning problem has been attributed to Modigliani and Hohn [25]. They studied a convex production planning problem where a schedule is set in discrete time to satisfy demand requirements and minimize the total discounted cost of holding inventory and production. Sethi and Thompson [26] extended the deterministic optimal control problem to continuous time with an objective function penalizing deviations of both production and inventory from target levels. A rigorous mathematical analysis of the same problem is carried out in [27].

Akella and Kumar [28] further extended the problem to a system consisting of a single machine producing a single product with up and down states, controlling for costs associated with inventory surplus and backlog, but not for production. In all three of these papers, closed-form solutions were obtained for the problem. Kimemia and Gershwin [29] and Sharifnia [30] extended the problem to multiple machines or to allow for multiple failed states. Kimemia and Gershwin study a more general problem for a flexible manufacturing system with multiple workstations consisting of identical machines that produce a family of parts. A control hierarchy is proposed for deterministic demand and inventory surplus and backlog costs. As with the following papers, this work did not find a practical, closed-form solution due to the large dimension of the problem and therefore could not provide an optimal control policy. The authors instead propose suboptimal hierarchical control. A rigorous analysis of a similar problem appears in [4]. Sharifnia's work [30] was based on the hedging point policy of Kimemia and Gershwin and found all possible values of the hedging point by minimizing the average cost per unit time. Presman and Sethi [31] and Sethi et al. [32] added stochastic flowshops to the production control problem. Presman and Sethi discuss the optimality of average cost production control in a two-machine flowshop subject to state constraints. For a sufficiently large upper bound on the work-in-process, the authors prove that when the minimum average capacity of the machines is larger than the demand rate, the finiteness of the long-run average cost is ensured. Sethi et al. discuss a multiple-machine flowshop with internal buffers, with the cost of production as the cost function and no state constraints on the buffer. Using hierarchical control, they show that the problem can be approximated by a limiting problem where the machine availabilities are replaced by their equilibrium mean availability, and that the long-run average costs of the two problems converge.

However, only near-optimal controls can be established for the limiting problem. These and other average cost models are studied in detail by Sethi et al. [33]. Boukas [34] develops an asymptotically stable control law for a single machine system that produces multiple part types, with the assumption that production is inspected and a portion of the parts are rejected and a portion are returned. The machine is not stochastic, but the production rate is constrained. Departing from the objective of only controlling the manufacturing system, Gershwin [1] proposed a more rigorous application of controls for performance prediction when examining scheduling decisions. He formulated a dynamic programming problem where real-time scheduling policies (surplus-based and time-based) are introduced based on the control point policy, and explained the solution to the problem. Gershwin et al. [35] furthered this by incorporating random demand, studying a manufacturing firm that builds a product to stock to meet a stochastic Markovian demand. Using a continuous flow control model, as explained in the section on two machine models, they showed that the optimal production policy has a hedging point form, similar to the previous policies discussed. A rather different line of research has been pursued by Lefeber [36], Roset [37] and van den Berg et al. [38]. In these works, the flow of products through a manufacturing system is modelled as a compressible fluid, based on models for traffic flow. The significance of this work is that a model predictive controller in continuous time was proposed with observer-based output feedback. The concept of model predictive control has also been employed in this thesis. One aspect that has remained substantially unaddressed in the production system control literature is the analysis and control of transient behavior when the system operates under a control policy. When the solution to an optimal production control problem is possible, the rules offered for operating the production system are generally for steady-state behavior, without regard for its transient behavior.

In fact, only recently have models started to take into account system performance under a control policy, as suggested by Gershwin in [1], where a unified approach to control and performance estimation is recommended. Even then, no model has thus far been capable of addressing transient behavior under a control policy.

2.3 Transient analysis

The problem of transients in manufacturing has received very little attention. Li et al. cite transient analysis as a major research area within the general field of performance analysis [2]. They identify transient times, production losses, and factors that influence transient behaviour as having practical importance in modern manufacturing operations. The only works in the literature concerned with transient analysis are those of Narahri and Viswanadham [39], Mocanu [40], and Meerkov and Zhang [41], also covered in [5]. Narahri and Viswanadham outline several cases where transient analysis is of significance in buffer-less systems. They also illustrate the computation of times to absorption in Markovian models of manufacturing systems, as well as the cycle time distribution, as two ways to characterize transient behavior. Mocanu [40] studied the transient analysis of manufacturing systems by modeling the system as a stochastic fluid model, an emerging area with application in both telecommunications and industrial manufacturing. Meerkov and Zhang [41] analyze the transients of a manufacturing system in which machines are modeled by Bernoulli random variables, using mathematical analysis for a simple two machine system and discrete event simulation for larger systems.

39 work in process, as well as production losses during transients as two important metrics describing transient behavior. 20

40 Chapter Three: Generalized Solution Methodology for Steady State Probabilities of Two-Machine, One Buffer Problems with Multiple Sequential Failures
This chapter presents a generalized analytical solution methodology for evaluating the performance of manufacturing systems consisting of two unreliable machines and a finite capacity buffer, with multiple sequential failure modes for the machines. It builds on analytical methodologies solving, in various degrees of complexity, the problem of two unreliable machines and a finite capacity buffer to extend the problem to sequential failures, for which a solution does not currently exist in the literature.
3.1 Model development
In this section, a model is developed of two-machine manufacturing systems with multiple failure modes. Section 3.1.1 starts with an introductory discussion of the characteristics and assumptions of two-machine models. Section 3.1.2 explains the development of a basic two-state, two-machine model and the fundamental mathematical equations related to its solution. Section 3.1.3 introduces the concept of a two-machine system with multiple failure modes.
3.1.1 An introduction to two machine models of manufacturing systems performance
Manufacturing system models seek to find steady-state performance measures for systems consisting of two unreliable machines. These include the steady-state production rate and, if present, the mean buffer level, and in many cases, including the one presented in this thesis, the probabilities of starvation and blockage. They can be discrete or continuous time, and based on the specifics of the model, can consider time-dependent or operation-dependent failures. Time-dependent failures can occur at any time, even when the machine is not operating. Operation-dependent failures only occur when the machine is operating on a part; therefore, an operation-

41 dependent failure machine cannot break down when it is waiting for a part to arrive or waiting to deliver a part to its downstream machine. Discrete and continuous Markov chains [42] have been widely used in the analysis of two machine models. The reason is two-fold: First, Markov chains represent systems that transition stochastically into different states with a probability, in the discrete time case, or a rate, in the continuous time case. Second, Markov chains have the basic property that the system state probability depends only on the immediate past and is independent of the history of system development. In the discrete time case, using probability theory, the Markov chain probabilities of transitioning between states can be developed and conveniently put into matrix form. This matrix formation along with probability theory and theory of Markov chains is used to develop a system of linear equations that finds the steady state system probabilities. Performance measures are extracted from the steady-state probabilities. A numerical solution is possible to this system of linear equations, but it is not recommended for manufacturing systems for two main reasons: First, analytical solutions generally cannot be found for systems consisting of more than two machines, therefore, methods have been developed to aggregate and decompose a system of multiple machines into smaller two-machine systems, for example, [4] and [5] outline the procedure. These procedures frequently call on the solutions for the simpler two machine systems in iterative algorithms. Therefore solutions for performance analysis of two-machine systems must be highly efficient computationally. In addition, the existence of a buffer considerably increases the size of the Markov chain state-space, in which case the numerical solution becomes quickly inefficient and inaccurate. 22

42 As a result, a traditional numerical Markov chain solution of steady-state probabilities is not efficient for buffered manufacturing systems. Therefore, efforts in the solution of performance measures of buffered two-machine systems focus on the development of an analytical solution for the Markov chain steady-state probabilities that are independent of buffer size. The remainder of this chapter is focused on developing such a solution for a two-machine system with multiple failure modes. The emphasis is on discrete time models, but extension to continuous time is possible although not addressed in this thesis A simple two-machine Markov chain model This section introduces a simple two-state Markov chain model of a two-machine system as a way of opening the discussion into the more complex model studied in this thesis. The simplest Markov chain models for performance analysis of manufacturing systems assume two machines upstream and downstream with a buffer in between for storage of units that are waiting to be processed. Figure 3-1, reproduced from [4] shows a schematic model of this set-up. Figure 3-1- A schematic model of the two machine one buffer manufacturing system The machines are assumed to have states related to their stochastic breakdown structures. These states are generally assumed to be of the operational or failed types; also called operational and non-operational or up and down. In the simplest case, each 23

43 machine has two states: one operational and one failed. Transition between these states is marked either by probabilities in the discrete-time case or rates in the continuous time case. Figure 3-2 shows a schematic representation of the states of the two-state system where the operational state is denoted by a 1 and the failed state by 0. The arrows represent possible transitions between the states and the values on the arrows represent the probabilities of transition between the states. Figure 3-2- A simple two-state Markov chain model of a machine At any point in time, system state consists of the states of the two machines and the number of parts in the buffer. This is represented as the three tuple (n, α 1, α 2 ) where n represents the number of parts in the buffer, α 1 is the state of the upstream machine and α 2 is the state of the downstream machine. State transition equations are developed from the laws of total probability [42], keeping in mind possible transitions between machine states and the probabilities of those transitions, as well as buffer capacity. To mathematically express discrete-time Markovian property, it is assumed that the system state is a discrete random variable that takes a finite number of values related to the size 24

44 of the buffer and the states of the machines. A stochastic process {X_t} is said to have the Markovian property if [43]:
P\{X_{t+1} = j \mid X_0 = k_0, X_1 = k_1, \ldots, X_{t-1} = k_{t-1}, X_t = i\} = P\{X_{t+1} = j \mid X_t = i\}   3.1
for t = 0, 1, … and every sequence i, j, k_0, k_1, …, k_{t-1}. This means that the conditional probability of any future event, given the current state and a set of past events, is independent of the past events and depends only on the current state. The conditional probability on the right-hand side of equation 3.1 is abbreviated P\{X_{t+1} = j \mid X_t = i\} = p_{ij} and referred to as a one-step transition probability. The conditional probability P\{X_{t+n} = j \mid X_t = i\} = p_{ij}^{(n)} is called the n-step transition probability. Transition probabilities can be conveniently assembled in matrix form, where the generally applied convention is that each row and column is associated with a state and the element in a row and column represents the probability of transition from a row state to a column state. The one-step matrix of transition probabilities is called the transition matrix throughout this thesis. States in a Markov chain are classified according to the properties of returning to a state once the system enters it. Of the different types of states, the models introduced here will frequently use the transient state, defined as a state that, once entered and left, the system may never return to again [43]. For an irreducible ergodic Markov chain, the steady-state probability of a transient state is shown to be zero. For an ergodic, irreducible Markov chain it has also been shown that the n-step transition probability approaches values independent of the starting state as n becomes very large. In mathematical notation, if an n-step transition probability is denoted p_{ij}^{(n)}, this is written as [43]:
\lim_{n \to \infty} p_{ij}^{(n)} = \pi_j > 0   3.2

45 where \pi_j are the steady-state probabilities of the Markov chain, found from the steady-state equations:
\pi_j = \sum_{i=0}^{M} \pi_i p_{ij}, \quad j = 0, 1, \ldots, M   3.3
Since the system must be in some state in steady state,
\sum_{j=0}^{M} \pi_j = 1   3.4
Equations 3.3 and 3.4 form the basis of most solutions for the steady-state probability distribution of a Markov chain: formulating a system of linear equations with the M + 1 unknown steady-state probabilities using equation 3.3, and forcing the solution to be a unique probability distribution by replacing any one of these equations with equation 3.4 or augmenting the steady-state transition matrix with 3.4. This traditional solution methodology is straightforward but, as mentioned earlier, prone to inefficiency if the state-space of the Markov chain is large. Therefore, the manufacturing systems performance literature suggests an alternative analytical solution in the form of a multiplication of constants that are found analytically or, as will be shown later in this thesis, from a much smaller set of linear equations.
3.1.3 An introduction to machines with multiple failure modes
In contrast to the basic two-machine model of the previous section, a more complicated model is one where the machines have more than one failed state. These models, which form the focus of this thesis, are referred to throughout this text as having multiple failure modes and, as will be shown in this and subsequent chapters, can have various degrees of complexity of transitions. The concept of multiple failure modes was introduced into the analytical Markov

46 chain models of manufacturing systems to simulate practical manufacturing situations. Multiple failure modes could represent different types of machine failures, e.g. tool or power failure, or failures that occur with varying frequency. The structure and probabilities of transitions between the multiple failed states can also be used so that the time between failures represents a probability distribution, e.g., Poisson distribution, etc. Specific interpretations of the multiple failure modes will be discussed in later sections. 3.2 Two machine model with multiple parallel failures The two machine model with multiple parallel failure states has a single operational state with multiple failed states and transitions between the operational state and each of the failed states. A schematic representation is shown in Figure 3-3. This model is the focus of [15], where the authors extend the analytical solution for the two-state two machine manufacturing model to a case with more than one failure state. The solution they developed is purely analytical, is independent of the buffer size, and relies on finding the real roots of a characteristic polynomial. Reference will be made to this model throughout this thesis. Figure 3-3- Markov chain model of a two machine system with multiple parallel failure states 27
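Before moving to the sequential-failure model that is the subject of this chapter, the traditional numerical solution of equations 3.3 and 3.4, which is used later in this thesis as the baseline against which the analytical results are verified, can be made concrete with a minimal Python sketch. The function name, the row-stochastic (row-to-column) convention and the numerical values below are assumptions of this sketch only; the transition matrices used elsewhere in this thesis follow the column-to-row convention, which corresponds to transposing the matrix used here.

    import numpy as np

    def steady_state(P):
        # Solve pi = pi P together with sum(pi) = 1 for a small ergodic chain.
        # P is an (M+1)x(M+1) row-stochastic one-step transition matrix.
        M1 = P.shape[0]
        A = P.T - np.eye(M1)      # balance equations written as (P - I)^T pi = 0
        A[-1, :] = 1.0            # replace one balance equation with normalization
        b = np.zeros(M1)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # Example: the two-state machine of Figure 3-2 with an illustrative failure
    # probability p and repair probability r (state order: operational, failed).
    p, r = 0.1, 0.6
    P = np.array([[1 - p, p],
                  [r, 1 - r]])
    print(steady_state(P))        # approaches [r/(p+r), p/(p+r)]

For the buffered two-machine systems studied in the remainder of this chapter the same idea applies to the full system transition matrix, but the number of states grows with the buffer capacity, which is precisely the inefficiency that motivates the analytical solution developed below.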

47 3.3 Two machine model with multiple sequential failures The remainder of this chapter will focus on the development of models to represent a system operational configuration, transition equations, a solution methodology, and solved numerical examples for a two machine system with multiple sequential failure states. The following section introduces the model and the notation used for its description, while Section 3.4 explains the solution methodology in full detail, and Section 3.5 produces examples of problems solved Model and notation The two machine model with sequential failure modes has a single operational state and multiple failed states which lead to a final repair state that may return the machine to the operational state. The upstream machine is designated with u and the downstream machine with a d. Figure 3-4 illustrates the Markov chain of the multiple sequential failure system s machine states. The operational state is represented by a 1, the upstream machine has s failed states, denoted by the subscript i = 1,, s where the final down state u s represents the repair state back to the operational state. The downstream machine has t failed states, denoted by subscript j = 1,, t where d t is the repair state. In the three-tuple of system state representation, (n, α 1, α 2 ), α 1 = 1 or α 1 = u i indicates the upstream machine is in its operational state or ith non-operational state, while α 2 = 1 or α 2 = d j indicates the downstream machine is in its operational state or jth non-operational state. The failure transition probabilities between the operational state and the first non-operational state are denoted p u i and p d j for the upstream and downstream machines, respectively. Machines state transitions are unidirectional through the 28

48 non-operational states. When the machines are in their final non-operational state u s or d t, they are either repaired back to production state 1, or remain in the final failed state, as transition to other failed states is not possible. One example interpretation of this machine state system is if each machine has two non-operational states, the first representing a wait state, where the machine waits for repair resources to become available, and the second non-operational state represents a repair state for when repair resources have been assigned to it and it is undergoing repairs. Figure 3-4- Markov chain of machine states and transition probabilities for upstream and downstream machines in a two machine system with multiple sequential failure modes All Markov-chain based two machine manufacturing system models require assumptions to be made in order for the problem to be analytically solvable. These are identical to those found in [4] and [15] and include: Parts enter the first machine to be processed from an infinite supply point upstream of the line. They are then stored temporarily in the buffer and subsequently transferred to the second machine for further processing. The completed parts exit into an infinite storage downstream of the line. Each machine can be at only one state at a given time. Operation-dependent failures. In the performance analysis of manufacturing systems machine failures have been classified as being either operation-dependent or time- 29

49 dependent. Operation-dependent failures occur as a function of the number of cycles of operation since the last repair, whereas time-dependent failures depend on the time elapsed since the last repair. Operation-dependent failures do not occur while the machine is idle but time-dependent failures occur independently of machine status. Tool breakage is an example of an operation dependent failure and power failures are time dependent. Both operation-dependent failures and time-dependent failures have been addressed in the literature, but the models within which they are assumed are different. Operation-dependent failures have been shown empirically to be more prevalent [3], but time-dependent failure models are easier to solve analytically. One comparison of the frequency of operation-dependent and timedependent failures is reported in [44]. Machine failure and repairs occur at the beginning of time periods. Parts are added to or taken from the buffer at the end of each time period. When the buffer is empty, the downstream machine is starved ; i.e., it cannot process a part. When the buffer is full (at the end of the previous time-cycle, as assumed above) and the downstream machine does not take a part out of the buffer at the beginning of the current time-cycle, the upstream machine is blocked. This has been referred to as the blocked-before-service convention [5]. The upstream machine is never starved (infinite upstream supply) and the downstream machine is never blocked (infinite downstream storage). Parts are not scrapped or taken out of the line at any state. When a machine fails, the part it was working on is returned to upstream storage. The machine will continue processing the part after it is repaired. 30

50 The machines are synchronous, meaning that they have the same processing times. Cycle time is also equal to this processing time.
3.3.2 Transition equations
Following the methodology introduced in [4], the Markov chain states (n, α_1, α_2) have been divided into internal equations, where the buffer level, n, is between 2 and N−2, lower boundary equations, where the buffer level is 0 or 1, and upper boundary equations, where the buffer level is either N−1 or N. On the lower and upper boundary side, the transient states are listed in Table 3-1.
Table 3-1- Transient states
Lower boundary: (0, 1, d_j), 1 ≤ j ≤ t; (0, 1, 1); (0, u_i, d_j), 1 ≤ i ≤ s, 1 ≤ j ≤ t; (1, 1, d_j), 1 ≤ j ≤ t
Upper boundary: (N, u_i, 1), 1 ≤ i ≤ s; (N, 1, 1); (N, u_i, d_j), 1 ≤ i ≤ s, 1 ≤ j ≤ t; (N−1, u_i, 1), 1 ≤ i ≤ s
The steady-state transition equations are found based on equation 3.3 and the possible transitions between the Markov chain states. Transient states are not included in these equations, as their steady-state probability tends to zero. It must also be noted that throughout this thesis any transition matrix element represents a transition from a Markov chain column state to a row state. The transition equations are listed in Appendix A, along with a discussion on how the starvation and blockage assumptions affect these equations.
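To illustrate the machine-state structure just described, the short Python sketch below assembles the transition matrix of a single upstream machine with sequential failure states (the chain of Figure 3-4), using the column-to-row convention noted above. It treats the machine in isolation: the coupling with the buffer, the starvation and blockage conventions, and the operation-dependence of failures are captured only by the full system transition equations of Appendix A. The function name, argument names and numerical values are assumptions of this sketch.

    import numpy as np

    def sequential_machine_chain(p1, p_seq, r_s):
        # State 0 is the operational state; states 1..s are the failed states
        # u_1..u_s.  p1 is the failure probability p^u_1, p_seq holds the
        # sequential transition probabilities [p^u_12, ..., p^u_(s-1)s], and
        # r_s is the repair probability r^u_s from the final failed state.
        # Element [i, j] is the probability of moving from column state j to
        # row state i, so every column sums to one.
        s = len(p_seq) + 1
        T = np.zeros((s + 1, s + 1))
        T[0, 0] = 1.0 - p1               # remain operational
        T[1, 0] = p1                     # fail into u_1
        for i in range(1, s):            # u_i -> u_(i+1), otherwise stay in u_i
            T[i + 1, i] = p_seq[i - 1]
            T[i, i] = 1.0 - p_seq[i - 1]
        T[0, s] = r_s                    # u_s is repaired back to operation
        T[s, s] = 1.0 - r_s              # or remains in u_s
        return T

    # Illustrative values only (not those of the examples in Section 3.5).
    T = sequential_machine_chain(p1=0.09, p_seq=[0.5, 0.5, 0.5], r_s=0.8)
    assert np.allclose(T.sum(axis=0), 1.0)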

51 3.4 Solution methodology
The solution concept is identical to the methods used in [4] and [15]. An analytical assumption is made for the internal probabilities, taking the form of a linear combination of internal states. This is then substituted back into the steady-state equations to satisfy both the internal and boundary equations.
p(n, u_i, d_j) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m}, \quad i = 1, \ldots, s, \; j = 1, \ldots, t, \; 2 \le n \le N-2   3.5
The variables C, X, U, D are found by substituting equation 3.5 back into the steady-state equations. The shape and form of the transitions in the Markov chain structure determine the means by which these variables are found and the complexity and difficulty of solving the resulting equations. The remainder of this section explains this solution process.
3.4.1 Initial Solution to internal equations
It is initially assumed that the internal probabilities take the following form, where 1 ≤ i ≤ s, 1 ≤ j ≤ t and 2 ≤ n ≤ N−2:
p(n, 1, d_j) = X^n D_j   3.6a
p(n, u_i, d_j) = X^n U_i D_j   3.6b
p(n, 1, 1) = X^n   3.6c
p(n, u_i, 1) = X^n U_i   3.6d
This initial assumption of internal steady-state probabilities is substituted into the transition equations of Table A-2 in Appendix A. A process of simplification and substitution is then applied, which is detailed in Appendix B. This results in the introduction of the constant K and the fundamental equation

52 F(K) = \left[(1 - p^u_1) + \frac{p^u_1 \prod_{q=2}^{s} p^u_{q-1,q} \; r^u_s}{\prod_{q=1}^{s-1}\left(K - 1 + p^u_{q,q+1}\right)\left(K - 1 + r^u_s\right)}\right]\left[(1 - p^d_1) + \frac{p^d_1 \prod_{q=2}^{t} p^d_{q-1,q} \; r^d_t}{\prod_{q=1}^{t-1}\left(\frac{1}{K} - 1 + p^d_{q,q+1}\right)\left(\frac{1}{K} - 1 + r^d_t\right)}\right] - 1 = 0   3.7
Referred to as the characteristic polynomial equation, equation 3.7 is a polynomial equation in K of order s + t. As such, it will have s + t (not necessarily distinct or real) roots. Having solved equation 3.7, K_m, 1 ≤ m ≤ s + t, is defined as the mth root of the polynomial equation, and the variables U_i, D_j, X are updated in equations 3.8, 3.9 and 3.10 to account for the multiple K's.
U_{1,m} = \frac{p^u_1}{K_m - 1 + p^u_{12}}   3.8a
U_{i,m} = \frac{p^u_1 \prod_{q=2}^{i} p^u_{q-1,q}}{\prod_{q=1}^{i}\left(K_m - 1 + p^u_{q,q+1}\right)}, \quad 2 \le i \le s-1   3.8b
U_{s,m} = \frac{p^u_1 \prod_{q=2}^{s} p^u_{q-1,q}}{\prod_{q=1}^{s-1}\left(K_m - 1 + p^u_{q,q+1}\right)\left(K_m - 1 + r^u_s\right)}   3.8c
D_{1,m} = \frac{p^d_1}{\frac{1}{K_m} - 1 + p^d_{12}}   3.9a
D_{j,m} = \frac{p^d_1 \prod_{q=2}^{j} p^d_{q-1,q}}{\prod_{q=1}^{j}\left(\frac{1}{K_m} - 1 + p^d_{q,q+1}\right)}, \quad 2 \le j \le t-1   3.9b
D_{t,m} = \frac{p^d_1 \prod_{q=2}^{t} p^d_{q-1,q}}{\prod_{q=1}^{t-1}\left(\frac{1}{K_m} - 1 + p^d_{q,q+1}\right)\left(\frac{1}{K_m} - 1 + r^d_t\right)}   3.9c

53 X_m = \frac{1}{K_m}\left[(1 - p^u_1) + \frac{p^u_1 \prod_{q=2}^{s} p^u_{q-1,q} \; r^u_s}{\prod_{q=1}^{s-1}\left(K_m - 1 + p^u_{q,q+1}\right)\left(K_m - 1 + r^u_s\right)}\right]   3.10
3.4.2 Solution to the characteristic polynomial equation
To find the roots of the characteristic polynomial, its special structure is taken advantage of. To illustrate, an example plot of the characteristic polynomial for a system with four upstream failed modes and four downstream failed modes is shown in Figure 3-5. As can be seen from Figure 3-5, F(K) is bracketed between s + t points of singularity where the function value approaches infinity. These are referred to as poles of the polynomial and are easily found from equations 3.11a to 3.11d as follows. In total, there are s poles related to the upstream machine probabilities, which are always smaller than 1, and t poles related to the downstream machine, which are always larger than 1. These are referred to as the upstream poles and downstream poles, correspondingly.
K = 1 - p^u_{i,i+1}, \quad 1 \le i \le s-1   3.11a
K = 1 - r^u_s   3.11b
\frac{1}{K} = 1 - p^d_{j,j+1}, \quad 1 \le j \le t-1   3.11c
\frac{1}{K} = 1 - r^d_t   3.11d
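Because the poles of equations 3.11a to 3.11d delimit every interval searched in the root-finding procedure that follows, a small hypothetical helper that computes them directly from the transition probabilities is sketched below; the function and argument names and the example values are illustrative assumptions.

    def poles(p_u_seq, r_u_s, p_d_seq, r_d_t):
        # p_u_seq = [p^u_12, ..., p^u_(s-1)s] and r_u_s = r^u_s for the upstream
        # machine; p_d_seq and r_d_t are defined analogously downstream.
        # Equations 3.11a-3.11b give the s upstream poles (all below 1) and
        # equations 3.11c-3.11d the t downstream poles (all above 1).
        upstream = sorted([1.0 - p for p in p_u_seq] + [1.0 - r_u_s])
        downstream = sorted([1.0 / (1.0 - p) for p in p_d_seq] + [1.0 / (1.0 - r_d_t)])
        return upstream, downstream

    # Illustrative values for a system with s = t = 4.
    up, down = poles([0.4, 0.5, 0.3], 0.8, [0.45, 0.35, 0.5], 0.89)
    print(up, down)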

54 Figure 3-5- Example plot of the characteristic polynomial with four upstream failed states and four downstream failed states An analysis of the polynomial shows that the number of poles determines the orientation of the plot, as well as the existence of real roots in the extreme regions outside of the first upstream pole and last downstream pole. It can be verified that when K approaches zero from the right, F(K) asymptotically approaches a negative value. If the upstream machine has an even number of poles, which corresponds to an even number of failed states, when K approaches the first upstream pole from left, F(K) approaches positive infinity. As shown in Figure 3-5, this means that a real root exists in the interval between 0 and the first upstream pole. With an even number of upstream poles, in the interval immediately after the smallest upstream pole, F(K) is convex-shaped, going from negative infinity to negative infinity. When following an interval between upstream poles, F(K) alternates between positive infinity to positive infinity and negative infinity to negative infinity. 35

55 If, on the other hand, the upstream machine has an odd number of poles, when K approaches the first upstream pole from the left, F(K) goes to negative infinity and there will be no real roots in the interval between 0 and the first upstream pole. The first interval after the upstream pole is marked by a concave F(K) going from positive infinity to positive infinity. This is shown in Figure 3-6, an example system with 5 upstream failed modes. The remainder of intervals smaller than 1 are marked consecutively by the polynomial function F(K) going from negative infinity to negative infinity and positive infinity to positive infinity. Figure 3-6- Example plot of the characteristic polynomial with five upstream failed states and five downstream failed states Therefore, regardless of the number of poles, when approaching the last pole of upstream machine from the right, F(K) is always at negative infinity and the interval between the last upstream pole which is always smaller than 1 and the first downstream pole which is always larger than 1, is marked by F(K) going from positive infinity to positive infinity. It has been observed that F(K) always has two real roots in this interval, one of which is always equal 36

56 to 1. That K = 1 is always a root of F(K) can be easily verified from equation 3.7. If the machines are identical, F(K) will have a double root at K = 1. The intervals between consecutive downstream poles have F(K) going alternately from negative infinity to negative infinity and from positive infinity to positive infinity. With an even number of downstream poles, as K approaches the last downstream pole from right, F(K) is at positive infinity as shown in Figure 3-5. Similar to when K approaches zero, it can be verified that when K is larger than the largest downstream pole, F(K) asymptotically approaches a negative value. Again, this indicates that F(K) crosses the horizontal axis somewhere in this interval and that there is a real root. With an odd number of poles, as K approaches the last downstream pole from the right, F(K) is at negative infinity, as shown in Figure 3-6. This indicates that F(K) does not cross the horizontal axis and therefore there is no real root in this interval. Real roots were frequently observed in the interval between the first and second poles of the upstream machine, as well as second to last and last poles of the downstream machine, although this is not always the case. In other intervals, no real roots were observed. Since the polynomial is of degree s + t, the rest of the roots must be complex and occur in conjugate pairs because the coefficients of the polynomial are real. Table 3-2 summarizes the intervals identified in the plot of the characteristic polynomial of equation 3.7 and the number of real roots in those intervals. 37

57 Table 3-2- Location and existence of real roots for a system with multiple sequential failure modes (columns give the number of upstream poles / number of downstream poles)
Interval | Even/Even | Even/Odd | Odd/Even | Odd/Odd
Between 0 and the 1st upstream pole | One real root | One real root | No real root | No real root
Between upstream poles | Occasional real roots in all four cases
Between the last upstream and the first downstream pole | Two (sometimes identical) real roots in all four cases
Between downstream poles | Occasional real roots in all four cases
After the largest downstream pole | One real root | No real root | One real root | No real root
Taking advantage of the special structure demonstrated in Figure 3-5 and Figure 3-6, as well as the information in Table 3-2, identifying the real roots of the polynomial is relatively straightforward using a bracketing method such as the bisection method of interval halving [45]. In intervals outside the poles, where one real root is certain to occur, two points are found close to the end points of the interval and the distance between them is systematically shrunk using bisection until the function value F(K) becomes sufficiently small or until the difference between consecutive approximations becomes very small. As for the two roots confined between the last upstream pole and the first downstream pole, it was shown that one of the roots is always equal to K = 1. Whether the other real root in this interval is located to the right or to the left of 1 can be determined by simply checking the function value at some point a small increment

58 larger than 1. If the function value F(K) at this test point is greater than zero, the other real root lies to the left of 1, otherwise the root lies to the right of 1. Depending on where the root lies, it can then be found by implementing the bisection method between 1 and the corresponding pole to the left or right. To determine whether there is an axis crossing and therefore two real roots in intervals between the upstream poles and between downstream poles, the structure of the polynomial is again drawn upon. In intervals between consecutive upstream poles and consecutive downstream poles, the polynomial is unimodal [46] with a maximum or a minimum depending on the number of poles. This allows finding the extremum using region elimination methods such as the Golden Section search technique [46]. This technique systematically narrows the region in which the maximum or minimum point is located until convergence is reached. The function value at the extremum will determine if an axis crossing has occurred. If there is an axis crossing, running the bisection method in the interval between the pole to the left and the maximum or minimum as well in the interval between the maximum or minimum and the pole to the right will find the real roots. To find the complex roots of the characteristic polynomial, Muller s Method [45] was found to be the most effective. Muller s method starts with three initial values, x 0, x 1, x 2 for instance, on the function and constructs a parabola that runs through these three points. Of the two intersections of the parabola with the x-axis, the one closest to the last initial value is called x 3 and taken as an approximation to the root. On the next iteration, a new parabola is constructed with x 1, x 2, x 3 and the algorithm repeated until stopping criteria are encountered. The choice of Muller s method followed from its ability to generate complex points due to the parabolic approximation. The structure of the characteristic polynomial with the function flanked by poles 39

59 and an extremum point in the middle was also taken advantage of in implementation. The complex roots algorithm starts by creating vectors of initial values for Muller s method. The first vector of initial points includes 0, a point just inside the upstream pole by a small tolerance (to avoid function diverging to infinity) and the midpoint between these two values. Between every two upstream or downstream poles, the vector of initial values includes points inside the poles by a small tolerance to avoid a singular function and the extremum (minimum or maximum) found when detecting real roots. Between the last upstream pole and the first downstream pole, the vector of initial values includes two points larger than the last upstream pole and smaller than the first downstream pole by a small tolerance and 1. Outside the last downstream pole, the vector of initial values includes a point just outside the last downstream pole by a small tolerance and two points larger than the last downstream pole. Muller s method is then run through every vector of initial values until all roots are found, taking note of the fact that complex roots occur in conjugate pairs and that the number of roots is known. Running Muller s method is likely to generate points that are not roots and real roots that have already been found. Simple checks of the function values and comparison with previous roots are used to discard the undesired results. The following steps summarize the procedure to find the roots of the characteristic polynomial. Step 1. If there is an even number of upstream poles, find the root between zero and the first upstream pole. Step 2. In all intervals between upstream poles identify axis crossings using the Golden Section search technique. If an axis crossing is detected, find the roots in that interval. 40

60 Step 3. If the machines are non-identical, find the one real root other than 1 located between the last upstream pole and the first downstream pole, using the bisection method in the interval that contains the root, as explained previously. Step 4. Step through the intervals defined by the downstream poles, identifying axis crossings. If detected, find the real roots an axis crossing will create. Step 5. If there is an even number of downstream poles, find the real root after the last downstream pole. Step 6. Check if all roots have been found. If yes, conclude the algorithm. If not, continue. Step 7. Create vectors of starting points in all intervals identified using poles and points of extremum, or poles and interior points. Run Muller's method using the vectors of starting points as initial values. Step 8. Verify the results from Muller's algorithm and discard duplicates and unnecessary points, checking against the roots already found and the poles. Conclude the algorithm. These steps are summarized in the algorithm flowchart in Figure 3-7.
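The numerical building blocks used in these steps can be sketched in Python as follows. The bisection, Golden Section and Muller routines below are generic textbook forms of the methods cited above; the driver that walks them over the pole-delimited intervals of F(K) is not reproduced, and the closing lines only exercise the helpers on a simple cubic with one real and two complex roots.

    import cmath

    def bisect(F, a, b, tol=1e-12, max_iter=200):
        # Interval halving for a real root of F in [a, b], assuming a sign change.
        fa = F(a)
        for _ in range(max_iter):
            m = 0.5 * (a + b)
            fm = F(m)
            if abs(fm) < tol or (b - a) < tol:
                return m
            if fa * fm < 0:
                b = m
            else:
                a, fa = m, fm
        return 0.5 * (a + b)

    def golden_min(g, a, b, tol=1e-10):
        # Golden Section search for the minimizer of a unimodal g on [a, b].
        # Between two consecutive poles it would be applied to F (or to -F when
        # the interval holds a maximum) to locate the extremum whose function
        # value decides whether an axis crossing, and hence a pair of real
        # roots, exists in that interval.
        inv_phi = (5 ** 0.5 - 1) / 2
        while (b - a) > tol:
            c = b - inv_phi * (b - a)
            d = a + inv_phi * (b - a)
            if g(c) < g(d):
                b = d
            else:
                a = c
        return 0.5 * (a + b)

    def muller(F, x0, x1, x2, tol=1e-12, max_iter=100):
        # Muller's method; the parabolic step can leave the real axis and so
        # converge to a complex root even from real starting points.
        for _ in range(max_iter):
            f0, f1, f2 = F(x0), F(x1), F(x2)
            h1, h2 = x1 - x0, x2 - x1
            d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
            a = (d2 - d1) / (h2 + h1)
            b = a * h2 + d2
            c = f2
            disc = cmath.sqrt(b * b - 4 * a * c)
            den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
            if abs(den) == 0:
                break
            x3 = x2 - 2 * c / den
            if abs(x3 - x2) < tol:
                return x3
            x0, x1, x2 = x1, x2, x3
        return x2

    # Self-check on (x - 1)(x^2 - 4x + 5), whose roots are 1 and 2 +/- i.
    F = lambda x: (x - 1) * (x * x - 4 * x + 5)
    print(bisect(F, 0.0, 1.5))        # the real root near 1
    print(muller(F, 1.6, 2.0, 2.4))   # one of the roots, possibly complex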

61 Figure 3-7- Algorithm flowchart for solving the roots of the characteristic polynomial
3.4.3 Solution to the boundary equations and Coefficients
After all the roots of the polynomial are found, back substitution into equations 3.8, 3.9 and 3.10 gives X_m, U_{i,m} and D_{j,m}. Introducing the constants C_m, the solution, which is now in the form of equations 3.6a to 3.6d, is rewritten in the form of equations 3.12a to 3.12d and the

62 constants C_m are found so as to satisfy the boundary equations. The final internal state probabilities therefore have the following form.
p(n, 1, d_j) = \sum_{m=1}^{s+t} C_m X_m^n D_{j,m}, \quad j = 1, \ldots, t, \; 2 \le n \le N-2   3.12a
p(n, u_i, d_j) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m}, \quad i = 1, \ldots, s, \; j = 1, \ldots, t, \; 2 \le n \le N-2   3.12b
p(n, 1, 1) = \sum_{m=1}^{s+t} C_m X_m^n, \quad 2 \le n \le N-2   3.12c
p(n, u_i, 1) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m}, \quad i = 1, \ldots, s, \; 2 \le n \le N-2   3.12d
To find the constants C_m, the multiple parallel failure system studied in [15] used an analytical approach which manipulated the boundary equations to develop simple expressions for the boundary probability equations. However, for the sequential failed-states model studied here, applying the same analytical approach does not lead to a simplified expression for the boundary probabilities. Therefore, an alternative numerical procedure is developed, involving the numerical solution of a system of linear equations that finds C_m and the boundary state probabilities simultaneously. The process for finding the equations that constitute this system is detailed in Appendix C. To develop the system of linear equations, the boundary probabilities and constants C_m are arranged in the following vector. The size of each element is included as a subscript.

63 \begin{bmatrix}
[p(0, u_i, 1)]_{s \times 1} \\
[p(1, u_i, d_j)]_{st \times 1} \\
p(1, 1, 1) \\
[p(1, u_i, 1)]_{s \times 1} \\
[C_m]_{(s+t) \times 1} \\
[p(N-1, 1, d_j)]_{t \times 1} \\
[p(N-1, u_i, d_j)]_{st \times 1} \\
p(N-1, 1, 1) \\
[p(N, 1, d_j)]_{t \times 1}
\end{bmatrix}
To find the matrix of coefficients, the expressions for p(0, u_i, 1), p(1, u_i, d_j) and p(1, 1, 1) on the lower boundary and p(N-1, u_i, d_j), p(N-1, 1, 1) and p(N, 1, d_j) on the upper boundary are taken from Table A-1 and Table A-3 in Appendix A. p(1, u_i, 1) and p(N-1, 1, d_j) are expressed in terms of C_m based on Appendix C. The s + t constants C_m are expressed in terms of both upper-boundary and lower-boundary probabilities: C_j, j = 1, \ldots, t, are expressed in terms of upper-boundary state probabilities and the remaining C_i, i = t+1, \ldots, t+s, in terms of lower-boundary probabilities. In mathematical form, this system of linear equations can be written as
\left[\begin{array}{l}
[L]_{(s+st+1) \times (2s+st+1)} \quad [0]_{(s+st+1) \times (s+3t+st+1)} \\
[0]_{s \times (2s+st+1)} \quad [X_m U_{i,m}]_{s \times (s+t)} \quad [0]_{s \times (2t+st+1)} \\
[0]_{(s+t) \times s} \quad [C]_{(s+t) \times (2st+2s+2t+2)} \quad [0]_{(s+t) \times t} \\
[0]_{t \times (2s+st+1)} \quad [X_m^{N-1} D_{j,m}]_{t \times (s+t)} \quad [0]_{t \times (2t+st+1)} \\
[0]_{(st+1+t) \times (3s+st+t+1)} \quad [U]_{(st+1+t) \times (2t+st+1)}
\end{array}\right]
\begin{bmatrix}
[p(0, u_i, 1)]_{s \times 1} \\
[p(1, u_i, d_j)]_{st \times 1} \\
p(1, 1, 1) \\
[p(1, u_i, 1)]_{s \times 1} \\
[C_m]_{(s+t) \times 1} \\
[p(N-1, 1, d_j)]_{t \times 1} \\
[p(N-1, u_i, d_j)]_{st \times 1} \\
p(N-1, 1, 1) \\
[p(N, 1, d_j)]_{t \times 1}
\end{bmatrix}
=
\begin{bmatrix}
[p(0, u_i, 1)]_{s \times 1} \\
[p(1, u_i, d_j)]_{st \times 1} \\
p(1, 1, 1) \\
[p(1, u_i, 1)]_{s \times 1} \\
[C_m]_{(s+t) \times 1} \\
[p(N-1, 1, d_j)]_{t \times 1} \\
[p(N-1, u_i, d_j)]_{st \times 1} \\
p(N-1, 1, 1) \\
[p(N, 1, d_j)]_{t \times 1}
\end{bmatrix}   3.13
where L is a matrix of lower boundary equation coefficients, U is a matrix of upper boundary equation coefficients, C is a matrix of internal equation coefficients associated with the constants C_m, 0 is a matrix of zeros, etc. The size of each matrix or vector in equation 3.13 is given as a subscript. As a Markov transition matrix, the system in equation 3.13 is not independent, as one

64 equation is a linear combination of the others. However, the system can be made independent by replacing any one equation with a normalization equation that forces the sum of all state probabilities to one. This normalization keeps all the boundary state probabilities but replaces the internal state probabilities with expressions in terms of C_m, using equations 3.12a to 3.12d. When the internal equations are expressed in terms of the constants, the normalization equation is written as
\sum_{i=1}^{s} p(0, u_i, 1) + \sum_{i=1}^{s}\sum_{j=1}^{t} p(1, u_i, d_j) + p(1, 1, 1) + \sum_{i=1}^{s} p(1, u_i, 1) + \sum_{n=2}^{N-2}\sum_{j=1}^{t}\sum_{m=1}^{s+t} C_m X_m^n D_{j,m} + \sum_{n=2}^{N-2}\sum_{i=1}^{s}\sum_{j=1}^{t}\sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m} + \sum_{n=2}^{N-2}\sum_{m=1}^{s+t} C_m X_m^n + \sum_{n=2}^{N-2}\sum_{i=1}^{s}\sum_{m=1}^{s+t} C_m X_m^n U_{i,m} + \sum_{j=1}^{t} p(N-1, 1, d_j) + \sum_{i=1}^{s}\sum_{j=1}^{t} p(N-1, u_i, d_j) + p(N-1, 1, 1) + \sum_{j=1}^{t} p(N, 1, d_j) = 1   3.14
Any one row of the matrix of coefficients in 3.13 can be replaced by the normalization equation 3.14, with the corresponding matrix row adjusted accordingly. A simple numerical procedure for the solution of the system of linear equations will then produce the constants C_m and the boundary probabilities. The C_m are subsequently used in equations 3.12a to 3.12d to find the steady-state probabilities of the internal states of the system.
3.4.4 Performance measures
Using the solution for the steady-state probabilities, different performance measures of the system can be derived in steady state.

65 1. Average buffer level. This is a measure of how many parts will be in the buffer on average in the steady state of system operation [4], [15]. It can be used for the design of a production line.
\bar{n} = \sum_{n=0}^{N}\sum_{j=1}^{t} n \, p(n, 1, d_j) + \sum_{n=0}^{N}\sum_{i=1}^{s}\sum_{j=1}^{t} n \, p(n, u_i, d_j) + \sum_{n=0}^{N} n \, p(n, 1, 1) + \sum_{n=2}^{N-2}\sum_{i=1}^{s} n \, p(n, u_i, 1)   3.15
2. Steady-state production rate (throughput). This is defined as the number of parts the system will produce per cycle in the steady state of operation. By definition, it will be less than one, since the discrete-time model assumes the production cycle to be synchronized with the processing times and accounts for machine failures. In terms of the steady-state probabilities, it is the sum of the steady-state probabilities of having at least one part in the buffer and the downstream machine operational.
E = \sum_{n=1}^{N}\sum_{i=1}^{s} p(n, u_i, 1) + \sum_{n=1}^{N} p(n, 1, 1)   3.16
3. Probability of starvation and blockage. These are, correspondingly, the probability of having no part in the buffer, an operational downstream machine and a non-operational upstream machine; and the probability of having a full buffer, an operational upstream machine and a non-operational downstream machine.
P_{starv} = \sum_{i=1}^{s} p(0, u_i, 1)   3.17

66 P_{block} = \sum_{j=1}^{t} p(N, 1, d_j)   3.18
3.4.5 Solution algorithm
The solution process can be summarized in the following algorithm: Step 1. Find the roots K_m of the characteristic polynomial 3.7 using the numerical procedure outlined in Section 3.4.2 and summarized in Figure 3-7. Step 2. Find U_{i,m}, D_{j,m} and X_m according to equations 3.8, 3.9 and 3.10, respectively. Step 3. Find the constants C_m and the boundary state probabilities from equation 3.13 with the normalization equation 3.14. Step 4. Find the steady-state probabilities from equations 3.12a to 3.12d. Step 5. Find the performance measures as required from Section 3.4.4.
3.5 Numerical results
This section demonstrates examples of the application of the solution methodology of Section 3.4 to systems with multiple sequential failures. The results include the roots of the polynomial equation, the subsequent solution for the steady-state probabilities, and some example results on performance measures. Four examples are shown to demonstrate the behavior of the polynomial equation 3.7 with all possible combinations of odd and even numbers of poles. In all these examples, it is shown that the polynomial has some complex roots, leading to some complex C_m, X_m, U_{i,m}, D_{j,m}. However, the application of the solution procedure results in real boundary state probabilities and real internal state probabilities.
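Before presenting the examples, the evaluation of the measures of Section 3.4.4 from a set of steady-state probabilities can be sketched as follows. The storage layout assumed here, a dictionary keyed by the system state (n, a1, a2) with 0 denoting an operational machine and i = 1, ..., s (or j = 1, ..., t) denoting the non-operational states, is a convention of this sketch only; transient states are simply absent from the dictionary and treated as zero.

    def performance_measures(pi, N, s, t):
        # pi maps (n, a1, a2) to a steady-state probability, as produced by
        # Step 4 of the solution algorithm; missing states contribute zero.
        def p(n, a1, a2):
            return pi.get((n, a1, a2), 0.0)
        # Production rate (equation 3.16): at least one part in the buffer and
        # an operational downstream machine.
        E = sum(p(n, a1, 0) for n in range(1, N + 1) for a1 in range(s + 1))
        # Average buffer level (equation 3.15): expected number of parts stored.
        n_bar = sum(n * p(n, a1, a2)
                    for n in range(N + 1)
                    for a1 in range(s + 1)
                    for a2 in range(t + 1))
        # Probabilities of starvation and blockage (equations 3.17 and 3.18).
        P_starv = sum(p(0, i, 0) for i in range(1, s + 1))
        P_block = sum(p(N, 0, j) for j in range(1, t + 1))
        return E, n_bar, P_starv, P_block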

67 In order to verify the results, the steady-state probabilities from the analytical methods of Section 3.4 are compared with the results from the purely numerical traditional method of solving for steady-state probabilities of Markov chains explained in Section To reiterate, a purely numerical solution is not desirable with a large buffer which makes the state space of the system prohibitively large or when the steady-state probabilities solution is to be implemented in an iterative scheme that solves for a system with multiple machines using the two-machine solution as a building block. In some examples below, steady-state probabilities are put in a matrix as follows and displayed as bar plots to make comparison simple. In other examples, comparison is made not directly between probabilities but between performance measures found from the analytical and direct Markov chain solutions. A third type of comparison is through the use of Pareto charts to demonstrate the differences between the two sets of probabilities found from analytical and numerical solutions. System with four upstream and four downstream poles The Markov chain transition probabilities for this system are shown in Table 3-3. The notation is the same as in Figure 3-4. The probabilities not included in the table are those of complement events. The repair probabilities of Table 3-3 are much larger than failure probabilities to be compatible with the characteristics of a real manufacturing system. 48

68 Table 3-3- Transition probabilities for a system with multiple sequential failures with four upstream and four downstream failed states
Upstream machine: failure probability p^u_1 = 0.09; failed-state transition probabilities p^u_12, p^u_23, p^u_34; repair probability r^u_4 = 0.8
Downstream machine: failure probability p^d_1 = 0.01; failed-state transition probabilities p^d_12, p^d_23, p^d_34; repair probability r^d_4 = 0.89
Buffer capacity: 10
As was shown previously, the characteristic polynomial for this example system has 8 roots, four of them real and located in the intervals shown in Table 3-2: two real roots outside the extreme poles and two real roots between the last upstream pole and the first downstream pole. The remaining four roots are complex conjugate pairs. The roots and the poles of the polynomial are summarized in Table 3-4. Having proceeded with the calculation of C_m, X_m, U_{i,m}, D_{j,m} and the boundary probabilities as per the procedures of Section 3.4, the remaining steady-state probabilities are found from equations 3.12a to 3.12d. Figure 3-8 shows these steady-state probabilities as a three-dimensional bar plot of the matrix P from equation 3.19.
P = \begin{bmatrix}
p(0, 1, d_j) & p(0, u_i, d_j) & p(0, 1, 1) & p(0, u_i, 1) \\
p(1, 1, d_j) & p(1, u_i, d_j) & p(1, 1, 1) & p(1, u_i, 1) \\
\vdots & \vdots & \vdots & \vdots \\
p(N, 1, d_j) & p(N, u_i, d_j) & p(N, 1, 1) & p(N, u_i, 1)
\end{bmatrix}   3.19

69 The Markov chain states are represented on the x-axis, the buffer content is on the y-axis, and the z-axis shows the steady-state probabilities. Figure 3-9 shows the numerical Markov chain solution for the same steady state, and Figure 3-10 depicts the absolute percentage difference between the two sets of probabilities, showing close agreement between the two methods.
Table 3-4- Poles and roots of characteristic polynomial (upstream and downstream poles of F(K), and its real and complex roots)
The steady-state probabilities plotted in Figure 3-8 and Figure 3-9 are used to calculate the Markov chain system performance measures as formulated in Section 3.4.4. These performance measures are listed and compared in Table 3-5 for the two methods. The steady-state performance measures in Table 3-5 reflect the choices of transition probabilities in Table 3-3. When the failed-state transition probabilities are small, once the system fails, it takes a relatively long time to transition forward to the final failed state and back to the operational state. This means the system will spend the majority of time in the failed states, with no parts added to the buffer. This can be seen in Figure 3-8 and Figure 3-9, where the largest probabilities are those with empty buffers, and in Table 3-5, where the average production rate is very small,

70 average buffer level is rather close to zero, probability of starvation is large and probability of blockage very small. Figure 3-8- Steady-state probabilities calculated from the analytical method Figure 3-9- Steady-state probabilities calculated from a direct numerical Markov chain solution 51

71 Figure Percentage of absolute difference in probabilities calculated from analytical and numerical methods Table 3-5- Comparison of analytical and numerical performance measures Analytical solution Direct Markov chain solution Percent error Average production rate E-04 Average buffer level E-04 Probability of starvation E-05 Probability of blockage E System with four upstream and five downstream poles The Markov chain transition probabilities for this system are listed in Table 3-6. The location of the real roots follow the expectation from Table 3-2: there is one real root between 0 and first upstream pole and two real roots between the last upstream pole and first downstream pole one of which is equal to 1. There are no real roots between the upstream poles or downstream poles. The remaining six roots are three pairs of complex conjugates. These roots 52

72 and the poles of the characteristic polynomial are listed in Table 3-6. Figure 3-11 shows a Pareto chart of the maximum difference across all buffer levels between steady state probabilities from the analytical method and the direct numerical solution of the Markov chain. The horizontal axis represents the Markov chain states with the greatest amount of error between the two methods, the left vertical axis shows the values of errors and the right vertical axis represents the cumulative percentage of error. Bars represent values and the line represents the cumulative total error up to 99%. The chart confirms that the analytical and numerical methods find the same probabilities to considerable accuracy. Table 3-7 shows the real and complex roots of the polynomial, confirming the expectation from Table 3-4. The three pairs of complex-conjugate roots were found using the Muller s method as explained in Section Table 3-6- Transition probabilities for a system with four upstream and five downstream failed states Upstream machine Failure Probability Downstream machine Failure probability p u 0.09 p d 0.01 Failed State Transition Probabilities p u p d p u p d p u Repair probability p d p d Repair probability r u 0.8 r d 0.89 Buffer capacity 10 53

73 Table 3-7- Roots and poles of characteristic polynomial Characteristic polynomial Poles Upstream Characteristic polynomial roots Real Complex Downstream i i i i i i As seen in Table 3-8, the buffer in the example system here is again mostly empty, partly because the failure probability of the downstream machine shown in Table 3-6 is smaller than the failure probability of the upstream machine, but mostly because the downstream repair probability is larger than the upstream repair probability. This means that the downstream machine will be operational more than the upstream machine, taking parts out of the buffer faster than the upstream machine can replenish. This also explains the relatively high probability of starvation and the very small probability of blockage. Production rate on the other hand is much larger here than the system of Table 3-5 which is due to the larger failed state transition probabilities. Once a machine fails, the larger failed state probabilities ensure it transitions comparatively faster through the failed states and is repaired more quickly. 54

74 Figure Pareto chart of maximum differences between analytical method and direct Markov chain solution for the system of Table 3-6 Table 3-8- Comparison of analytical and numerical performance measures for the system of Table 3-6 Analytical solution Direct Markov chain solution Percent error Average production rate E-04 Average buffer level E-03 Probability of starvation E-03 Probability of blockage E System with five upstream and four downstream poles Markov chain transition probabilities for an example system with five upstream and four downstream poles are listed in Table 3-9. Table 3-10 shows the location of the upstream and 55

75 downstream poles and the real and complex roots of the characteristic polynomial. The location of the real roots is as predicted in Table 3-2: two real roots between the largest upstream and the smallest downstream poles, and one root outside the last downstream pole. The remaining roots are complex and occur in conjugate pairs. Figure 3-12 shows a bar plot of the steady-state probabilities found from the analytical method presented here. Figure 3-13 is a Pareto chart of the difference between the analytical method and the direct numerical solution of the Markov chain equations, which shows close agreement between the two. Table 3-11 presents a comparison of performance measures which confirms the accuracy of the solution. Comparing the upstream and downstream repair and failure probabilities in Table 3-9, it can be seen that the downstream machine fails more frequently and is repaired less frequently than the upstream machine. This explains the rather low production rate and the rather high average buffer level, at 70% of buffer capacity, as parts accumulate in the buffer rather than flow out of the system. The probability of starvation is very small, with the system spending just about 5% of the time in a starved state, and the probability of blockage is quite high, with the system spending almost 40% of the time in a blocked state.

76 Table 3-9- System probabilities for a system with five upstream and four downstream failed states Upstream machine Downstream machine Failure Probability Failure probability p u p d 0.16 Failed State Transition Probabilities p u p d p u p d p u p u p d Repair probability Repair probability r u 0.94 r d 0.64 Buffer capacity 10 Table Roots and poles of characteristic polynomial Characteristic polynomial Poles Characteristic polynomial roots Upstream Real Complex i Downstream i i i i i 57

77 Table Analytical and numerically found performance measures for the system of Table 3-9 Analytical solution Direct Markov chain solution Percent error Average production rate E-07 Average buffer level E-05 Probability of starvation E-04 Probability of blockage E-05 Figure Steady-state probabilities calculated analytically for the system of Table

78 Figure Pareto chart of the difference between the analytical method and direct numerical Markov chain solution System with five identical upstream and downstream failed states This example highlights the problem of identical machines. When the upstream and downstream machines have identical probabilities, the characteristic equation will have a repeated root at 1. This will create a boundary probabilities linear system of equations, equation 3.13, which is over-determined. The solution, it was found, is either to remove one of the equations in 3.13 or include 1 + ε and 1 ε as two roots where ε is a small number. This will ensure linear independence of the boundary equations. One of the C s associated with the repeated root at 1 will be zero, as is the case with all C s associated with the root A = 1 in all non-identical machines. In the example here, both machines have an odd number of failed states where, according to Table 3-2, there will be no real roots outside the poles, only occasional roots between pairs of upstream and downstream pole and two identical roots between last upstream pole and first 59

79 downstream pole. Table 3-12 confirms this. The only real root is at 1. Negative complex roots were also observed here; negative roots were seen to occur in some other example problems as well. The complex roots algorithm of Section 3.4.2 did not face problems detecting these roots. Steady-state probabilities from the analytical solution are shown in Figure 3-14. It is seen that the probabilities are symmetrical about the half-full buffer line at n = 5, with the boundary probabilities symmetrical as well, for example p(0, u_i, 1) = p(N, 1, d_j), etc. The Pareto chart comparison in Figure 3-15 of the probabilities from the analytical and direct Markov chain solutions confirms the accuracy of the analytical solution. Performance measures are compared in Table 3-14. The average buffer level at half the buffer capacity of 10 and the equal blockage and starvation probabilities follow from the symmetry of the steady-state probabilities.
Table 3-12- Transition probabilities for a system with five identical failed states
Upstream machine: failure probability p^u_1; failed-state transition probabilities p^u_12, p^u_23, p^u_34, p^u_45; repair probability r^u_5 = 0.9
Downstream machine: failure probability p^d_1; failed-state transition probabilities p^d_12, p^d_23, p^d_34, p^d_45; repair probability r^d_5 = 0.9
Buffer capacity: 10

80 Table Roots and poles of characteristic polynomial Characteristic polynomial Poles Characteristic polynomial roots Upstream Real Complex i i i Downstream i i i i i 10 61

81 Figure Steady-state probabilities calculated analytically for the system of Table 3-12 Figure Steady-state probabilities calculated numerically for the problem of Table

82 Table Analytical and direct solution performance measures for the system of Table 3-12 Analytical solution Direct Markov chain solution Percent error Average production rate E-04 Average buffer level E-03 Probability of starvation E-03 Probability of blockage E Concluding remarks This chapter studied a Markov chain model of a manufacturing system with two machines and a finite capacity buffer in discrete time with random failures. The machines had a multiple non-operational states structure and sequential transitions between these nonoperational states. Viewed from a manufacturing systems perspective, a methodology was developed to find the system s steady-state probabilities independent of buffer capacity and faster than a direct Markov chain solution when implemented for large buffers or as part of recursive algorithms. The solution process was shown to involve finding real and complex roots of a characteristic polynomial. Effective methodologies were developed for both types of roots. Derivation of performance measures was demonstrated and a range of examples was studied to show different possibilities for the manufacturing system. The next chapter will build on these methods and insights from the solution in this chapter to extend the model here to a system which has multiple non-operational states but any kind of transition between the non-operational states. 63

83 Chapter Four: Generalized Solution Methodology for Steady State Probabilities of Two- Machine, One Buffer Problems with Multiple General Failures This chapter builds on and expands the multiple failures model of the previous chapter to include general transitions between failed states. The emphasis in this chapter is that transition is possible between any two states of the machines. This necessitates a different solution methodology from the one offered in Chapter 3. This model was originally intended in order to solve the steady-state probabilities for machine representing Poisson distributions of the nonoperational cycles. It was seen that the solution to this special case requires solving the more general case explained here, therefore this chapter focuses on the solution of the system with multiple general failures and explains how that solution is applied to special cases. The underlying assumptions, insights and notations are identical or similar to ones already stated in Chapter 3. The solution concept is the same: deriving an alternative analytical solution for the steady-state probabilities of a Markov chain, which is preferable to a direct solution of Markov chain equations when the state-space of the system is large or solution needs to be called recursively. Key differences, however, exist in the formulation of the characteristic equations. Numerical examples have been provided for both general and specific cases. The following sections develop the model, explain the solution methodology and algorithms, and illustrate it with numerical examples. 4.1 Model development This section explains the model and the notation used to develop the transition equations for the Markov chain. As was done previously, the upstream machine is designated with the letter u and the downstream machine with the letter d. In both upstream and downstream machines, the operational state is represented by a 1. The upstream machine has s non- 64

84 operational states, denoted by the subscript i = 1,, s, with the final non-operational state represented by u s. The downstream machine has t non-operational states, denoted by subscript j = 1,, t, where d t is the final non-operational state. A finite buffer separates the two machines to balance out the stochastic effects of machine breakdowns. The state of the system at any time-step (or time-cycle) is represented as the triple (n, α 1, α 2 ) where n = 1,, N is the buffer level, α 1 = 1 or α 1 = u i indicates the upstream machine is in its operational state or ith non-operational state, and α 2 = 1 or α 2 = d j indicates the downstream machine is in its operational state or jth non-operational state. Transition is possible between the operational state and any non-operational state; this is also referred to as a failure since the machine in question goes from an operational to a non-operational state. This failure probability from the operational to any non-operational state is denoted by p u i for the upstream machine or p d j for the downstream machine. Transition is also possible between any non-operational state and the operational state. This is similarly referred to as repair and its probability is similarly denoted r u i or r d j. Transition can occur between any two non-operational states. The probability of transition between non-operational states i and j of the upstream machine is designated p u ij. Likewise for the downstream machine, transition probabilities between non-operational states are designated p d ij. The system Markov chain model is shown in Figure 4-1. To avoid cluttering the figure, only sequential transitions between the non-operational states are shown. 65
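To fix ideas before the transition equations are developed, a minimal Python sketch of a single machine of the kind shown in Figure 4-1 is given below. It assembles a machine-level transition matrix from the failure probabilities, the repair probabilities and the inter-state transition probabilities, filling the diagonal of the failed-state block with whatever probability remains so that each row sums to one (cf. equations 4.1 and 4.2 in the next section). The row-to-column convention, the function and argument names and the numerical values are assumptions of this sketch; as in Chapter 3, the interaction with the buffer and the operation-dependence of failures are captured only by the full system equations.

    import numpy as np

    def general_machine_chain(p_fail, r_repair, p_between):
        # p_fail[i] is the probability of failing from the operational state
        # into non-operational state i+1, r_repair[i] the repair probability
        # from that state, and p_between[k][i] the probability of moving from
        # non-operational state k+1 to state i+1 (its diagonal is ignored and
        # recomputed from what remains).  State 0 is the operational state.
        p_fail = np.asarray(p_fail, dtype=float)
        r_repair = np.asarray(r_repair, dtype=float)
        Z = np.asarray(p_between, dtype=float).copy()
        s = len(p_fail)
        T = np.zeros((s + 1, s + 1))
        T[0, 0] = 1.0 - p_fail.sum()          # remain operational
        T[0, 1:] = p_fail                     # failures into each state
        T[1:, 0] = r_repair                   # repairs back to operation
        for k in range(s):                    # probability of remaining in state k+1
            off_diagonal = Z[k].sum() - Z[k, k]
            Z[k, k] = 1.0 - off_diagonal - r_repair[k]
        T[1:, 1:] = Z
        return T

    # Illustrative values only: two non-operational states.
    T = general_machine_chain(p_fail=[0.05, 0.02],
                              r_repair=[0.3, 0.6],
                              p_between=[[0.0, 0.1],
                                         [0.2, 0.0]])
    assert np.allclose(T.sum(axis=1), 1.0)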

Figure 4-1- Markov chain model of a two-machine system with multiple general failures

4.2 Steady-state equations

This section develops the steady-state transition equations for the Markov chain model of Figure 4-1. As in Chapter 3, it is assumed that machines are operation-dependent and cannot fail if they are not working on a part. Repairs and failures occur at the beginning of a time-cycle and buffer changes at the end of a time-cycle. The machines are synchronous, and production time is assumed to be one time-cycle. Conventions on blockage and starvation are identical to those of Chapter 3. The stationary transition equations are divided into lower boundary equations, where the buffer level n is 0 or 1, internal equations, where n is between 2 and N − 2, and upper boundary equations, where the buffer level is either N − 1 or N. Transient states are listed in Table 4-1. In order to simplify the transition equations, p^u_kk and p^d_ll are defined as the probabilities of remaining in the non-operational states k and l of the upstream and downstream machines, respectively, as follows. This allows several equations to be grouped together and simplified.

$$p^u_{kk} = 1 - \sum_{\substack{i=1 \\ i \neq k}}^{s} p^u_{ki} - r^u_k \qquad (4.1a)$$

$$p^d_{ll} = 1 - \sum_{\substack{j=1 \\ j \neq l}}^{t} p^d_{lj} - r^d_l \qquad (4.1b)$$

The complete steady-state transition equations are listed in Appendix D, along with a discussion of how the assumptions regarding blockage and starvation are represented.

Table 4-1- Transient states

Lower boundary:
(0, 1, d_j); 1 ≤ j ≤ t
(0, 1, 1)
(0, u_i, d_j); 1 ≤ i ≤ s, 1 ≤ j ≤ t
(1, 1, d_j); 1 ≤ j ≤ t

Upper boundary:
(N, u_i, 1); 1 ≤ i ≤ s
(N, 1, 1)
(N, u_i, d_j); 1 ≤ i ≤ s, 1 ≤ j ≤ t
(N − 1, u_i, 1); 1 ≤ i ≤ s

Developing the solution for this model requires a matrix arrangement of probabilities. This is done based on [17], which offers a formulation for a similar problem with multiple operational and failed states. Vectors R^u and R^d are column vectors containing the probabilities of repair from any of the failed states; P^u and P^d are row vectors containing the probabilities of failure into any of the failed states; Y^u and Y^d are scalars representing the probabilities of remaining in the operational states; and Z^u and Z^d are matrices containing the probabilities of transition between the failed states. Mathematical definitions follow in equations 4.2a to 4.2d.

$$R^u = \begin{bmatrix} r^u_1 \\ r^u_2 \\ r^u_3 \\ \vdots \\ r^u_s \end{bmatrix} \qquad R^d = \begin{bmatrix} r^d_1 \\ r^d_2 \\ r^d_3 \\ \vdots \\ r^d_t \end{bmatrix} \qquad (4.2a)$$

$$P^u = \begin{bmatrix} p^u_1 & p^u_2 & \cdots & p^u_s \end{bmatrix} \qquad P^d = \begin{bmatrix} p^d_1 & p^d_2 & \cdots & p^d_t \end{bmatrix} \qquad (4.2b)$$

$$Y^u = 1 - \sum_{i=1}^{s} p^u_i \qquad Y^d = 1 - \sum_{j=1}^{t} p^d_j \qquad (4.2c)$$

$$Z^u = \begin{bmatrix} p^u_{11} & p^u_{12} & \cdots & p^u_{1s} \\ p^u_{21} & p^u_{22} & \cdots & p^u_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ p^u_{s1} & p^u_{s2} & \cdots & p^u_{ss} \end{bmatrix} \qquad Z^d = \begin{bmatrix} p^d_{11} & p^d_{12} & \cdots & p^d_{1t} \\ p^d_{21} & p^d_{22} & \cdots & p^d_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ p^d_{t1} & p^d_{t2} & \cdots & p^d_{tt} \end{bmatrix} \qquad (4.2d)$$

The internal steady-state probabilities are also organized in vectors. Using notation from [17], these are denoted as P^{YΔ}(n), P^{ΔΔ}(n), P^{YY}(n), and P^{ΔY}(n), where n represents the number of parts in the buffer, Y in the superscript stands for an operational machine, and Δ in the superscript represents a non-operational machine. This results in a simplified format for the internal steady-state equations. The boundary equations cannot be put into this matrix format. The complete mathematical notation, as well as the matrix organization of the internal equations, is detailed in Appendix E.

4.3 Solution methodology

As in the previous case of multiple sequential failures, the steady-state probabilities of the Markov chain system can be solved using the purely numerical solution explained in Chapter 3. However, for large buffer capacities and larger numbers of states, as well as for extension to production lines with more than two machines, which involves an iterative solution for the steady-state probabilities, the computational burden of such a numerical solution quickly becomes prohibitive. The alternative, introduced in [4] and [15] and adopted here, is to make an analytical assumption for the steady-state probabilities in the form of a linear combination of multiplied variables. This analytical form for the internal equations is represented in equation 4.3. An initial assumption is made that does not include the constants C. This initial assumption is then substituted back into the steady-state equations to satisfy both the internal and boundary equations, and in the process the variables C, X, U, D are found.

$$p(n, u_i, d_j) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m}, \quad i = 1, \ldots, s, \; j = 1, \ldots, t, \; 2 \le n \le N-2 \qquad (4.3)$$

The solution in this chapter is developed using a matrix organization of variables, which leads to a set of two simultaneous eigenvalue equations. This matrix organization and the process of developing the simultaneous eigenvalue equations are adopted from [17]. This section addresses the solution methodology. Section 4.3.1 introduces the initial analytical assumption for the steady-state probabilities and finds a pair of eigenvalue equations whose solution is key to the rest of the solution. Section 4.3.2 explains in detail how the simultaneous eigenvalue equations are solved. It is shown that some structures and transition probabilities require extra steps in the solution methodology; these structures are discussed first. The solution relies on the significant points of the eigenvalue equations, which are also explained. A distinction is made between real and complex roots, and processes are developed to identify both. A final discussion is included on the numerical tolerances required to distinguish roots and avoid duplicates. Throughout this development, when using transition probabilities, the general structure of Figure 4-1 and the general vectors of 4.2a to 4.2d are assumed. When dealing with cases where certain transitions do not exist, the corresponding probabilities are set to zero.

4.3.1 Initial solution to internal equations

It is initially assumed that the internal steady-state probabilities have the analytical form of equations 4.4a to 4.4d.

$$p(n, 1, d_j) = X^n D_j \qquad (4.4a)$$
$$p(n, u_i, d_j) = X^n U_i D_j \qquad (4.4b)$$
$$p(n, 1, 1) = X^n \qquad (4.4c)$$
$$p(n, u_i, 1) = X^n U_i \qquad (4.4d)$$

where 1 ≤ i ≤ s, 1 ≤ j ≤ t and 2 ≤ n ≤ N − 2. Introducing vectors D^u and D^d as follows, the steady-state probabilities can be represented in vector format.

$$D^u = \begin{bmatrix} U_1 \\ U_2 \\ U_3 \\ \vdots \\ U_s \end{bmatrix} \qquad (4.5a) \qquad D^d = \begin{bmatrix} D_1 \\ D_2 \\ D_3 \\ \vdots \\ D_t \end{bmatrix} \qquad (4.5b)$$

In a process similar to Chapter 3, but in matrix form, the initial analytical form of the internal probabilities 4.4a to 4.4d is substituted back into the internal equations and organized and simplified. After introducing scalars A and B such that X = A/B, this results in the two principal equations below. The complete derivation of these equations is explained in detail in Appendix F.

$$Z^{uT} D^u + \frac{1}{Y^u - A}\, P^{uT} R^{uT} D^u - B\, D^u = 0 \qquad (4.6a)$$

$$Z^{dT} D^d + \frac{1}{Y^d - \frac{1}{A}}\, P^{dT} R^{dT} D^d - \frac{1}{B}\, D^d = 0 \qquad (4.6b)$$

Factoring out D^u and D^d, 4.6a and 4.6b are rewritten as

$$(\Upsilon - B I)\, D^u = 0 \qquad (4.7a)$$

$$\left(\Delta - \frac{1}{B} I\right) D^d = 0 \qquad (4.7b)$$

where I is the identity matrix and

$$\Upsilon = Z^{uT} + \frac{1}{Y^u - A}\, P^{uT} R^{uT} \qquad (4.8a)$$

$$\Delta = Z^{dT} + \frac{1}{Y^d - \frac{1}{A}}\, P^{dT} R^{dT} \qquad (4.8b)$$

Equations 4.7a and 4.7b indicate that B and 1/B are eigenvalues of matrices Υ and Δ, respectively. The solution therefore involves finding values of A at which one of the eigenvalues of Υ and one of the inverse eigenvalues of Δ become equal. These values of A are referred to as roots throughout the rest of this work. For ease of reference, the matrix Υ and the term upstream matrix are used interchangeably, since Υ includes transition probabilities from the upstream machine Markov chain only; similarly, Δ and the term downstream matrix are used interchangeably.

There are s + t pairs of scalars A, B, equal to the aggregate number of non-operational states of the upstream and downstream machines, that simultaneously satisfy the two eigenvalue equations 4.7a and 4.7b. It is also evident that at any A there are s upstream matrix eigenvalues and t downstream matrix eigenvalues. In general, both A and B can be complex numbers. For every pair of A and B that simultaneously satisfies 4.7a and 4.7b, there is one scalar X, as well as one vector D^u and one vector D^d. Putting together all s + t pairs of A and B, there are s + t scalars X, which are placed in a row vector, as will be shown below. The s + t vectors D^u and D^d are also arranged in two matrices, every column of which is associated with one pair A and B. When the constants C are found next, they are arranged in vector form.

Similarities are apparent between the simultaneous eigenvalue equations 4.7a and 4.7b and the characteristic polynomial of the multiple sequential failures system of Chapter 3. This is not incidental; it is the direct result of the problem formulation introduced in [4] and adopted in [15], [17] and in this thesis.

4.3.2 Solution to the simultaneous eigenvalue equations

Because the upstream and downstream matrices in equations 4.7a and 4.7b have multiple eigenvalues, it was found that the best approach to solving for identical eigenvalues of the matrices is to form a pair of eigenvalues, where one eigenvalue is selected from the eigenvalue set of the upstream matrix Υ and the second is the inverse of an eigenvalue selected from the eigenvalue set of the downstream matrix. The difference between these two values is defined as a function of A, and values of A are sought at which the difference approaches zero. In other words, points A are sought at which the difference between one upstream eigenvalue and one inverse downstream eigenvalue becomes zero (or very small, in a numerical implementation). Since there is no information to determine which eigenvalues may become equal, all eigenvalue pairs are considered. At every root A, the equal upstream and inverse downstream eigenvalue is the corresponding B.

The solution to the eigenvalue equations relies on the interaction between the eigenvalues of the upstream and downstream matrices. The solution identifies the points of singularity of these matrices, where eigenvalues approach zero or infinity, and these are used to delineate intervals for search algorithms and as starting points for other root-finding procedures. These are generally referred to as poles and will be defined precisely in subsequent sections.
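To make the pairing concrete, the following minimal sketch (Python/NumPy, not the implementation used in this work) assumes that upsilon(A) and delta(A) are caller-supplied functions returning the matrices of equations 4.8a and 4.8b for a given A, and evaluates the difference between one sorted upstream eigenvalue and one inverse sorted downstream eigenvalue; the function names and the choice of sorting key are illustrative assumptions.

```python
import numpy as np

def sorted_eigs(M):
    """Eigenvalues of M, ordered with the simple sorting scheme used in this
    chapter (here: by the magnitude of their real parts)."""
    w = np.linalg.eigvals(M)
    return w[np.argsort(np.abs(w.real))]

def pair_difference(A, upsilon, delta, k, l):
    """Difference between the k-th sorted eigenvalue of the upstream matrix
    Upsilon(A) and the inverse of the l-th sorted eigenvalue of the downstream
    matrix Delta(A).  A value of A at which this difference vanishes is a
    candidate root of equations 4.7a and 4.7b, and the common eigenvalue is
    the corresponding B."""
    eig_u = sorted_eigs(upsilon(A))   # s upstream eigenvalues
    eig_d = sorted_eigs(delta(A))     # t downstream eigenvalues
    return eig_u[k] - 1.0 / eig_d[l]  # upstream eigenvalue minus inverse downstream eigenvalue
```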

In the general sense, finding the real roots of 4.7a and 4.7b is relatively straightforward. Finding the complex roots is a much more complicated process that generally requires starting points. These starting points are selected from the real points of intersection between the upstream and downstream matrix eigenvalues. Therefore the general procedure is to find the real roots and intersections first.

It was observed that the structure, existence and magnitude of the transition probabilities in the system's Markov chain, i.e. Figure 4-1, determine the relative ease with which the simultaneous eigenvalue equations can be solved, which in turn affects the choices made in developing the solution methodology. Also, viewed from a manufacturing systems standpoint, different transition structures in the Markov chain can be interpreted as identifying different types of practical manufacturing scenarios. These structures are explained in further detail below, and numerical examples are provided to describe the behavior of each. In order to ensure the generality and efficiency of the solution, the solution methodology is developed in a general but progressive manner that becomes more complex only if the problem requires it. If at any point all the roots are found, the solution algorithm concludes without going through the remainder of the process. This allows simpler cases to be solved faster while still accommodating more difficult cases. The remainder of this section first details the structure of transitions for several practical manufacturing system interpretations of the original Markov chain of Figure 4-1, then explains and identifies the points of singularity of the upstream and downstream matrices. The procedures for real and complex roots are discussed with a view to the transition structures shown, and an overall algorithm is developed to yield the final solution.

Approximate Poisson distribution

Using the Markov chain model to approximate a known probability distribution is a main theme of this thesis, in this and subsequent chapters. The focus here is on the Poisson distribution, but the same analysis can be extended to other distributions, including Weibull and triangular. The main idea is to have the number of time-cycles the system spends in a non-production state, given that it started in a production state, be approximately equal to a Poisson random variable. In the Markov chain model, this is achievable by having a single operational state, multiple non-operational states, and failure that is only possible into the first non-operational state. Once in a non-operational state, the system can stay in that state, transition forward to the next non-operational state, or be repaired back to the operational state. In this model, if the system starts in the operational state, the transition probabilities can be set so that the number of time-cycles until the system returns to the operational state is approximately a Poisson random variable. This structure is shown in Figure 4-2 for the upstream and downstream machines. It is based on the general transition structure of Figure 4-1 with the probabilities of non-existing transitions set to zero.

A random variable X is said to have a Poisson distribution with parameter λ > 0 if its probability mass function takes the values of equation 4.9 [42]. By generating the Poisson cumulative distribution function (CDF) up to 0.99 and minimizing the error between the probability of spending k time-cycles in a non-production state and the Poisson CDF, the number of cycles spent in a non-production state can be approximated as a Poisson random variable.

$$P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}; \quad k = 0, 1, 2, \ldots \qquad (4.9)$$
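As a small illustration of the scale involved, the sketch below (Python; illustrative only, not the fitting procedure of this thesis) computes how many terms of the Poisson distribution are needed to reach a CDF of 0.99, which suggests roughly how many non-operational states a Poisson-approximating machine requires for a given λ.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function of equation 4.9."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def states_for_cdf(lam, coverage=0.99):
    """Smallest number of terms k = 0, 1, ... needed for the Poisson CDF to
    reach the requested coverage; a rough guide to how many non-operational
    states are needed to represent the non-production duration."""
    cdf, k = 0.0, 0
    while cdf < coverage:
        cdf += poisson_pmf(k, lam)
        k += 1
    return k

if __name__ == "__main__":
    for lam in (2, 5, 10):
        print(lam, states_for_cdf(lam))   # lambda = 5 needs roughly a dozen terms
```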

The expected value of a Poisson distribution is equal to its parameter λ. Intuitively, this is the value one would expect the random variable to take, on average, if the random process could be repeated an infinite number of times. In the Markov chain approximation, every time the chain exits the production state, one would expect the chain to spend λ cycles in a non-production state. This indicates that for every time-cycle spent in the production state, the system spends λ time-cycles in non-production cycles.

One example interpretation of the Poisson-approximated Markov chain is to represent a demand process. In this interpretation, the operational state represents the occurrence of demand, and time spent in the non-operational states represents the time between two consecutive demand occurrences, which is approximated by a Poisson random variable. The Poisson parameter λ is approximately equal to the number of time-steps between two demand occurrences.

Figure 4-2- Upstream and downstream Markov chains representing Poisson distributions

Wait/repair state with a Poisson demand

This structure is shown in Figure 4-3. In this model, the upstream machine has two non-operational states (u_1 and u_2) that do not transition into each other. These are referred to as the "repair" and "wait" states. The operational state can transition into either of the two non-operational states, and repair is possible from either of them. The downstream machine represents a Poisson distribution. This model may be interpreted as a supply/demand system where the upstream machine is a manufacturing facility with periods of stochastically occurring non-production in the repair state and periods of controlled, deliberate non-production in the wait state. Demand occurs with a Poisson distribution. The buffer in this case represents the finished goods inventory. Performance of this model is best understood in terms of balancing the upstream machine's isolated efficiency against the downstream demand. Efficiency is defined as the number of parts produced by the machine per time-cycle regardless of its interaction with the buffer and the downstream machine; in mathematical terms, it is defined in 4.10 [15]. This model forms the basis of the next chapter, where a model is built to control the periods of non-production in the wait state; the repair state represents stochastic breakdowns of the production plant. Extensive use of efficiency is made to study transient system performance under control.

$$e = \left(1 + \frac{p^u_1}{r^u_1} + \frac{p^u_2}{r^u_2}\right)^{-1} \qquad (4.10)$$

Figure 4-3- Markov chain of system with wait and repair states and Poisson approximated demand

Different operational mean times with Poisson demand

Similar to the multiple parallel states problem introduced in Chapter 3 and developed in [15], a transition structure with transitions between the operational state and independent, parallel non-operational states may be interpreted as a machine with different independent failure modes. An example Markov chain is shown in Figure 4-4, with three non-operational states for the upstream machine. By choosing appropriate transition probabilities for each non-operational state, this system can be interpreted as having short-, medium- and long-term failure modes, each with its own mean time to failure (MTTF) and mean time to repair (MTTR), with demand modeled as occurring in Poisson-distributed time-cycles. Mean time to failure (in time-cycles) for each state is defined as the inverse of the failure probability; similarly, mean time to repair (in cycles) is the inverse of the repair probability. Upstream machine efficiency is defined similarly to the wait/repair state system of the previous section [15]. This is shown mathematically in equation 4.11.

$$e = \left(1 + \sum_{i=1}^{s} \frac{p^u_i}{r^u_i}\right)^{-1} \qquad (4.11)$$
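A minimal numerical illustration of equation 4.11 follows (Python). The failure and repair probabilities are invented values for illustration, not taken from the thesis examples; the per-mode MTTF and MTTR are the inverses of the failure and repair probabilities as defined above.

```python
def isolated_efficiency(p, r):
    """Isolated efficiency of equation 4.11: e = 1 / (1 + sum_i p_i / r_i)."""
    return 1.0 / (1.0 + sum(pi / ri for pi, ri in zip(p, r)))

# Hypothetical short-, medium- and long-term failure modes.
p = [0.05, 0.01, 0.002]   # failure probabilities p^u_i
r = [0.5, 0.1, 0.02]      # repair probabilities r^u_i

for i, (pi, ri) in enumerate(zip(p, r), start=1):
    print(f"mode {i}: MTTF = {1/pi:.0f} cycles, MTTR = {1/ri:.0f} cycles")
print(f"isolated efficiency e = {isolated_efficiency(p, r):.3f}")
```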

Figure 4-4- Upstream machine with different operational mean times and Poisson downstream states

Mixed products with Poisson demand

To represent an unreliable machine that has a probabilistic distribution of production cycles, e.g. a machine that makes mixed products, the transition structure of the upstream machine was modified as shown in Figure 4-5. In this example, state A represents an auxiliary state from which there are only two possible transitions: the machine can either fail into a non-productive state (u_1) or enter a production cycle requiring a probabilistically varying number of cycles to complete (u_2, u_3, …, u_s). The machine transitions back to the auxiliary state only after production is complete at u_s. The downstream machine in Figure 4-5 represents Poisson demand.

Figure 4-5- System with mixed products upstream, Poisson demand downstream

General transitions

This is the general transitions structure of Figure 4-1. Although it is not immediately applicable to a practical manufacturing system interpretation, it is critical in the sense that none of the transition structures mentioned so far can be solved independently without modeling the system in the general structure of Figure 4-1.

Poles and points of singularity

The upstream machine matrix Υ has a point of singularity at A = Y^u. At this point, one of the eigenvalues of the upstream matrix approaches infinity. This is referred to as the upstream pole. One point of singularity for the downstream matrix Δ is where the determinant of Δ becomes zero. At this point, one (or more) of the eigenvalues of the downstream matrix approaches zero and the inverse of that eigenvalue approaches infinity. This point is referred to as the downstream pole. A second point of singularity of the downstream machine is at A = 1/Y^d. At this point Δ becomes undefined, but its eigenvalues do not change behavior around this point, and it is therefore not referred to as a pole.
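The sketch below (Python/NumPy) shows one way these points might be located numerically. The upstream pole is available directly as Y^u, while the downstream pole is found as a zero of det Δ(A) by interval halving; delta(A) is an assumed caller-supplied function returning the matrix of equation 4.8b, and the search interval is supplied by the caller (the text below uses the interval between 1 and 1/Y^d).

```python
import numpy as np

def downstream_pole(delta, lo, hi, tol=1e-12, max_iter=200):
    """Locate the downstream pole, i.e. the real A at which det(Delta(A)) = 0,
    by bisection.  Assumes det(Delta(A)) changes sign on [lo, hi]."""
    f_lo = np.linalg.det(delta(lo))
    f_hi = np.linalg.det(delta(hi))
    if f_lo * f_hi > 0:
        raise ValueError("determinant does not change sign on the given interval")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = np.linalg.det(delta(mid))
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# The upstream pole needs no search: it is simply A = Y_u, the probability of
# the upstream machine remaining operational.
```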

Real roots and intersections

Real roots are real points A where the (real or complex) value of one upstream eigenvalue becomes equal to the (real or complex) value of one inverse downstream eigenvalue. Real intersections are points where the real parts of an upstream eigenvalue and an inverse downstream eigenvalue are equal but the imaginary parts are unequal. The procedure developed in this section finds both the real roots and the real intersections. Real intersections are used as starting points for the algorithm that finds the complex roots.

As pointed out earlier, the structure and probabilities of transitions in the Markov chain affect the interactions of the upstream and downstream eigenvalues, which in turn affect the intervals where the solution is applied. Therefore, in this section, examples are shown to demonstrate the range of possible interactions between upstream and downstream eigenvalues. These are referred back to the Markov transition structures introduced above. The solution procedure is developed in a manner that encompasses this range of possible interactions.

For all of the cases explored, when analyzing the behavior of the real roots and intersections of the simultaneous eigenvalue equations, the upstream and downstream eigenvalues behave in one of two ways: either the upstream pole A = Y^u is smaller than the downstream pole (the point where the determinant of Δ is zero), or the downstream pole is smaller than the upstream pole. The first case was generally observed when the failure probabilities of the downstream machine were relatively small in magnitude compared to its repair probabilities. The second case was generally observed when the downstream failure probabilities were large compared to the repair probabilities. If the machines are interpreted as machines in a manufacturing system, large failure probabilities do not represent practical manufacturing scenarios; however, other interpretations are possible, as discussed in the interpretations above. Also, for the sake of generality, a solution needs to include all possible interactions between the two sets of eigenvalues.

Figure 4-6 shows an example of the real parts of the upstream and inverse downstream eigenvalues as a function of A for a system with general upstream and downstream transitions when the upstream pole is smaller than the downstream pole. The upstream eigenvalues (eigenvalues of Υ in 4.8a) are shown in solid blue, and the inverse downstream eigenvalues (inverse eigenvalues of Δ in 4.8b) in dashed red. The upstream and downstream poles as defined above are shown as straight vertical lines where one eigenvalue approaches infinity. The x-axis represents the real values that A can take and the y-axis represents the eigenvalues. The intersections of the red and blue lines determine the real intersections of the upstream and inverse downstream eigenvalues. Not all intersections are necessarily roots, as there may be points where the real components of an upstream eigenvalue and an inverse downstream eigenvalue are equal but the imaginary parts are not.

Figure 4-6- Real upstream and inverse downstream eigenvalues as a function of A for a system with general transitions. Upstream pole is smaller than downstream pole.

For the majority of example cases where the distinct pattern of Figure 4-6 was observed, the real roots and intersections were confined between the two poles, the inverse downstream eigenvalues (dashed red lines) were larger than the solid blue upstream eigenvalues, and the downstream pole (where the determinant of Δ is zero) was larger than the upstream pole A = Y^u. Also, the upstream eigenvalues were confined between zero and 1, and the inverse downstream eigenvalues were larger than 1. These observations result directly from the assumptions made for the values of the failure and repair probabilities. At the upstream pole, one upstream eigenvalue descends from positive infinity and intersects all the inverse downstream eigenvalues. The last point of intersection is at A = 1. At this point, one of the inverse downstream eigenvalues starts descending toward negative infinity and intersects all the upstream eigenvalues. No intersection is observed outside the two poles in this case.

Based on these observations, a potential solution is to use the bisection method of successive interval halving [45] to look for the intersections of the largest upstream eigenvalue and the inverse downstream eigenvalues between the upstream pole A = Y^u and A = 1. The rest of the intersections can be determined by using the bisection method to find the intersections of the smallest inverse downstream eigenvalue and all upstream eigenvalues between A = 1 and the downstream pole. In order to know which eigenvalue is the largest and which is the smallest, it is necessary to devise a sorting scheme. It was found that a simple sorting scheme based on the magnitude of the real parts of the eigenvalues is sufficient for this solution methodology. This scheme is applied consistently throughout the rest of the solution: where reference is made to the number of an eigenvalue, it refers to the position of that eigenvalue in the sorted vector of eigenvalues.

If, however, the downstream failure probabilities are large, this drives the downstream pole to be smaller than the upstream pole and breaks up the pattern of intersections between upstream and downstream eigenvalues. An example is shown in Figure 4-7 for machines with general transitions where the downstream pole is negative and one inverse downstream eigenvalue is consistently smaller than the upstream eigenvalues and intersects all of them outside of the two poles. In similar cases, intersections were observed on the negative x-axis. These negative intersections were never found to be real roots, but since they have led to the identification of negative complex roots, which have been observed to occur, it is important to identify both positive and negative intersections. As a result, the procedure above for finding real roots and intersections must be modified to account for the case when the downstream pole is smaller than the upstream pole. A noteworthy observation is that the two sets of eigenvalues still intersect at A = 1, and the equal eigenvalue at that point is B = 1. In fact, much as with the characteristic polynomial of Chapter 3, A = B = 1 is a consistent root of the system.
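The interval-halving step referred to above can be sketched as follows (Python). The argument f is any function of A returning the difference between one upstream eigenvalue and one inverse downstream eigenvalue, such as the pairwise difference sketched earlier. The point returned may be a real root or only a real intersection; that distinction is made afterwards by comparing imaginary parts.

```python
def bisect_real_part(f, A_lo, A_hi, tol=1e-10, max_iter=200):
    """Successive interval halving on the real part of f(A).  Returns a
    candidate point, which may be a real root or only a real intersection,
    or None when the real part of f does not change sign on the interval."""
    g = lambda A: f(A).real
    g_lo, g_hi = g(A_lo), g(A_hi)
    if g_lo * g_hi > 0:
        return None
    for _ in range(max_iter):
        A_mid = 0.5 * (A_lo + A_hi)
        g_mid = g(A_mid)
        if abs(g_mid) < tol or (A_hi - A_lo) < tol:
            return A_mid
        if g_lo * g_mid < 0:
            A_hi = A_mid
        else:
            A_lo, g_lo = A_mid, g_mid
    return 0.5 * (A_lo + A_hi)
```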

Figure 4-7- Real upstream and inverse downstream eigenvalues as a function of A for a system with general transitions. Upstream pole is larger than downstream pole.

The two distinct patterns of Figure 4-6 and Figure 4-7 are not limited to machines with general transitions. Two similar patterns are shown in Figure 4-8 and Figure 4-9 for a system that has a mixed products upstream machine and a downstream machine with back-and-forth transitions only between adjacent non-operational states, as well as transitions between the operational and non-operational states. The pattern in Figure 4-8 appears much different from that of Figure 4-6 due to downstream inverse eigenvalues branching and multiple inverse eigenvalues approaching infinity at the downstream pole.

Figure 4-8- Eigenvalues for a system with mixed products upstream and back and forth transitions downstream. Upstream pole is smaller than downstream pole.

Figure 4-9- Eigenvalues for a system with mixed products upstream and back and forth transitions downstream. Upstream pole is larger than the negative downstream pole.

Machines representing Poisson distributions exhibit a rather different eigenvalue behavior. An example plot is shown in Figure 4-10. The downstream pole is very close to zero and the upstream pole is only slightly larger than that. This is due to the large failure probabilities required to generate the transitions that create the approximately Poisson-distributed failed cycles. Also, there are multiple inverse downstream eigenvalues approaching infinity at the downstream pole and therefore multiple intersections in the small interval between the two poles. There are multiple branching points where one eigenvalue suddenly changes from being real to being complex. There are also points where two eigenvalues cross over and exchange paths. The magnitude of the eigenvalue intersections also appears to be affected: intersections range from very small numbers confined between the two poles to very large numbers. This was found to be more or less related to the magnitude of 1/Y^d, where Y^d is the probability of remaining in the downstream operational state and 1/Y^d is the point of singularity defined in the discussion of poles above. The structure of Figure 4-10 was almost universally observed for machines that represent Poisson-approximated failed states.

Figure 4-10- Upstream and inverse downstream eigenvalues for a system with Poisson-approximating upstream and downstream machines

Given the range of possible interactions between upstream and inverse downstream eigenvalues explored in Figure 4-6 to Figure 4-10, it can be seen that an algorithm that seeks to identify all intersections and roots between upstream and inverse downstream eigenvalues must consider all pairs of upstream and inverse downstream eigenvalues. In addition, to account for the varying locations of the two poles relative to each other, the algorithm must determine the location of the poles and then assign a vector of bounds arranged by size, where every two consecutive elements bind a search interval in which bisection is applied. The largest element in the vector of bounds is a multiple of 1/Y^d; 4/Y^d was determined empirically to be the value that covers all situations explored. 1/Y^d and 2/Y^d are added as well. On the positive side of the horizontal axis, the upstream pole, the downstream pole (when positive) and 1 are the other elements. On the negative side of the horizontal axis, an empirically determined division of [−4/Y^d, −2/Y^d, −1/Y^d] was seen to cover all cases. The upstream pole and 1/Y^d are found directly from the Markov chain probabilities. The downstream pole is found using the bisection method to determine the point of zero determinant; the interval for this search is between 1 and 1/Y^d. In order to account for the eigenvalue streams that branch off and cross over, as shown in Figure 4-8, it was found that the most practical approach is to add any point of intersection found back into the original vector of bounds to generate new intervals and to iterate the search process within these new intervals until no new intersection is detected. The process for finding real roots and intersections can be summarized as follows.

Step 1. Identify all the points of significance: the upstream pole and Y^d from the Markov chain, and the downstream pole from a numerical interval-halving process.

Step 2. Generate an initial vector of bounds, [−4/Y^d, −2/Y^d, −1/Y^d, 0, DP, UP, 1, 1/Y^d, 2/Y^d, 4/Y^d], where DP is the downstream pole and UP is the upstream pole. Sort by magnitude.

Step 3. Run bisection between every two consecutive elements in the vector of bounds for all eigenvalue pairs. Check results against the vector of bounds and the points already found to discard duplicates.

Step 4. If there is a new point of intersection, generate two new intervals from the intersection found and the bounds within which it was found. Repeat the search process within these new intervals until no new intersection is found.

Step 5. Separate real roots and real intersections by checking the eigenvalues. If at any point both the real and imaginary parts of an upstream eigenvalue and an inverse downstream eigenvalue are equal, the point is a root. If only the real parts are equal, the point is an intersection.

A simplified flowchart based on these steps is shown in Figure 4-11.

Figure 4-11- Simplified algorithm flowchart for real roots and intersections

The real roots and intersections process of Figure 4-11 was found to be quite successful in identifying real roots and intersections. Nevertheless, there were cases where all provisions for eigenvalue branching and crossing and for large real roots failed to detect all the real roots and intersections. In all such cases, however, the procedure for complex roots described next compensates for this by detecting real roots that may have been bypassed.
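The bounds-vector bookkeeping of Steps 1 to 4 can be sketched as the following driver (Python; a sketch under the assumptions that Y_d, DP and UP have been obtained as in Step 1 and that an interval-halving routine such as the one sketched earlier is passed in). Only the assembly of the bounds and the reinsertion of newly found points is shown; duplicate filtering and the root/intersection separation of Step 5 are omitted here.

```python
def initial_bounds(Y_d, DP, UP):
    """Step 2: assemble the initial vector of bounds from the empirically
    chosen multiples of 1/Y_d, the two poles, 0 and 1."""
    b = [-4/Y_d, -2/Y_d, -1/Y_d, 0.0, DP, UP, 1.0, 1/Y_d, 2/Y_d, 4/Y_d]
    return sorted(set(b))

def find_candidates(pair_functions, bounds, bisect, tol=1e-8, max_sweeps=20):
    """Steps 3 and 4: bisect every eigenvalue-pair function on every interval,
    add each new candidate point back in as an extra bound, and repeat until no
    new point appears.  `bisect(f, lo, hi)` is any interval-halving routine
    returning a point or None; `pair_functions` is a list of callables
    A -> complex difference, one per (upstream, inverse-downstream) pair."""
    found = []
    for _ in range(max_sweeps):
        new_points = False
        pts = sorted(set(bounds + found))
        for lo, hi in zip(pts[:-1], pts[1:]):
            for f in pair_functions:
                c = bisect(f, lo + tol, hi - tol)   # stay slightly off the bounds/poles
                if c is not None and all(abs(c - q) > tol for q in pts + found):
                    found.append(c)
                    new_points = True
        if not new_points:
            break
    return found
```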

Complex roots

The complex roots algorithm relies on the poles, the points of singularity, and the real roots and intersections to generate search intervals. The horizontal A-axis is partitioned into starting points for the application of a secant root-finding algorithm [45], which again must be applied to every pair of upstream eigenvalues and inverse downstream eigenvalues. The function whose roots are sought is the difference between an upstream eigenvalue and an inverse downstream eigenvalue. Since it is not clear which eigenvalues lead to roots, it is inevitable that all eigenvalue pairs be evaluated. The sorting scheme introduced earlier, where eigenvalues are sorted according to the magnitude of their real components, is employed to maintain a consistent sequence of eigenvalues.

The secant method is a quasi-Newton method in the sense that it uses a finite-difference approximation of the function derivative in Newton's method. It starts with two initial values and approximates the root of the function as the root of the secant line between the two endpoints. This generates a new point, which is used with the second of the two initial values to find a new secant line and secant root. At every iteration the two most recent values are used to generate secant lines. This is repeated until convergence, when either the function value or the difference between two consecutive points becomes very small. The secant method is appropriate for functions that cannot be represented analytically, which was one reason for its choice here. The other reason is that, in this problem, the secant method generates complex numbers from real starting points because the successive function evaluations cross into the complex domain as eigenvalues become complex. This is a desirable characteristic, as there is no a priori information on the magnitude or location of the imaginary components of the roots, unlike the real components, which are bounded and searched.
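A compact sketch of such a secant iteration is given below (Python). It accepts any function f(A) returning the difference between one upstream eigenvalue and one inverse downstream eigenvalue; because that difference can be complex, the iterates move off the real axis naturally even from real starting points, which is the behavior described above. The tolerances are illustrative.

```python
def secant_root(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration: each new point is the root of the secant line through
    the two most recent iterates.  Converges when the function value or the
    step between consecutive iterates becomes very small; returns None if the
    iteration stalls or fails to converge."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        denom = f1 - f0
        if denom == 0:
            return None
        x2 = x1 - f1 * (x1 - x0) / denom   # root of the secant line
        f2 = f(x2)
        if abs(f2) < tol or abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f2
    return None
```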

The algorithm begins by generating a vector of starting points that gathers and sorts all the real roots and intersections of the previous subsection, as well as the significant points from the initial vector of bounds: 0, DP, UP and 1/Y^d. The secant method is applied using each consecutive pair of elements in the vector of starting points as initial values. In many example cases this is sufficient to find all complex roots. With sparser transition matrices and large failure probabilities, particularly in cases with Poisson-approximating machines, complex roots occur whose real components are very close together or whose imaginary components are very small, and these can be difficult to find numerically. To account for this, an iterative process is employed. If the first application of the secant method does not find all the roots, whose number is known, the roots found from the initial application are added back into the vector of starting points. These are, for the most part, complex roots, as the real roots algorithm will in most cases have found all the real roots. The sorting scheme based on the magnitude of the real component is again employed to arrange the new starting points. The secant search is then repeated for every new interval generated. The new roots from the secant method are again added to the vector of starting points, and the process is repeated until no new roots can be found from the starting points. If at any point all roots are found, the algorithm is concluded.

If the procedure as described above does not find all the roots, the last search vector is systematically divided to generate new starting points for the secant method. This is done by dividing in half the interval between every two consecutive points in the vector of secant starting points. The secant method is applied again using every two new points as initial values. If this finds all the roots, the algorithm is concluded. If new roots are found but roots are still missing, these new roots are added back to the vector of secant starting points and the secant method applied to every new interval generated. This process is repeated two times, dividing the interval between every two points into 2 and then 4 parts.

The great majority of the example cases explored for this work were solved at this point. However, exceptional cases remained with missing roots. These were universally observed to be Poisson-approximating upstream and downstream machines, with the missing root being a complex root very close to a root already found and close to Y^u, the probability of the upstream machine remaining operational. It was also observed that these cases exhibited similar behavior when the real and complex parts of the roots were studied. When viewed as a surface function of real and imaginary components, the difference between upstream and inverse downstream eigenvalues has an elliptically shaped trench that is close to a circle centered at 0 with a radius of Y^u. It appeared, therefore, that if a root occurred along this trench, the secant method as described would be unable to find it. Therefore, once the application of the secant method is concluded but roots remain unfound, the function is redefined as a function of real and imaginary components. This function is then minimized across all eigenvalue pairs, both inside the circle centered at 0 with radius Y^u and slightly off it by a tolerance on both sides. This is demonstrated in detail with a numerical example in the numerical results of Section 4.4. The process for finding complex roots can be summarized in the following steps.

Step 1- Generate a sorted vector of starting points that includes the real roots and intersections, 0, 1, DP, UP, Y^u and 1/Y^d.

Step 2- Run the secant method between every two starting points across all eigenvalue pairs, finding roots of a function defined as the difference between one upstream eigenvalue and one inverse downstream eigenvalue.

Step 3- Discard duplicates and points that are not roots. Exit if all roots are found.

Step 4- Add the roots from the secant method to the vector of starting points. Search and repeat until no new roots are found. Exit if all roots are found.

Step 5- Divide the starting-point space by 2, generating a new starting-point vector. Repeat steps 2 to 4.

Step 6- Divide the starting-point space by 4, generating a new starting-point vector. Repeat steps 2 to 4.

Step 7- Define the function as the difference between a pair of upstream and inverse downstream eigenvalues as a function of real and imaginary components. Minimize the function for all eigenvalue pairs bound inside a circle centered at 0 with radius Y^u. Check the minimization results for roots. Exit if all roots are found.

Step 8- Minimize the function of step 7 for all eigenvalue pairs bound between two ellipses straddling the circle of radius Y^u centered at 0. Check the minimization results for roots. Exit if all roots are found.

A simplified flowchart of these steps is shown in Figure 4-12.

Figure 4-12- Simplified flowchart of complex roots solution process

Tolerances and avoiding duplicate roots

The process described so far generates a large number of results that have to be sifted through in order to find precise roots. Both the real roots solution algorithm and the complex roots solution algorithm search over all eigenvalue pairs within all intervals and at all endpoints. This may return results that are not roots, or roots that have already been found, both real and complex. If two roots are very closely spaced, the convergence tolerances need to be narrow enough to allow the inclusion of both. The range of the magnitude of the A values is also a major factor in the elimination of unreasonably large numbers and in the setting of tolerances.

Since the real roots algorithm relies on a bisection process confined between two endpoints, the algorithm converges either to a real root or real intersection within the interval or to one of the endpoints of that interval. Endpoints can easily be eliminated using a check at every interval. There is no possibility of duplicate roots, because different pairs of upstream and downstream eigenvalues do not intersect at the same root. There is a possibility of duplicate real intersections where a complex eigenvalue (from the upstream or downstream machine) occurs in a conjugate pair and both conjugates intersect another eigenvalue from the other machine; this is easily eliminated, since both conjugates intersect at identical points. Therefore, after eliminating endpoints and repetitions, the remaining convergence points from the real roots solution algorithm are either real roots or real intersections. A distinction must be made between real roots and real intersections, which, as explained earlier, are points where a pair of upstream and downstream eigenvalues has identical real components but different imaginary components. This is accomplished by creating sorted vectors of upstream and downstream eigenvalues at every convergence point and checking whether any pair of downstream and upstream eigenvalues has equal real and imaginary components, or whether a pair exists with equal real components but different imaginary components. The former make up the list of real roots, the latter the list of real intersections. Both real roots and real intersections are stored for the application of the secant algorithm. The locations of the equal eigenvalue pair within the sorted upstream and inverse downstream eigenvalue vectors are also stored to avoid duplicate roots in later stages of the solution algorithm, as explained below. Finally, a check is made to ensure that A = B = 1 has been found as a real root. If not, it is manually added to the list of real roots.

The complex roots algorithm starts with the application of the secant method for every eigenvalue pair between every two endpoints, as explained above. Unlike the bisection method, secant method results are not confined between the two endpoints. Therefore, results from the application of the secant method can converge on an endpoint, a real root that has already been found, infinity, points that are not roots, complex roots, and occasionally real roots that were missed in the application of the real roots algorithm. Duplicates frequently occur here, as different eigenvalue pairs converge on points that are not necessarily roots. Also, the secant method is applied in a recursive algorithm, both before and after the division of endpoints, which generates large numbers of points requiring efficient sifting. Checking for equal eigenvalues at every point is possible, but not very efficient. The best practice, it was found, is to systematically and sequentially eliminate unnecessary convergence points as follows.

The first step is the elimination of results that converge on secant endpoints. A vector of secant endpoints is maintained throughout the solution and updated according to whether any new root has been found. This vector, as explained in the previous section, consists of real roots and intersections, as well as complex roots from the secant method. Any point generated from the application of the secant method is compared against this vector of starting points. To ensure accuracy, tolerances were determined as a fraction of the order of magnitude of the point in question. This ensures that errors do not result from treating very small points and very large points with the same absolute tolerance; such a relative tolerance was found to keep unwanted points out without eliminating roots that may be closely spaced. Separate checks are required on the real and imaginary components to make certain both components are equal, as the real and imaginary components may not have the same order of magnitude.

The second step is the elimination of unreasonably large points of convergence. The largest root encountered has been on the order of 10^3; therefore, 10^5 was chosen as the value that would eliminate unnecessary numbers but would not allow any potential roots to be discarded. Elimination of large numbers requires a simple check on the real component of the point converged upon.

The third step is to eliminate duplicate convergence points. This is done by finding convergence points whose real components are close to within a small tolerance; a check is then made to ensure that the imaginary components are within that same tolerance.

The fourth step, mainly for increasing efficiency when applying recursions, is comparing convergence points against previous convergence points. A vector is maintained of all points of convergence, which is updated as more points are added. Any new point is compared to this vector and duplicates are discarded.

At this point, a final check is made on all remaining points, making certain that there is a pair of equal upstream and inverse downstream eigenvalues. Such a point is taken as the root A, and the equal eigenvalue is B. The algorithm stops when there are s + t roots.
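The sifting just described can be sketched as follows (Python). The tolerance fraction and the large-value cutoff are illustrative placeholders rather than the values used in this work; the sketch compares real and imaginary parts separately, scales the tolerance to the magnitude of the points being compared, and discards unreasonably large convergence points and duplicates.

```python
def close(a, b, rel=1e-6):
    """Compare real and imaginary parts separately, with a tolerance scaled to
    the magnitude of the values being compared."""
    scale = max(abs(a), abs(b), 1e-12)
    return (abs(a.real - b.real) <= rel * scale and
            abs(a.imag - b.imag) <= rel * scale)

def filter_candidates(candidates, known, large=1e5, rel=1e-6):
    """Discard candidates that coincide with already-known points (endpoints,
    earlier roots), duplicate each other, or have unreasonably large real
    components."""
    accepted = []
    for c in candidates:
        c = complex(c)
        if abs(c.real) > large:
            continue                                   # second step: too large
        if any(close(c, k, rel) for k in known):
            continue                                   # first step: matches an endpoint or known root
        if any(close(c, a, rel) for a in accepted):
            continue                                   # third step: duplicate candidate
        accepted.append(c)
    return accepted
```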

Similar to the characteristic polynomial of Chapter 3, complex roots occur in conjugate pairs. Therefore, it is necessary to know whether a root that has been found is real or complex. Relying on the magnitude of the imaginary component was often found to be undependable in determining whether a root is real or complex, since complex roots have been observed with small imaginary components. Since eigenvalues are by definition associated with a characteristic polynomial, if a root is real, all coefficients of the characteristic polynomial associated with the upstream or downstream matrix at that point are real, and complex eigenvalues (if any) occur in conjugate pairs. If a root is complex, the coefficients of the characteristic polynomial are complex and the eigenvalues do not occur in conjugate pairs. This is a powerful way to separate real and complex roots: if at any root A the eigenvalues are all real or occur in conjugate pairs, the root in question is real; if, however, at a root A the eigenvalues are complex and do not occur in conjugate pairs, that root is complex. For any complex root A, conj(A) is also a root.

Solution to the boundary equations and C's

The same process developed in Chapter 3 is used here. After all the roots of the simultaneous eigenvalue equations are found, back substitution into equations F.6c and F.6d from Appendix F finds D^u and D^d. As every pair of A and B is associated with one column vector D^u and one column vector D^d, in a slight abuse of notation for representation purposes, the matrices populated by these vectors are also called D^u and D^d and are shown in equation 4.12a. The simplest way to find X is X = A/B. The X values are arranged in a row vector that has the same number of columns as there are pairs of A and B. Again, the row vector of all the X's is called X and is represented in 4.12b. Constants C_m, 1 ≤ m ≤ s + t, are introduced, the solution is rewritten in the form of equations 4.13a to 4.13d, and the C_m are found so as to satisfy the boundary equations. The column vector of the C_m is called C in equation 4.12b. The representation in 4.12a and 4.12b is solely for illustration purposes; the vectors represented are not used in vector form, rather their individual components are used.

$$D^u = \begin{bmatrix} U_{1,1} & U_{1,2} & \cdots & U_{1,s+t} \\ U_{2,1} & U_{2,2} & \cdots & U_{2,s+t} \\ \vdots & \vdots & & \vdots \\ U_{s,1} & U_{s,2} & \cdots & U_{s,s+t} \end{bmatrix}; \qquad D^d = \begin{bmatrix} D_{1,1} & D_{1,2} & \cdots & D_{1,s+t} \\ D_{2,1} & D_{2,2} & \cdots & D_{2,s+t} \\ \vdots & \vdots & & \vdots \\ D_{t,1} & D_{t,2} & \cdots & D_{t,s+t} \end{bmatrix} \qquad (4.12a)$$

$$X = \begin{bmatrix} X_1 & \cdots & X_{s+t} \end{bmatrix}; \qquad C = \begin{bmatrix} C_1 \\ \vdots \\ C_{s+t} \end{bmatrix} \qquad (4.12b)$$

The final internal state probabilities have the following familiar form, where the appropriate values of C_m, X_m, U_{i,m} and D_{j,m} are taken from the arrangements of equations 4.12a and 4.12b.

$$p(n, 1, d_j) = \sum_{m=1}^{s+t} C_m X_m^n D_{j,m}; \quad j = 1, \ldots, t, \; 2 \le n \le N-2 \qquad (4.13a)$$

$$p(n, u_i, d_j) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m}; \quad i = 1, \ldots, s, \; j = 1, \ldots, t, \; 2 \le n \le N-2 \qquad (4.13b)$$

$$p(n, 1, 1) = \sum_{m=1}^{s+t} C_m X_m^n; \quad 2 \le n \le N-2 \qquad (4.13c)$$

$$p(n, u_i, 1) = \sum_{m=1}^{s+t} C_m X_m^n U_{i,m}; \quad i = 1, \ldots, s, \; 2 \le n \le N-2 \qquad (4.13d)$$

To find the C_m, a system of linear equations is developed in the C_m and the boundary state probabilities, whose solution finds both simultaneously. The boundary probabilities and the constants C_m are arranged in the following vector; the size of each element is included as a subscript.

$$\begin{bmatrix} [p(0,u_k,1)]_{s\times 1} \\ [p(1,u_k,d_l)]_{st\times 1} \\ p(1,1,1) \\ [p(1,u_k,1)]_{s\times 1} \\ [C_m]_{(s+t)\times 1} \\ [p(N-1,1,d_l)]_{t\times 1} \\ [p(N-1,u_k,d_l)]_{st\times 1} \\ p(N-1,1,1) \\ [p(N,1,d_l)]_{t\times 1} \end{bmatrix}$$

To find the matrix of coefficients, the expressions for p(0, u_k, 1), p(1, u_k, d_l) and p(1,1,1) on the lower boundary are taken from the relevant equations in Table D-1. Expressions for p(N−1, u_k, d_l), p(N−1,1,1) and p(N, 1, d_l) on the upper boundary are taken from the relevant equations in Table D-3. p(1, u_k, 1) and p(N−1, 1, d_l) are expressed in terms of the C_m based on equations in Appendix G. The s + t constants C_m are expressed in terms of both upper boundary and lower boundary probabilities: C_j, j = 1, …, t, are expressed in terms of upper boundary state probabilities, and the remaining C_i, i = t + 1, …, t + s, in terms of lower boundary probabilities. The equations from which these can be found are detailed in Appendix G. In matrix form, this system of linear equations can be written as

$$\begin{bmatrix} [L]_{(s+st+1)\times(2s+st+1)} & [0]_{(s+st+1)\times(s+3t+st+1)} & \\ [0]_{s\times(2s+st+1)} & [X_m U_{i,m}]_{s\times(s+t)} & [0]_{s\times(2t+st+1)} \\ [0]_{(s+t)\times s} & [C]_{(s+t)\times(2st+2s+2t+2)} & [0]_{(s+t)\times t} \\ [0]_{t\times(2s+st+1)} & [X_m^{N-1} D_{j,m}]_{t\times(s+t)} & [0]_{t\times(2t+st+1)} \\ & [0]_{(st+t+1)\times(3s+st+t+1)} & [U]_{(st+t+1)\times(2t+st+1)} \end{bmatrix} \begin{bmatrix} [p(0,u_k,1)]_{s\times 1} \\ [p(1,u_k,d_l)]_{st\times 1} \\ p(1,1,1) \\ [p(1,u_k,1)]_{s\times 1} \\ [C_m]_{(s+t)\times 1} \\ [p(N-1,1,d_l)]_{t\times 1} \\ [p(N-1,u_k,d_l)]_{st\times 1} \\ p(N-1,1,1) \\ [p(N,1,d_l)]_{t\times 1} \end{bmatrix} = \begin{bmatrix} [p(0,u_k,1)]_{s\times 1} \\ [p(1,u_k,d_l)]_{st\times 1} \\ p(1,1,1) \\ [p(1,u_k,1)]_{s\times 1} \\ [C_m]_{(s+t)\times 1} \\ [p(N-1,1,d_l)]_{t\times 1} \\ [p(N-1,u_k,d_l)]_{st\times 1} \\ p(N-1,1,1) \\ [p(N,1,d_l)]_{t\times 1} \end{bmatrix} \qquad (4.14)$$

where L is a matrix of lower boundary equation coefficients, U is a matrix of upper boundary equation coefficients, C is a matrix of internal equation coefficients associated with the constants C_m, 0 is a matrix of zeros, and so on. The size of each matrix or vector in equation 4.14 is given as a subscript. The system in 4.14 is again not linearly independent, as one equation is a linear combination of the others. Replacing one row of the matrix of coefficients with the coefficients of a normalization equation that forces the summation of all steady-state probabilities to unity will also create linear independence. The normalization equation is expressed in terms of the boundary probabilities and the C_m, as shown in equation 4.15. A simple numerical procedure for the solution of the system of linear equations then produces the C_m values and the boundary state probabilities.
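The row-replacement idea can be sketched generically as follows (Python/NumPy). Assembling the actual coefficient blocks L, U and C of equation 4.14 requires the boundary equations of Appendices D and G and is not reproduced here; the sketch assumes a square coefficient matrix M and a row of normalization coefficients are already available, and shows only how one linearly dependent equation of the fixed-point system M x = x is replaced by the normalization condition before solving. In the present problem, x stacks the boundary probabilities and the constants C_m, and the normalization row holds the coefficients of equation 4.15.

```python
import numpy as np

def solve_with_normalization(M, norm_row, norm_value=1.0, drop_row=0):
    """Solve the singular fixed-point system M x = x by replacing one of its
    (linearly dependent) equations with the normalization equation
    norm_row . x = norm_value, then solving the resulting square system."""
    n = M.shape[0]
    A = M - np.eye(n)          # M x = x  is equivalent to  (M - I) x = 0
    b = np.zeros(n)
    A[drop_row, :] = norm_row  # replace one row with the normalization coefficients
    b[drop_row] = norm_value
    return np.linalg.solve(A, b)
```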

The C_m are subsequently used in equations 4.13a to 4.13d, along with the X_m, U_{i,m} and D_{j,m} already found, to obtain the steady-state probabilities of the internal states of the system.

$$\sum_{i=1}^{s} p(0,u_i,1) + \sum_{i=1}^{s}\sum_{j=1}^{t} p(1,u_i,d_j) + p(1,1,1) + \sum_{i=1}^{s} p(1,u_i,1) + \sum_{n=2}^{N-2}\sum_{j=1}^{t}\sum_{m=1}^{s+t} C_m X_m^n D_{j,m} + \sum_{n=2}^{N-2}\sum_{i=1}^{s}\sum_{j=1}^{t}\sum_{m=1}^{s+t} C_m X_m^n U_{i,m} D_{j,m} + \sum_{n=2}^{N-2}\sum_{m=1}^{s+t} C_m X_m^n + \sum_{n=2}^{N-2}\sum_{i=1}^{s}\sum_{m=1}^{s+t} C_m X_m^n U_{i,m} + \sum_{j=1}^{t} p(N-1,1,d_j) + \sum_{i=1}^{s}\sum_{j=1}^{t} p(N-1,u_i,d_j) + p(N-1,1,1) + \sum_{j=1}^{t} p(N,1,d_j) = 1 \qquad (4.15)$$

Performance measures

Performance measures are identical to those in Chapter 3. The individual equations are not repeated here, but results are included in the following sections. With a slight modification of terminology, the production rate is renamed the service rate when dealing with models that represent supply and demand, since service rate is more appropriate when the removal of parts from the buffer is the equivalent of demand satisfaction. This is seen in this chapter in the numerical results of Section 4.4 and in the next chapter.

Complete solution algorithm

The following steps summarize the solution process explained in the preceding sections.

Step 6. Find the real roots of the simultaneous eigenvalue equations 4.7a and 4.7b. This process is summarized in Figure 4-11.

Step 7. Find the complex roots, if any, of the simultaneous eigenvalue equations 4.7a and 4.7b. This process is summarized in Figure 4-12.

Step 8. Assemble the complete set of roots and equal eigenvalues A and B.

Step 9. Find U_{i,m}, D_{j,m} and X_m according to equations F.6c and F.6d and X = A/B.

Step 10. Find the constants C_m and the boundary state probabilities from the system of linear equations 4.14, making use of the normalization equation 4.15.

Step 11. Find the steady-state probabilities from equations 4.13a to 4.13d.

Step 12. Find performance measures as required.

Identical machines

Identical machines have the same number of states with the same transition structure and probabilities between the states. It was found that for identical machines, the simultaneous eigenvalue equations 4.7a and 4.7b have a repeated root at A = B = 1. This must be accounted for when determining whether all the roots of the eigenvalue equations have been found. In addition, having a repeated root renders the system of linear equations 4.14 overdetermined. To find the C_m constants and the boundary state probabilities in this case, either of two approaches is proposed: one of the equations corresponding to the C_m constants can be removed, and the normalization equation is then replaced with one of the equations on the upper or lower boundary side, depending on the position of the omitted C equation. An alternative solution is to replace the repeated root at A = 1 with A = 1 − ε and A = 1 + ε, where ε is a very small number.

4.4 Numerical results

This section demonstrates results from the application of the solution methodology of Section 4.3 to several problems representing multiple-failures Markov chains with general transitions. Solutions are shown for examples from the various interpretations introduced in Section 4.3.2, although not in the same order. This is intended not just to illustrate the accuracy of the solution process for the range of possible structures and probabilities, but also to show how these different systems require, to varying degrees, the steps of the solution process. For each case, the steady-state probabilities are either compared directly with the direct Markov chain solution of the steady-state probabilities, which was introduced in Chapter 3, or compared indirectly through aggregated probabilities in the form of performance measures. As in Chapter 3, Pareto charts are employed to compare the direct and analytical probabilities. The performance measures are discussed where practical manufacturing scenarios are being represented.

General transitions

The example problems here illustrate the application of the solution methodology to the Markov chain system with transitions between all states, as shown in Figure 4-1. The first example is a system where the upstream pole is smaller than the downstream pole and the roots are confined between the two poles. The interaction of the upstream and inverse downstream eigenvalues for such a case was shown in Figure 4-6. The transition probabilities for this system, organized in the matrix structure of equations 4.2a to 4.2d, are shown in Table 4-2. This table is included to show that, in general, when the failure probabilities of the downstream machine P^d are of the same order as or smaller than the failure probabilities of the upstream machine P^u, the result is an upstream pole that is smaller than the downstream pole, as was shown in Figure 4-6.

Table 4-2- Transition probabilities and properties for a system with general transitions (the table lists P^u, Z^u, R^u and Y^u for the upstream machine, P^d, Z^d, R^d and Y^d for the downstream machine, and the system properties: buffer size, s, t, UP, DP and 1/Y^d). s = number of upstream failed states, t = number of downstream failed states, UP = upstream pole, DP = downstream pole.

The real roots of the simultaneous eigenvalue equations for this problem are, as predicted in Figure 4-6, confined between the two poles and are easily found from the process summarized in Figure 4-11. There are two complex-conjugate roots, which are found with a single application of the secant complex roots algorithm summarized in Figure 4-12. These roots are listed in Table 4-3. The steady-state probabilities found from the solution methodology are presented graphically in Figure 4-13 for illustration purposes.
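The comparisons in this section are made against the direct Markov chain solution introduced in Chapter 3, that is, the stationary distribution obtained numerically from the full transition matrix. One minimal way to carry out such a direct check is sketched below (Python/NumPy); it assumes the full row-stochastic transition matrix of the chain of Figure 4-1 has already been assembled, and the small three-state matrix in the example is arbitrary and purely illustrative.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a discrete-time Markov chain with
    row-stochastic transition matrix P, taken as the normalized eigenvector of
    P transposed for the eigenvalue closest to 1."""
    w, v = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))   # eigenvalue closest to 1
    pi = np.abs(np.real(v[:, k]))    # remove sign ambiguity of the eigenvector
    return pi / pi.sum()             # normalize so the probabilities sum to 1

# Illustrative three-state chain (arbitrary values):
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.5, 0.0, 0.5]])
print(stationary_distribution(P))
```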

127 Table 4-3- Roots for the system of Table 4-2 Roots of simultaneous eigenvalue equations Real Complex i i Figure Steady-state probabilities for the system of Table

128 Table 4-4 shows a comparison of performance measures between the analytical method presented here and the direct numerical solution of the Markov chain equations, as well as percentage difference between the two solutions. The small probability of starvation, rather large probability of blockage, and the production rate at about 50% of Table 4-4 indicate the tendency of the upstream machine to fail less frequently than the downstream machine, hinted at by the transition probabilities of Table 4-2. The Pareto chart of Figure 4-14 compares the largest difference between the steady-state probabilities up to 87% cumulative error. The horizontal axis represents the states with the largest errors, the left vertical axis represents the difference between the two solutions and the right vertical axis the cumulative percentage difference. It is seen that the two solutions are compatible to Table 4-4- Analytical and direct comparison of performance measures for system of Table 4-2 Analytical solution Direct Markov chain solution Percent error Average production rate E-11 Average buffer level E-12 Probability of starvation E-12 Probability of blockage E

129 Figure Pareto chart comparison of direct and analytical steady-state probabilities in Table 4-2 For this example, a further verification of results is shown in Table 4-5 where performance measures are compared against a simple discrete event simulation model. Percentage errors between the results from the discrete event simulation model and the analytical solution is also reported as well as the 95% confidence intervals for estimation of simulation results. Despite the fact that the discrete event simulation model is limited in the number of simulation attempts, results are seen to be close and within the 95% confidence bounds. 110

130 Table 4-5- Comparison of results from analytical solution and simple discrete event simulation Average buffer level Probability of starvation Probability of blockage Analytical solution Discrete event simulation Percentage error Confidence interval lower limit Confidence interval upper limit If the failure probabilities of the downstream machine are larger than those of the upstream machine, this was seen as possible to create a system where the downstream pole is smaller than the upstream pole. The real parts of upstream and inverse downstream eigenvalues for such a system were shown in Figure 4-7. The example below highlights the solution for this system. Without listing all the transition probabilities, Table 4-6 lists some of the properties of this system. Although not a strict measure, the averages of the vectors of upstream and downstream failure probabilities, P u and P d respectively, are included to show that in general when the downstream failure probabilities are larger than the upstream failure probabilities, this results in the downstream pole being smaller than the upstream pole, as shown in Table 4-6 with a negative downstream pole. Table 4-6- Some system properties for a general transitions system with a negative downstream pole Average of P u Average of P d Y u Y d System properties Buffer size s t UP DP 1 Y d

131 Table 4-7 lists the real and complex roots of the simultaneous eigenvalue equations for this system. It is seen that despite having a negative downstream pole, all the real (and complex) roots of the simultaneous eigenvalues of this system are positive. The real roots are not confined between the two poles, but are found easily from the process described in Section The only complex root is easily found by a single application of the secant method as explained in Section Figure 4-15 shows the steady-state probabilities found from the analytical method presented. Figure 4-16 is a Pareto chart demonstrating the accuracy of the solution when compared to the direct Markov chain solution of the steady-state probabilities. The two solutions are compatible to the order of Table 4-8 compares performance measures between the two solution methodologies. Since the downstream machine fails much more frequently than the upstream machine, as seen by the averages of the failure probabilities in Table 4-6, it is expected that parts will accumulate in the buffer, creating a buffer that is mostly full, an upstream machine that is mostly blocked, a downstream machine that is almost never starved and an average output level that is comparatively low due to the frequent downstream breakdowns. These are confirmed in Table

132 Table 4-7- Real and complex roots of eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i Figure Steady-state probabilities found solving the system of Table

133 Figure Pareto chart of difference between analytical and directly found steady-state probabilities Table 4-8- Comparison of performance measures found analytically and directly from the Markov chain Analytical solution Direct Markov chain solution Percent error Average production rate E-10 Average buffer level E-10 Probability of starvation E-07 Probability of blockage E

134 To study a problem with identical machines, a system with general transitions is studied where the downstream pole is larger than the upstream pole. Table 4-9 lists some of the system parameters without listing all transition probabilities. As shown in Table 4-10, the roots of the simultaneous eigenvalue equations are all real and confined between the two poles. The solution is therefore quite straightforward and there is no need for the application of the complex roots process. Noteworthy is the double root at 1, which has been seen consistently in all systems with identical machines. Once the roots of the simultaneous eigenvalue equations are found, the procedure of Section is used to find the steady-state probabilities. Table 4-11 shows a comparison of performance measures found from the analytical solution presented and the direct Markov chain solution of probabilities. In addition to confirming the accuracy of the solution, Table 4-11 shows the symmetry of blockage and starvation probabilities as well buffer that is half full in steady-state, as was seen in Chapter 3 for a system with two identical machines and sequential transitions. In fact all steady-state probabilities are symmetrical along half point of buffer size with appropriate replacement for up and down machines, for instance p(3,1, u 2 ) = p(n 3, d 2, 1). This is shown in Figure 4-17 which shows the analytically found steady-state probabilities. Table 4-9- System parameters for a system with identical machines and general transitions System properties Buffer size s t UP DP 1 Y d

135 Table Roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Table Comparison of performance measures Analytical solution Direct Markov chain solution Percent error Average production rate E-11 Average buffer level E-10 Probability of starvation E-10 Probability of blockage E

136 Figure Analytical steady-state probabilities for a system with identical machines and general transitions Approximate Poisson distribution The examples in this section will illustrate the application of the solution methodology developed to example problems with upstream and downstream machines representing approximated Poisson distributions. This is the system shown in Figure 4-2. The transition probabilities were found from a numerical procedure developed to approximate a Poisson distribution from the time-cycles the machine spends in a non-production state. This is achieved by generating the Poisson cumulative distribution function (CDF) and probability distribution functions (PDF) up to 99% of CDF. An initial guess is then made based on these values for the number of non-production states required. The transition probabilities are found by minimizing the errors between the probability of spending a number of time cycles in the non-production states and the equivalent probability from the Poisson distribution in equation

As touched upon earlier, the mean of the Poisson distribution, λ, represents the value one would expect the random variable to take if the experiment were repeated a large number of times. In the example problems here, λ represents the expected number of non-production cycles the machine goes through before returning to the production state in the steady state of system operation, if the machine started in a production state. Therefore, for every time cycle spent in a production state, the Poisson-approximating machine spends λ time-cycles in a non-production state. The rate of being in a production state in steady state is labelled μ and is found from the following equation.

μ = 1 / (1 + λ)

When both the upstream and downstream machines approximate Poisson distributions, μ_upstream is the rate at which parts are put into the buffer and μ_downstream is the rate at which parts are removed from the buffer. If viewed as a supply and demand system, the buffer represents a finished goods inventory, the upstream machine represents a production plant, and the downstream machine represents external demand. At every time-cycle when the upstream machine is operational, a finished product is added to the finished goods inventory. At every time-cycle when the downstream machine is operational, demand occurs and a part is removed from the finished goods inventory. This representation is frequently invoked to explain the steady-state performance measures when dealing with Poisson-approximating machines.

As will be shown in the following examples, it was found that the Poisson-approximating problems represent the extremes of the multiple-failure problem with general transitions treated in this chapter, with transition probabilities that approach the extremes of 0 and 1. Therefore, the roots dealt with here can be both very small and very large compared to the roots

138 of other example problems. To make matters worse, roots are mostly complex with some being very closely spaced with imaginary parts that approach zero. It was in response to these numerical difficulties that the sequential solution methodology of Section was developed. The examples in this section are therefore chosen to reflect the different aspects and steps of the solution. To show the accuracy of the solution, as before a combination of Pareto charts and comparison of performance measures between the analytical and direct Markov chain solutions is employed. Examples are represented by the mean of Poisson distribution λ for upstream and downstream machines Upstream Poisson rate 2, downstream Poisson rate 4.8 This example looks at a system with an upstream machine that represents a Poisson distribution with a mean λ of 2 and a downstream machine with a mean λ of 4.8. The transition probabilities are listed in Table As explained in Section , transition probabilities are set to zero where transition is not possible. This table is intended to draw attention to the large magnitude of the failure probabilities, especially for the downstream machine P d reflected in the small upstream pole and the downstream pole that is almost zero, as shown in Table This is a feature of Poisson approximation as these probabilities are required to generate the time-cycles of non-production. The rest of system properties, including the rates of adding to and removing parts from the buffer are summarized in Table

139 Table Transition probabilities for a system with Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate 4.8 P u Y u Z u R u P d Y d Z d R d Table System properties for Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate 4.8 System properties Buffer size s t UP DP 1 Y d μ upstream μ downstream E

140 Figure 4-18 shows the interaction of upstream and inverse downstream eigenvalues for this system. As seen in this figure, there are multiple negative intersections, none of which turns out to be a root. The downstream pole is very close to zero, with inverse downstream eigenvalues approaching both negative and positive infinity at the pole. The pattern of Figure 4-6 is no longer applicable here. Table 4-14 shows the real and complex roots of the simultaneous eigenvalue equations for this problem. These were found after two applications of the secant method described in Section with a division of the search space by 2. The smallest complex root is on the order of 10 2 and the largest real roots is close to 1 Y d. The Pareto chart of Figure 4-19 verifies the solution methodology with maximum errors between the analytical and direct solutions at 10 10, while Table 4-15 offers insight into how the system operates. Comparing the rates at which parts are added to or removed from the buffer, μ upstream and μ downstream in Table 4-12, it can be seen that parts are added to the buffer almost twice as fast as they are removed from it. This will create a buffer which is mostly full, reflected in the small probability of starvation and large probability of blockage, as well as an average buffer level that is close to capacity. Production rate is renamed to the more appropriate term service rate. In this example, service rate is small because demand happens infrequently. In this example, it is seen that the limiting factor for service rate is the rate of downstream demand occurring μ downstream, since there are usually enough parts in the buffer to satisfy demand, given that the buffer is mostly full. 121

141 Table Roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i i i i i i i i i i i i i Figure Upstream and inverse downstream eigenvalues for a problem with Poisson approximating machines with upstream Poisson rate 2 and downstream Poisson rate

142 Table Comparison of analytical and direct solutions to performance measures Analytical solution Direct Markov chain solution Percent error Average service rate E-09 Average buffer level E-08 Probability of starvation E-04 Probability of blockage E-08 Figure Pareto chart of difference between analytical and direct methods Upstream Poisson rate 4, downstream Poisson rate 6 System properties for this example problem are shown in Table It is seen that both upstream and downstream poles are close to zero, with the downstream pole almost at zero. This arrangement of poles and the interaction of upstream and inverse downstream eigenvalues are shown in Figure This figure is significant in that it demonstrates why it is necessary to 123

143 identify negative real intersections. As mentioned in Section , negative intersections have never been observed to be real roots, but including these negative intersections in the vector of starting points for secant is essential to determine complex negative roots of the simultaneous eigenvalue equations. In this example there are two such negative complex roots, which are listed in Table The larger negative root could only be found after three applications of the secant method dividing the solution space by 4. Table 4-18 compares the steady-state performance measures between the analytical method of this chapter and the direct Markov chain solution. Comparing the rates of upstream and downstream machines, the upstream machine adds parts to the buffer faster than the downstream machine can remove them (or demand occurs), resulting in a small probability of starvation and a buffer that is mostly full on average. At almost 30%, the probability of blockage is smaller than the previous example because of the smaller difference between the rates of adding and removing parts from the buffer. Average service rate reflects the rather small rate of demand occurring and again downstream demand rate is seen as the limiting factor in service rate. The large error percentage between the analytical and direct probabilities of starvation is a result of the fact that both these numbers are very close to zero. In the author s experience, as buffer size grows larger, the Markov chain system space becomes larger and the direct Markov chain solution starts to lose accuracy, especially at smaller probabilities. The solution offered here which is independent of buffer size is more accurate in these cases. This is, in the opinion of the author, seen in the Pareto chart of Figure 4-21 where the maximum error between the two methodologies is still quite small at 10 6 but larger than previously encountered. 124

144 Table System properties for a system with upstream Poisson rate 4, downstream Poisson rate 6 System properties 1 Buffer size s t UP DP μ Y d upstream μ downstream E Table Real and complex roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i i i i i i i i i i i i i i i i i 125

145 Table Comparison of steady-state performance measures Analytical solution Direct Markov chain solution Percent error Average service rate E-07 Average buffer level E-05 Probability of starvation 6.02E E E+02 Probability of blockage E-04 Figure Upstream and inverse downstream eigenvalues for a system with upstream Poisson rate 4, downstream Poisson rate 6 126

146 Figure Pareto chart of difference between the analytical and direct Markov chain solution methods Upstream Poisson rate 2.6, downstream Poisson rate 2.8 This example problem highlights a case where the consecutive applications of the secant method fail to identify one complex root and the minimization process explained in Section is the only option. System properties are shown in Table Upstream and downstream Poisson distribution means are closer compared to previous example problems, therefore the rates of adding or removing parts from the buffer are also closer. The downstream pole is again almost at zero. 127

147 Table System parameters for upstream Poisson rate 2.6, downstream Poisson rate 2.8 System properties Buffer size s t UP DP 1 Y d μ upstream μ downstream E Table Roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i i i i i i i i i i i Table 4-20 shows real and complex roots of the simultaneous eigenvalues for this problem. After three applications of the secant complex roots solution process, the complex root at A = i remains unfound. It was empirically observed that if the function whose roots are being sought is redefined as a function of two variables, the real and imaginary parts, this function appears to have a trench-like shape close to or around a circle centered at zero with a radius equal to the upstream pole. This was observed for all examples with both machines approximating Poisson distributions at several eigenvalue pairs. It was further observed empirically that if a root is located inside or close to this trench-shaped edge, the secant 128

148 algorithm will fail in finding it. One example of this behavior is shown in Figure 4-22 for upstream Poisson rate 2.6 and downstream Poisson rate 2.8. This figure shows a contour plot of the difference between upstream and inverse downstream eigenvalues for an eigenvalue pair (in this example 5 th upstream eigenvalue and 3 rd inverse downstream eigenvalue in the sorted vector of eigenvalues with the sorting scheme that relies on the magnitude of the real component) defined as a function of real and imaginary parts of numbers. The contours are the function values and the horizontal and vertical axes represent real and imaginary parts, respectively. The root at A = i is also plotted as a point in the figure. Plotted in dashed lines is a circle with a radius equal to the upstream pole and centered at zero. It can be seen from the figure that in this example the root lies on a circular-shaped edge the contour of which approximately fits the circle with a radius of UP = centered at zero. In this example this trench-shaped contour was observed for all inverse downstream eigenvalues paired with the 5 th upstream eigenvalue, however for just one of the inverse downstream eigenvalues (3 rd in this case), the function has a root. This suggests that if the real and imaginary components of a potential root are taken as independent variables of a function and if the function is minimized for all eigenvalue pairs along the circular contour, the minimum point will be the root. In other example cases however, roots that were not found by the secant process were seen to lie inside the circle or slightly off the circle. Two examples are shown in Figure 4-23 and Figure Figure 4-23 is from an example problem with upstream Poisson rate 3.8 and downstream Poisson rate 5. It shows a contour plot of the difference between 6 th upstream and 5 th inverse downstream eigenvalue pairs as a function of real and imaginary numbers. This function has a root marked by a point marker on the plot. The contours of the function also follow a circle with 129

149 a radius equal to the upstream pole and a center at zero. Again, all inverse downstream eigenvalues paired with the 6 th upstream eigenvalue exhibit a similar behavior, but it is only the 5 th inverse downstream eigenvalue that leads to a root. The root, as seen in the plot, lies slightly off the circle marked by the upstream pole. Figure 4-24 is from an example problem with upstream Poisson rate 2.6 and downstream Poisson rate 5.2 for second upstream and 5 th inverse downstream eigenvalues. The circle with a radius equal to the upstream pole centered at zero is marked by a dashed line and the root is marked by a point marker. Here, the root is inside the circle, but on the trench-shaped contour. The secant algorithm failed to find this root. To cover all these possible situations a process was devised that first seeks a minimum along or slightly off a circle with a radius equal to the upstream pole centered at zero. This is done for all upstream and inverse downstream eigenvalue pairs, since, as before, there is no apriori information on which pair of eigenvalues leads to a root. The function is nonlinear and nonlinear constraints are imposed on minimization to keep the search space limited to two ellipses that straddle the circle marked by the upstream pole. To initiate the minimization, starting points are selected on the circle marked by the upstream pole. A vector of real components is chosen from divisions of the distance between 0 and upstream pole by 5. Imaginary components are chosen so that the starting points lie on the circle. The function is minimized for all eigenvalue pairs for these starting points and extraneous results discarded. This process finds the root at A = i for the problem highlighted in Figure 4-22, as well as the root for the problem of Figure If this minimization algorithm does not produce the root, this indicates that the root lies inside the circle marked by the upstream pole. Minimization is then carried out limiting the 130
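For illustration, the Python sketch below outlines such a constrained minimization using scipy, whose trust-region constrained method stands in for the interior-point (interior-penalty) algorithm of reference [47]; an annulus constraint is used in place of the two straddling ellipses described above. The function g is a placeholder for the eigenvalue-difference function of a chosen eigenvalue pair, and the annulus width is an arbitrary choice of the sketch.

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def root_near_circle(g, UP, n_starts=5, width=0.1):
    # minimize |g(x + iy)| over a thin annulus straddling the circle |A| = UP
    objective = lambda v: abs(g(v[0] + 1j * v[1]))
    ring = NonlinearConstraint(lambda v: np.hypot(v[0], v[1]),
                               (1.0 - width) * UP, (1.0 + width) * UP)
    best = None
    for x0 in np.linspace(0.0, UP, n_starts + 1)[1:]:
        y0 = np.sqrt(max(UP * UP - x0 * x0, 0.0))       # start on the circle itself
        res = minimize(objective, [x0, y0], constraints=[ring], method='trust-constr')
        if best is None or res.fun < best.fun:
            best = res
    # accept the minimizer as a root only if the residual is essentially zero;
    # otherwise repeat the search with the constraint limited to the circle interior
    return best.x[0] + 1j * best.x[1], best.fun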

150 search space inside the circle. To initiate the minimization, real components of starting points are selected from a vector of divisions of the upstream pole by 5. Imaginary components of starting points are similarly chosen from divisions of the upstream pole by 5. Both minimization procedures use the interior-penalty algorithm for constrained nonlinear minimization [47]. This process of minimization inside the circle marked by upstream pole finds the root for the problem shown in Figure Figure Contour plot of a function of real and imaginary components for 5 th upstream and 3 rd inverse downstream eigenvalues (upstream Poisson rate 2.6, downstream Poisson rate 2.8). 131

151 Figure Contour plot of a function of real and imaginary components for 6 th upstream and 5 th inverse downstream eigenvalues (upstream Poisson rate 3.8, downstream Poisson rate 5). Root is slightly off the circle. Figure Contour plot of a function of real and imaginary components for 2 nd upstream and 5 th inverse downstream eigenvalues (upstream Poisson rate 2.6, downstream Poisson rate 5.2). Root is inside the circle. 132

152 For the example problem of upstream Poisson rate 2.6 and downstream Poisson rate 2.8, Table 4-21 compares the steady-state performance measures found from the analytical solution and the direct Markov chain solution. It is seen that the system s average service rate is limited by the frequency of parts being removed from the buffer, reflected in μ downstream = The probabilities of starvation and blockage are both quite small due to the fact that rates of adding and removing parts from the buffer are close; in other words chances of not having a part available in the buffer or the buffer being full is small. Comparing the average buffer level to buffer size in Table 4-19, it is seen that the larger rate of adding parts to the buffer μ upstream>μdownstream makes for an average buffer level that is almost 75% full. Both Table 4-21 and Figure 4-25 show that the results from the analytical solution and the direct Markov chain solution are close with the Pareto chart putting the maximum difference at about However some accuracy is lost due to what the author believes is inaccurate direct Markov chain solution given the larger buffer size. Table Steady-state performance measures from analytical and direct Markov chain solutions Analytical solution Direct Markov chain solution Percent error Average service rate E-03 Average buffer level E-04 Probability of starvation 1.24E E E-01 Probability of blockage E

153 Figure Pareto chart of the difference between analytical and direct Markov chain solutions for a system with upstream Poisson rate 2.6 and downstream Poisson rate Wait and repair states with Poisson demand This is the system introduced in Section and displayed in Figure 4-3. The upstream machine represents a plant or production line that has a single operational state and a single repair state, as well as a wait state. The downstream machine represents demand for that plant modeled as an approximated Poisson random variable. System properties including upstream machine transition probabilities, as well as its efficiency are listed in Table Machine efficiency e upstream is at almost 0.77 while the rate of demand μ downstream is The roots to the simultaneous eigenvalue equations in this example were found after three applications of the secant algorithm with division of the solution space by 4. There are four real roots and six complex roots, listed in Table

154 The Pareto chart of Figure 4-26 shows the maximum errors between the analytical solution methodology proposed and the direct Markov chain solution for different system states across all buffer levels. As can be seen in this plot, the error is very small, on the order of which confirms the accuracy of the analytical solution. Table Properties for a manufacturing system with a wait/repair upstream machine and downstream Poisson demand System properties Buffer 1 s t UP DP e size Y d upstream μ downstream E state 1 state 2 p u 1 r u 1 p u 2 r u Table Roots of simultaneous eigenvalue equations for a system with wait/repair upstream and Poisson downstream Roots of simultaneous eigenvalue equations Real Complex i i i i i i 135

155 Figure Pareto chart of errors between the analytical and direct Markov chain methods Table 4-24 compares the performance measures for this example system as found through the analytical methodology and the direct Markov chain solution, as well as percentage errors for each measure. Given the isolated efficiency of the plant represented by the upstream machine and the demand rate of the downstream machine, on average the plant produces almost three times the rate of demand. This indicates that in steady-state system operation, the buffer will be full most of the time. This observation is consistent with average buffer level of almost 4.67 and the relatively large probability of blockage due to a full buffer at close to 70%. It also verifies the very small probability of starvation of the demand system because of a lack of inventory. As in several of the previous examples, average service rate is limited by the demand rate. In this case average service rate is almost equal to demand rate at 0.25 parts per time cycle. 136

156 Table Steady-state performance measures Analytical solution Direct Markov chain solution Percent error Average service rate E-12 Average buffer level E-11 Probability of starvation 7.62E E E-06 Probability of blockage E Different operational mean times with Poisson Demand This is a numerical example of the system shown in Figure 4-4. Table 4-25 summarizes the properties of the example Markov chain. The upstream machine has three non-operational states with different mean times to failure and mean times to repair. These are chosen in descending order for the three states to represent three different operational mean times. The downstream machine is modeled as a Poisson demand process with a demand rate of μ downstream = The upstream machine efficiency, as defined in equation 4.16, is found to be e upstream = The five real and four complex roots of the simultaneous eigenvalue equations are shown in Table These were found after three applications of the secant algorithm after dividing the search space by

157 Table Properties of a Markov chain system with an upstream machine with different operational mean times and a Poisson demand downstream Buffer 1 s t UP DP e size Y d upstream μ downstream E Upstream non-production states state 1 state 2 state 3 MTTF MTTR MTTF MTTR MTTF MTTR Table Roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i i i Table 4-27 compares the performance measures found from the methodology described so far and those found using the direct solution of the Markov chain equations. Comparing upstream efficiency and downstream demand, it can be seen that in this example system s steady state, the upstream machine adds a part to the buffer in less than two time-cycles, while demand for a single part occurs every three time-cycles. This leads to a buffer that has parts arriving faster than they are removed. This explains the average buffer level at almost 85% of maximum capacity, but smaller than previous example system which had a larger difference between 138

158 efficiency and demand rate. The probability of blockage at almost 41% is consistent with the above discussion. The probability of starvation is small, but larger than the previous example in line with the smaller difference between efficiency and demand. Average service rate is again limited by downstream demand, not the availability of parts in the buffer. Table Performance measures found from analytical and direct Markov chain solution methods Analytical solution Direct Markov chain solution Percent error Average service rate E-11 Average buffer level E-11 Probability of starvation 2.31E E E-10 Probability of blockage E Mixed products with Poisson Demand This is an example of the system shown in Figure 4-5. The characteristics of this system as determined by the transition probabilities are summarized in Table The probabilities were set such that the system takes more than 6 time-cycles to produce a part while demand occurs almost every three cycles. The real and complex roots of the simultaneous eigenvalue equations are listed in Table The complex roots were found after two applications of the secant algorithm with a division of the space by

159 Table Roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real Complex i i i i Table Properties of system with mixed products upstream and Poisson downstream Buffer size s t UP DP 1 Y d μ downstream E Upstream machine (non-operational state u 1 ) MTTF MTTR Expected time to complete production 6.54 cycles The steady-state results for an example of this case are summarized in Figure 4-27 and Table Figure 4-27 is a Pareto chart of the maximum errors between the analytical solution and the direct Markov chain solution that confirms the accuracy of the solution to the order of Table 4-30 summarizes and compares performance measures between the two solutions. If the expected time to produce is compared with the downstream demand rate, it is seen that demand occurs faster than the upstream machine can produce, leading to a buffer that is mostly empty and a downstream machine that is mostly (more than 40% of the time) starved, as well as 140

160 an upstream machine that is very rarely blocked. It is also seen that the limiting factor in the service rate is no longer the downstream demand rate, but availability of parts in the buffer. Table Performance measures found from analytical and direct Markov chain solution methods Analytical solution Direct Markov chain solution Percent error Average service rate E-12 Average buffer level E-11 Probability of starvation E-11 Probability of blockage 3.27E E E-01 Figure Pareto chart of maximum errors between direct and analytical solutions for machine states at all buffer levels 141

161 4.4.6 Mixed products upstream and downstream The example here is of a system with both machines having the mixed products structure as shown in Figure It is intended to model a system where machines make products that require varying amounts of time to complete. Figure Markov chain model for a system with both machines representing mixed products production Table 4-31 summarizes system properties for an example with an upstream machine with five non-operational states and a downstream machine with six non-operational states. The downstream pole at 0 and the small Y d make for the very small and very large roots of the simultaneous eigenvalue equations shown in Table There are also two negative complex roots. All roots were found after one application of the secant root-finding algorithm without resorting to recursion. 142

162 Table Characteristics of example system with mixed products upstream & downstream Buffer size s t UP DP Y d E Upstream machine (non-operational state u 1 ) Downstream machine (non-operational state d 1 ) MTTF MTTR MTTF MTTR Expected time to complete production 6.54 cycles Expected time to complete production 8.08 cycles Table Real and complex roots of simultaneous eigenvalue equations Roots of simultaneous eigenvalue equations Real 6.16E Complex i e i i e i i i The Pareto chart of Figure 4-29 shows that maximum differences between the analytical solution and the direct Markov chain solution are in the order of 10 9 which shows that some accuracy has been lost due to the probabilities approaching the extremes of 0 and

163 Figure Maximum errors between direct and analytical solutions for machine states at all buffer levels In this example, the upstream machine adds a part to the buffer when it completes production and the downstream machine removes a part from the buffer upon production. The upstream machine has a smaller time to completion and a higher mean time to failure. This smaller time to completion means that parts are added to the buffer faster than the downstream machine can remove them. The higher mean time to failure indicates that the upstream machine is far more productive than the downstream machine, adding parts that the downstream machine cannot remove due to failure. This would indicate that the buffer is mostly full, confirmed from Table 4-33 which shows average buffer level at more than 90% of buffer size. By the same token, the probability of starvation is very small, while the probability of blockage is relatively high. It must be noted however that blockage is not only dependent on the buffer being full, but also on the upstream machine being operational. Since this is no longer a supply/demand system, the term average throughput is used and its rate indicates instances of a part exiting this two- 144

164 machine system which is a function of both a part being available in the buffer and the downstream machine completing its production cycle. The rather low throughput is an indication of the long time it takes for the downstream machine to complete a part, as well as its small mean time to failure. Table Comparison of performance measures Analytical solution Direct Markov chain solution Percent error Average throughput E-09 Average buffer level E-07 Probability of starvation 7.89E E E+01 Probability of blockage E Multiple sequential failures solution from general failures The solution to a two-machine Markov chain model with multiple sequential failures can be generated from the methodology of Section 4.3 by adjusting appropriate transition probabilities in the Markov chain. One example is shown in Table 4-34 where the performance measures of the example system in Section are found using the methodology of this chapter and the results are compared with the results of Table Comparing the results from the two methodologies, it is seen that they are compatible to a high degree of accuracy. 145

165 Table Comparison of performance measures from methodologies of general failures and sequential failures Sequential methodology (Chapter 3) General failures methodology (Chapter 4) Percent error Average throughput E-07 Average buffer level E-05 Probability of starvation E-04 Probability of blockage E Comparison of computer run time between exact and analytical solutions In order to demonstrate how buffer size affects the accuracy and computer run time required for a solution, Figure 4-30 shows the computer run-time required for exact and analytical solution for the example problem of Table 4-2. It is seen in the figure that up to a buffer size of 140, the exact solution run time keeps increasing with buffer size. Beyond this point, the computer used for the analysis was unable to find the exact solution due to memory capacity limitation. The analytical solution on the other hand could easily be used for any buffer size and the computer run-time was almost identical for all buffer sizes. The variance seen in the run time for exact solution is due to the effects of disruptions in the computer programming necessary to generate the data in reasonable time. 146

166 Figure Comparison of computer run times for exact and analytical solutions as a function of buffer sizes Effect of buffer capacity on buffer level and service rate This section will demonstrate for a simple supply and demand system the effects of buffer capacity on steady-state buffer levels and service rates. The system is a simplification of the example in Section with a single non-operational state and no wait state. In other words, the upstream machine is a simple operational/failed system and the downstream machine is a Poisson demand. System properties are shown in Table 4-35 where machine efficiency outperforms demand by a large margin. Figure 4-31 and Figure 4-32 show average service rate and average inventory (buffer) level as functions of maximum buffer capacity. Figure 4-31 reaffirms previously-reported behavior [4] and [15] of other two-machine Markov chain systems where beyond a certain level, increasing buffer capacity does not lead to improvement in production rate. It also reaffirms that 147

167 in this example due to the rather large disparity of supply and demand the limiting factor that determines maximum service rate is the rate of demand. Parts are available in the buffer most of the time, which leads to a reduced role for buffer capacity in the determination of the average service rate. This is seen in Figure 4-31 with the system reaching its maximum average service rate at a small buffer capacity and maintaining that service rate for higher buffer capacities. Figure 4-32 shows average buffer level remains almost constant and almost equal to the maximum buffer capacity. This is attributed again to the almost constant average service rate and the fact that the buffer is mostly full, as seen from the disparity of the upstream efficiency and downstream demand. Table System properties of an up/down machine with Poisson demand and varying buffer capacity System properties s t e upstream μ downstream

168 Figure Average service rate as a function of inventory capacity for a system with a simple up and down machine upstream and Poisson demand downstream Figure Average inventory content as a function of inventory capacity 149

169 4.5 Concluding remarks This chapter presented a solution for the steady-state probabilities and performance measures of a two-machine Markov chain system with multiple failures and general arbitrary transitions between the failed states. The solution was developed as a more suitable alternative to a direct numerical solution of the linear system of the Markov chain steady-state equations. This alternative solution is independent of the size of the state-space and is much faster to implement in recursive algorithms that deal with multiple machines. The solution was developed in terms of a solution to a simultaneous pair of eigenvalue equations with both real and complex roots. Procedures were developed for identification of both in different degrees of complexity. The solution was illustrated through several examples that represent practical manufacturing scenarios including a system that represents supply and demand. Analysis was carried out where appropriate to gain insights into the steady-state performance behavior of the systems modeled. In the next chapter, the supply and demand model developed here is used to implement a control strategy that finds the rate of production to meet demand. The Markov chain structure is taken advantage of to study the transient behavior of the system and show how control action results in improvement of transient behavior. 150

170 Chapter Five: Application of Model Predictive Control to a Markov-Chain Based Stochastic Transient Manufacturing Model This chapter uses the supply demand model of Chapter four with the wait/repair upstream machine and the Poisson approximating downstream machine to demonstrate the development of a control strategy for the rate of production to meet demand. The probabilities of entering and remaining in the wait state are interpreted as control variables adjusting production rate to satisfy demand at lowest cost. The production control system is based on discrete time linear model predictive control applied to a Markov chain manufacturing system. The main reason for this approach is shown as the ability to extract transient information from the Markov chain. The control model is applied in discrete time, therefore continuous time controls based on differential equations cannot be applied to this model. On the other hand, traditional discrete time control applications were also found to be insufficient in dealing with the expectations that are generated from the stochastic Markov chain model. An alternative method was found in model predictive control which is both applicable in discrete time and fits the future expectations of the Markov chain model into control predictions. The following sections introduce a basic model predictive control and explain how concepts adopted from this control approach can be applied to a Markov chain manufacturing system model. Numerical examples are provided to demonstrate results from controlling to predetermined set points. Concepts from control systems and production systems performance are used to quantify and characterize transient properties. Studies are designed to show the effect of control on transient behavior. It is shown that control action results in significant improvement of transient metrics. Further studies are shown to signify how choices manufacturing policies can affect transient behavior. 151

5.1 Basic model predictive control formulation

The control methodology is built on the Markov chain model of the previous chapter. The control outputs from the Markov chain model are expected performance measures as the Markov chain system evolves through time. A control approach was therefore required that is applicable in discrete time and is based on predictions of system behavior, to reflect the expected nature of the performance measures from the Markov model. Due mainly to its ability to anticipate future system behavior in order to take control action, predictive control with constraints [48] was found to be particularly suited to the application intended here: controlling a Markov chain manufacturing system as it evolves toward steady state while allowing the study of transient behavior. This section briefly introduces the basics of model predictive control.

Model predictive control is an advanced control concept that originated in the chemical process industries. It relies on a state-space model of the system and generates controls based on the history of the system and a cost function that is optimized over a length of time in the future known as the prediction horizon. The basis of model predictive control is the idea of the receding horizon [49]. In discrete time, this idea can be explained as follows. At any point in time, denoted k, the control system output is denoted y(k). The control system specifies a set point trajectory s(k) that the output should ideally follow. When the current output has deviated from the set point, a reference trajectory, r, is defined which starts at the current output and specifies the path along which future outputs should return to the set point trajectory. The general assumption, also adopted in this work, is that the reference trajectory approaches the set point exponentially over the prediction horizon [48]. The concept is shown in Figure 5-1, where the variation of the control input is also shown. The error at any time step is denoted ε(k), as shown in equation 5.1.

The error i steps later, ε(k + i), is shown in equation 5.2 and Figure 5-1 to decrease exponentially with the ratio of the sampling interval T_s to the exponential time constant T_ref. The reference trajectory is then referred to as r(k + i | k), since its values at the next i time steps depend on the value at the current time k. This is shown mathematically in equation 5.3.

ε(k) = s(k) − y(k)     5.1

ε(k + i) = e^(−i T_s / T_ref) ε(k)     5.2

r(k + i | k) = s(k + i) − ε(k + i) = s(k + i) − e^(−i T_s / T_ref) ε(k)     5.3

Figure 5-1- A simple illustration of the model predictive control concept [48]
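As a small numerical illustration of equations 5.1 to 5.3, the Python sketch below computes the exponential reference trajectory for a hypothetical constant set point; the set point, sampling interval and time constant values are arbitrary.

import numpy as np

def reference_trajectory(s, y_k, k, Hp, Ts, Tref):
    # equation 5.1: current error; equation 5.3: exponentially decaying reference
    eps_k = s[k] - y_k
    i = np.arange(1, Hp + 1)
    return s[k + i] - np.exp(-i * Ts / Tref) * eps_k

s = np.full(50, 5.0)                       # hypothetical constant set point trajectory
r = reference_trajectory(s, y_k=3.2, k=0, Hp=10, Ts=1.0, Tref=4.0)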

As the name implies, a model predictive controller uses an internal model to predict the behavior of the system over the prediction horizon. Since system behavior depends on the inputs over the horizon, the predictive control model attempts to select inputs that promise the best overall system behavior, as will be defined shortly. These inputs are denoted û(k + i | k), i = 0, 1, …, H_u, where H_u is the control horizon. Here and throughout the rest of this development, the caret symbol (^) is used to indicate predictions and the conditional symbol (|) is used to describe a prediction variable whose value depends on the last known value. It indicates that at time k there are only indications of what the inputs will be, and the actual values of these inputs may differ from the predicted values. In the simple case, the input trajectory is chosen to bring the system output at the end of the prediction horizon, k + H_p, to the required value, which was defined in equation 5.3 to be r(k + H_p | k). The common strategy is to assume that the control input varies over a smaller number of time steps than the prediction horizon and then remains constant afterward. This smaller number of time steps is referred to as the control horizon and is denoted H_u, as shown in Figure 5-1. This means that the control system determines û(k + i | k), i = 0, 1, …, H_u, and the remainder of the control inputs remain constant:

û(k | k), û(k + 1 | k), …, û(k + H_u − 1 | k);  û(k + H_u − 1 | k) = û(k + H_u | k) = ⋯ = û(k + H_p | k)     5.4

The best system behavior is defined as one that minimizes the error between the model-predicted outputs and the reference trajectory. This leads to an optimization problem that seeks a control input trajectory minimizing this future predicted error. In this implementation, once the control input trajectory over the control horizon is found, only the control input for the first time step of the prediction horizon, û(k | k), is applied to the system.

The entire cycle of measuring the output, redefining the reference, and finding optimal control input trajectories is then repeated one time step later, at k + 1.

The basic formulation of predictive control used here assumes a discrete-time linear model in terms of the state-space equations 5.5a to 5.5c, where x is the state vector, u is the input vector, y is the vector of measured outputs, and z is the vector of controlled outputs.

x(k + 1) = A x(k) + B u(k)     5.5a

y(k) = C_y x(k)     5.5b

z(k) = C_z x(k)     5.5c

This basic formulation also assumes that the cost function is quadratic, that linear constraints exist on the controls u and the outputs z, and that the system is time-invariant.

5.2 Applying model predictive control to a Markov chain-based manufacturing model

In this section the manufacturing system is developed as a Markov chain to which model predictive control can be applied. Using the Markov chain model, state-space and cost function equations are developed as required for the control methodology. The expected production rate is used as a control variable, and its modeling as a transition probability in the Markov chain is explained.

The Markov chain model

Figure 5-2 is a reproduction of the wait/repair state with Poisson-approximated demand model whose steady-state behavior was studied in Chapter 4, with a slight difference in notation and a major difference in concept. The notational difference is in the representation of the wait and repair states, as well as the probabilities of entering these states. The conceptual difference is

175 that this system is not static anymore; the probabilities of entering and remaining in the wait state are controlled to change at every time-cycle in order to affect the rate of production. As before, the model consists of a machine-buffer-machine system that is interpreted as the upstream machine representing supply, the downstream machine representing demand, and the buffer representing a limited-size finished goods inventory. The upstream machine has two non-production states: one is defined as a repair state, here denoted with R, that is entered randomly due to production supply breakdown, the other is a wait state, denoted as W, that is entered due to control action, and a single production state, denoted with 1. Therefore, the upstream machine is a manufacturing plant that is subject to random failures into the repair state and controlled non-production cycles in the wait state. The probabilities of entering and leaving the repair state, p u and r u, are related to the rates of failure and repair of the machine. The wait state is part of the overall control strategy and represents a state where production is deliberately halted. The probabilities of entering and remaining in the wait state, u 1 and u 2, respectively, are the system control variables. By controlling these probabilities, the rate of production is controlled. 156

Figure 5-2- Markov chain model of a system with a wait-and-repair machine upstream and Poisson demand downstream

In practice, the wait state can be implemented by generating a computer random number when the system is in a production state and comparing this random number to the probability of entering the wait state determined by the controller. If the random number is less than or equal to the control probability of entering the wait state, production is paused and the system enters the wait state; otherwise, the system continues operating. At each time step that the system is in the wait state, another random number is generated and compared to the control probability of remaining in the wait state. If the random number is less than or equal to the control probability of remaining in the wait state, the system remains in the wait mode; otherwise, production is resumed. The control objective is to adjust these control variables over time to achieve and maintain targets for the outputs (inventory or service rates, as shown below) while minimizing the costs associated with deviation from the targets and the cost of production.
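Purely as an illustration of the random-number mechanism just described, the Python sketch below advances the upstream machine by one time step. The function name and the way the failure draw is interleaved with the wait-state draw are assumptions of the sketch; the thesis does not prescribe that ordering.

import random

def upstream_step(state, u1, u2, p_u, r_u):
    # state is 'production', 'wait' or 'repair'; u1, u2 are the controller's
    # probabilities of entering and of remaining in the wait state
    x = random.random()
    if state == 'production':
        if x <= u1:
            return 'wait'                 # production deliberately paused
        # assumed here: the usual failure draw applies when production continues
        return 'repair' if random.random() <= p_u else 'production'
    if state == 'wait':
        return 'wait' if x <= u2 else 'production'
    # repair state: the machine is repaired with probability r_u
    return 'production' if random.random() <= r_u else 'repair'

# one controlled time step from the production state, with illustrative numbers
print(upstream_step('production', u1=0.1, u2=0.6, p_u=0.05, r_u=0.2))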

177 The downstream machine has the same structure as was developed in the previous chapter. The production state represents the occurrence of demand and the non-production states represent time-cycles when demand does not occur. Transition probabilities are determined to approximate the number of cycles between two occurrences of demand as a Poisson random variable. The buffer represents a finished goods inventory. If the upstream machine is operational, it adds a part to the buffer, representing a finished product being added to the finished goods inventory. If the downstream machine is operational in this cycle, it takes a part from the buffer, representing demand occurring. If the downstream machine is in any of the sequential nonoperational states, there is no demand and the finished product accumulates in the buffer. If the upstream machine is in either the wait or repair state, no part is added to the buffer, i.e., no product is added to the inventory. The same assumptions from Chapter 3 and Chapter 4 are applied here. The Markov chain model is discrete time and discrete state and machines are synchronous. Assumptions for analytical tractability including operation-dependent failures and the timing of changes in machine states and buffer are identical to those made earlier and not repeated here. A minor difference is what was termed in previous chapters as starvation, which was defined as the buffer being empty and the downstream machine operational, is now referred to as demand loss, since it represents demand occurring and not being met due to an empty finished goods inventory Expected number of visits to Markov chain states This section details the Markov chain transition equations for the supply-inventorydemand model represented in Figure 5-2. The Markov chain transition equations are very similar to the transition equations from Chapter 4. The only difference is in notation and in the grouping 158

of downstream failed states. The transition equations are listed in Appendix H not only to show the notational difference, but, more importantly, to develop the state space of the control system.

The transition matrix T is the basis of the state-space model developed here. It consists of the coefficients of the transition equations in Tables H-2, H-3 and H-4 in Appendix H. As in Chapter 3 and Chapter 4, the notation adopted in this thesis has the transition matrix elements representing the probability of transition from a column state to a row state. Therefore, if the state probabilities at any time step k are arranged in a column vector P_k as represented in equation 5.6a, then equation 5.6b summarizes the one-step state transition equations. It can also easily be shown that, with this notation, the total number of visits to the Markov chain states after any number of time steps is the summation of the powers of the transition matrix T up to that time step. This is the foundation of the state-space equations developed in the following sections.

P_k = [p(0, W, 1)  p(0, R, 1)  p(1, 1, 1)  p(1, W, 1)  p(1, R, 1)  p(1, 1, d_j)  p(1, W, d_j)  p(1, R, d_j)  ⋯  p(N, 1, 1)  p(N, W, 1)  p(N, R, 1)  p(N, 1, d_j)  p(N, W, d_j)  p(N, R, d_j)]^T     5.6a

P_(k+1) = T P_k     5.6b

In order to develop the state space of the control system, it is noted that the total number of visits to any of the states of the Markov chain of Figure 5-2 after k time steps can be represented in matrix form. This matrix is denoted X̄(k). Similar to the organization of the transition matrix itself, each element of X̄(k) represents the number of visits to the state corresponding to its row, given that the chain started in the state corresponding to its column. Therefore every column of X̄(k) is the total number of visits to all the states in 5.6a, given that the chain started in the state associated with that column. X̄(k) is the summation of the powers up to k of the Markov chain's transition matrix T, as summarized in equations 5.7. The expected number of visits is denoted X(k) and is found by averaging X̄(k).

X̄(1) = T     5.7a

X̄(2) = T + T^2     5.7b

X̄(k) = T + T^2 + ⋯ + T^k     5.7c

It can be seen that the total number of visits to the states of the Markov chain can be built up iteratively. Given this fact, equations 5.7a to 5.7c may be rewritten in terms of the expected number of visits to give equations 5.8a to 5.8d.

X(1) = T     5.8a

X(2) = (1/2)[X(1) + T^2]     5.8b

X(3) = (1/2)[X(2) + T^3]     5.8c

X(k) = (1/2)[X(k − 1) + T^k]     5.8d

Basic model predictive control formulation

The iterative equations 5.8 serve as a foundation for the following development of a control system that retains enough information about system transients and allows production decisions to be made on that basis.
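Purely for illustration, the recursion of equation 5.8d can be computed directly from a transition matrix as in the short Python sketch below; T can be any column-stochastic matrix in the column-to-row convention adopted in this thesis.

import numpy as np

def expected_visits(T, k):
    # X(1) = T; X(j) = 0.5 * (X(j - 1) + T^j) for j = 2, ..., k  (equation 5.8d)
    X = T.copy()
    T_power = T.copy()
    for _ in range(2, k + 1):
        T_power = T_power @ T
        X = 0.5 * (X + T_power)
    return X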

The state-space of the production control model is similar to equations 5.5a to 5.5c, but is based on the expected number of visits, as shown in equations 5.9a and 5.9b, with bold-faced upper-case letters denoting matrices and bold-faced lower-case letters denoting vectors.

X(k + 1) = A X(k) + B U(k)     5.9a

z(k) = C_z X(k)     5.9b

X is the matrix of the expected number of visits to the states of the Markov chain of system states. U is a matrix representing the control variables; it is found by decomposing the transition matrix and arranging the control variables u_1 and u_2 of Figure 5-2 as shown below. z is a vector of outputs to be controlled. It is a row vector that can represent either expected buffer (inventory) levels or expected service rates, depending on the values of the elements of the matrix C_z. In both interpretations, each column of z corresponds to the state of the Markov chain in which the chain started. The expected service rate, as defined in the previous chapter in the context of a supply and demand system, is the probability of demand occurring and being fulfilled. It is found by setting some elements of C_z equal to 1 and the remaining elements equal to zero; the elements equal to 1 correspond to Markov chain states with an operational downstream machine and a buffer that is not empty, p(n, ·, 1), n > 0. The expected buffer level is found by setting the elements of C_z equal to the buffer level of the Markov chain state they represent: 0 for p(0, ·, ·), 1 for p(1, ·, ·), and so on.
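As a small illustration of these two choices of C_z, the Python sketch below builds the selector rows from a hypothetical miniature list of states, each represented simply as a pair (buffer level, downstream machine operational); this pair is a simplification of the full state description p(n, ·, ·).

import numpy as np

def service_rate_selector(states):
    # 1 for states with an operational downstream machine and a non-empty buffer
    return np.array([1.0 if (n > 0 and down_up) else 0.0 for n, down_up in states])

def buffer_level_selector(states):
    # each element equals the buffer level of the state it represents
    return np.array([float(n) for n, _ in states])

# hypothetical miniature state list: (buffer level, downstream operational?)
states = [(0, True), (0, False), (1, True), (1, False), (2, True), (2, False)]
Cz_service = service_rate_selector(states)   # [0, 0, 1, 0, 1, 0]
Cz_level = buffer_level_selector(states)     # [0, 0, 1, 1, 2, 2]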

For any time after k-1 the system states are unknown, and the predicted states are denoted \hat{X}(k|k-1), ..., \hat{X}(k+H_p-1|k-1). It is important to note that the control variables u_1 and u_2 are embedded in the transition matrix T; therefore, as u_1 and u_2 change over future time steps according to the prediction formulation, the transition matrix changes as well. As a result, at any time step k the last known transition matrix is T(k-1), and future predicted transition matrices are denoted \hat{T}(k|k-1). This changing transition matrix also requires that the history of these changes up to time step k-1 be accounted for in the predictive state formulation. The T^k term in equation 5.8d is therefore rewritten as the product of the first future predicted transition matrix with all of the known previous transition matrices, \hat{T}(k|k-1) T(k-1) \cdots T(1); the sequence of multiplications runs backwards in time to be compatible with the column-state-to-row-state convention of the transition matrix.

\hat{X}(k|k-1) = \tfrac{1}{2}[ X(k-1) + \hat{T}(k|k-1)\, T(k-1) \cdots T(1) ]    (5.10a)
\hat{X}(k+1|k-1) = \tfrac{1}{2}[ \hat{X}(k|k-1) + \hat{T}^2(k|k-1)\, T(k-1) \cdots T(1) ]    (5.10b)
\hat{X}(k+H_p-1|k-1) = \tfrac{1}{2}[ \hat{X}(k+H_p-2|k-1) + \hat{T}^{H_p}(k|k-1)\, T(k-1) \cdots T(1) ]    (5.10c)

Sequential substitution of the left-hand sides of equations 5.10 gives equations 5.11a to 5.11c.

\hat{X}(k|k-1) = \tfrac{1}{2} X(k-1) + \tfrac{1}{2} \hat{T}(k|k-1) \prod_{j=k-1}^{1} T(j)    (5.11a)
\hat{X}(k+1|k-1) = \tfrac{1}{4} X(k-1) + [ \tfrac{1}{4} \hat{T}(k|k-1) + \tfrac{1}{2} \hat{T}^2(k|k-1) ] \prod_{j=k-1}^{1} T(j)    (5.11b)
\hat{X}(k+H_p-1|k-1) = \tfrac{1}{2^{H_p}} X(k-1) + [ \tfrac{1}{2^{H_p}} \hat{T}(k|k-1) + \tfrac{1}{2^{H_p-1}} \hat{T}^2(k|k-1) + \cdots + \tfrac{1}{2} \hat{T}^{H_p}(k|k-1) ] \prod_{j=k-1}^{1} T(j)    (5.11c)

The goal of the predictive model at any time step k is to find the best control input u_1(k), u_2(k) by minimizing the anticipated future costs associated with applying that input. The state-space formulation of basic model predictive control includes the control variables explicitly. Comparably, the structure of the transition matrix T developed from the transition equations is such that the control probabilities u_1 and u_2 represent transitions to and from the same state and are therefore never multiplied together in the transition equations. This means that, with some straightforward if tedious manipulation, the Markov chain transition matrix T can be decomposed into matrices containing constants and control-variable coefficients, as in equation 5.12.

T = B_0 + B_1 U    (5.12)

B_0 is a matrix representing the constant component of the transition matrix, and B_1 is a matrix containing the coefficients of the control variables u_1 and u_2. U is a diagonal matrix in which u_1 and u_2 occupy some of the diagonal elements, as dictated by the transition equations; the off-diagonal elements are zero.
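As an illustration of the decomposition in equation 5.12, the Python sketch below assembles T from B_0, B_1 and a diagonal U for a small stand-in structure in which two columns depend linearly on u_1 and u_2; the matrices shown are illustrative only and are not the transition matrix of Figure 5-2.

```python
import numpy as np

def assemble_T(B0, B1, u):
    """T = B0 + B1 @ diag(u), the decomposition of equation 5.12."""
    return B0 + B1 @ np.diag(u)

# Stand-in structure: u[1] and u[2] redirect probability mass within
# columns 1 and 2; B0 carries the constant transitions.
B0 = np.array([[0.8, 0.0, 0.0],
               [0.2, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
B1 = np.array([[0.0,  1.0,  0.0],
               [0.0, -1.0,  1.0],
               [0.0,  0.0, -1.0]])

u = np.array([0.0, 0.3, 0.9])           # u[0] unused; u1 = 0.3, u2 = 0.9
T = assemble_T(B0, B1, u)
assert np.allclose(T.sum(axis=0), 1.0)  # columns remain probability vectors
```

Because u_1 and u_2 enter T linearly and never multiply each other, the decomposition is exact and the column sums remain equal to one for any admissible choice of the control probabilities.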

Based on this decomposition, and noting that the powers of the transition matrix in equation 5.11 represent future predicted states associated with predicted control inputs, these powers are written in terms of B_0, B_1 and the predicted control variables as:

\hat{T}(k|k-1) = B_0 + B_1 \hat{U}(k|k-1)    (5.13a)
\hat{T}^2(k|k-1) = [ B_0 + B_1 \hat{U}(k+1|k-1) ][ B_0 + B_1 \hat{U}(k|k-1) ]    (5.13b)
\hat{T}^{H_p}(k|k-1) = [ B_0 + B_1 \hat{U}(k+H_p-1|k-1) ] \cdots [ B_0 + B_1 \hat{U}(k|k-1) ]    (5.13c)

\hat{U}(k|k-1), \hat{U}(k+1|k-1), ..., \hat{U}(k+H_p-1|k-1) contain the predicted control variables \hat{u}_1(k|k-1), \hat{u}_2(k|k-1); ...; \hat{u}_1(k+H_p-1|k-1), \hat{u}_2(k+H_p-1|k-1) arranged in matrix form. As indicated previously, the basic predictive control formulation allows the control horizon H_u to differ from the prediction horizon H_p. To keep the equations simple, the two are assumed equal for the moment; once the equations are fully developed, it will be shown that letting them differ is straightforward. Instead of determining the control variables directly, the standard predictive control approach determines the changes to be made in the control variables relative to the last known values. These changes are denoted \Delta\hat{U}(k-1+i|k-1), i = 1, ..., H_p.

Incorporating this change and linearizing the equations that contain products of control matrices, the final state-space equations are obtained in matrix-vector form; the derivation is detailed in Appendix I. These predictive state equations are summarized in equation 5.14.

[ \hat{X}(k|k-1); \hat{X}(k+1|k-1); \vdots; \hat{X}(k+H_p-1|k-1) ] = [ \tfrac{1}{2} X(k-1); \tfrac{1}{4} X(k-1); \vdots; \tfrac{1}{2^{H_p}} X(k-1) ] + BM_1 \prod_{j=k-1}^{1} T(j) + BM_2 [ \Delta\hat{U}(k|k-1) \prod_{j=k-1}^{1} T(j); \Delta\hat{U}(k+1|k-1) \prod_{j=k-1}^{1} T(j); \vdots; \Delta\hat{U}(k+H_p-1|k-1) \prod_{j=k-1}^{1} T(j) ]    (5.14a)

where

BM_1 = [ \tfrac{1}{2} B; ( \tfrac{1}{4} B + \tfrac{1}{2} B^2 ); \vdots; ( \tfrac{1}{2^{H_p}} B + \tfrac{1}{2^{H_p-1}} B^2 + \cdots + \tfrac{1}{2} B^{H_p} ) ]    (5.14b)

BM_2 is a large matrix of coefficients of \Delta\hat{U}(k|k-1), ..., \Delta\hat{U}(k+H_p-1|k-1), and B = B_0 + B_1 U(k-1) = T(k-1) is the transition matrix evaluated with the last known matrix of control variables at time step k.

By retaining the elements of the matrices containing the \Delta\hat{U} only up to H_u \le H_p and setting the remaining elements equal to zero, a control horizon smaller than the prediction horizon can be implemented.

Output equations

Representing the control system state in terms of the total number of visits to the Markov chain states allows the control system output to be defined as either the expected buffer (inventory) level or the expected service rate, using an appropriate transformation matrix C_z. Iterating equation 5.9b over the length of the prediction horizon, the predicted output is formulated as equation 5.15.

Z(k) = [ \hat{z}(k|k-1); \hat{z}(k+1|k-1); \vdots; \hat{z}(k+H_p-1|k-1) ] = C_{zz} [ \hat{X}(k|k-1); \hat{X}(k+1|k-1); \vdots; \hat{X}(k+H_p-1|k-1) ]    (5.15a)
C_{zz} = \mathrm{blockdiag}( C_z, C_z, \ldots, C_z )    (5.15b)

Substituting the matrix-vector formulation of the predicted states from equation 5.14a, the output equations are obtained in the matrix-vector form of equation 5.16.

Z(k) = C_{zz} I X(k-1) + C_{zz} BM_1 \prod_{j=k-1}^{1} T(j) + C_{zz} BM_2 [ \Delta\hat{U}(k|k-1) \prod_{j=k-1}^{1} T(j); \Delta\hat{U}(k+1|k-1) \prod_{j=k-1}^{1} T(j); \vdots; \Delta\hat{U}(k+H_p-1|k-1) \prod_{j=k-1}^{1} T(j) ]    (5.16)

where I represents an appropriately sized identity matrix. The variables are renamed for simplification in equations 5.17.

Z(k) = \Psi X(k-1) + \Upsilon_0 + \Theta \Delta\tilde{U}(k)    (5.17a)
\Psi = C_{zz} I    (5.17b)
\Upsilon_0 = C_{zz} BM_1 \prod_{j=k-1}^{1} T(j)    (5.17c)
\Theta = C_{zz} BM_2    (5.17d)
\Delta U(k) = [ \Delta\hat{U}(k|k-1); \Delta\hat{U}(k+1|k-1); \vdots; \Delta\hat{U}(k+H_p-1|k-1) ]    (5.17e)
\Delta\tilde{U}(k) = [ \Delta\hat{U}(k|k-1) \prod_{j=k-1}^{1} T(j); \Delta\hat{U}(k+1|k-1) \prod_{j=k-1}^{1} T(j); \vdots; \Delta\hat{U}(k+H_p-1|k-1) \prod_{j=k-1}^{1} T(j) ]    (5.17f)

The free response of the system is defined as the output when no change is made to the inputs over the prediction horizon. The tracking error E(k) is the error between this free response and the reference trajectory defined earlier; the two are related in equations 5.18, where the reference trajectory vector is written \mathcal{T}(k) to distinguish it from the transition matrix.

E(k) = \mathcal{T}(k) - \Psi X(k-1) - \Upsilon_0    (5.18a)
\mathcal{T}(k) = [ r(k|k-1); r(k+1|k-1); \vdots; r(k+H_p-1|k-1) ]    (5.18b)

Cost function

The cost function developed in this implementation of predictive control is quadratic in form and penalizes the deviation of the predicted outputs \hat{z}(k-1+i|k-1), i = 1, ..., H_p, from the reference trajectory r(k-1+i|k-1), i = 1, ..., H_p, over the entire prediction horizon.

This may be interpreted as penalizing overproduction if the output is the buffer level, or unmet demand if the output is the service rate. The cost function also includes a term for the cost of the control changes \Delta\hat{U}(k-1+i|k-1), i = 1, ..., H_u, made across the control horizon. Formulating the costs as quadratic functions permits weighting the elements of the output or of the control changes differently, through the weighting matrices Q(i) and R(i) for the output and the controls, respectively. The cost function is therefore written as equation 5.19.

V(k) = \sum_{i=H_w}^{H_p} \| \hat{z}(k-1+i|k-1) - r(k-1+i|k-1) \|^2_{Q(i)} + \sum_{i=1}^{H_u} \| \Delta\hat{U}(k-1+i|k-1) \|^2_{R(i)}    (5.19)

where \|x\|^2_Q = x^T Q x denotes the quadratic form. Deviations of the output from the reference trajectory can be penalized not from the beginning, but only from a time H_w onward, called the penalty horizon, as represented by the summation range H_w \le i \le H_p in equation 5.19. It is also possible to set some values in the weighting matrix Q(i) to zero in order to control which deviations are penalized. In matrix-vector form, the cost function can be written as:

V(k) = \| Z(k) - \mathcal{T}(k) \|^2_Q + \| \Delta U(k) \|^2_R    (5.20)

Expanding and simplifying, the final cost function is formulated in equation 5.21.

V(k) = \Delta\tilde{U}(k)^T \Theta^T Q \Theta \Delta\tilde{U}(k) - \Delta\tilde{U}(k)^T \Theta^T Q E(k) - E(k)^T Q \Theta \Delta\tilde{U}(k) + E(k)^T Q E(k) + \Delta U(k)^T R \Delta U(k)    (5.21)

V(k) is a matrix in which each column represents a cost function vector for the case where the Markov chain started in the state associated with that column. A scalar value of V(k) is required as the objective function for numerical optimization algorithms.

It was found empirically that the Euclidean norm of the cost function vector is the most appropriate choice for assigning a cost value to the columns of this matrix.

Constraints

The only constraints present in the model are linear constraints on the control variables u_1 and u_2, which, being probabilities, must lie between 0 and 1. Since u_1 is the probability of a transition out of the production state, from which transitions are possible to both the wait state and the repair state, the constraint must also account for the sum of these probabilities. Also, as the predictive control formulation has been written in terms of changes in the control variables, the constraints must likewise be expressed on the changes in the control variables. The constraints on the vector of predicted controls take the form of equations 5.22, which show the upper-limit constraints. Note that these constraints are applied directly to the vector of predicted control changes \Delta u(k), which is separate from the matrix \Delta U(k); this is a consequence of the computer implementation, which finds a vector of predicted control variables and populates the matrices and vectors automatically, as explained in the next section.

F \Delta u(k) \le F_1 u(k-1) + f    (5.22a)
\Delta u(k) = [ \Delta\hat{u}_1(k|k-1); \Delta\hat{u}_2(k|k-1); \vdots; \Delta\hat{u}_1(k+H_u-1|k-1); \Delta\hat{u}_2(k+H_u-1|k-1) ]    (5.22b)
u(k-1) = [ u_1(k-1); u_2(k-1) ]    (5.22c)
F = [ 1, 0; 0, 1; \vdots ], \quad F_1 = -F, \quad f = [ 1 - p_u; 1; \vdots ]    (5.22d)

The matrices F and F_1 and the vector f are constant and are found by expanding the equations for the control variables; the vector of last known control variables, u(k-1), changes at every time step. A similar procedure yields the lower limits on the changes in the control variables. Unlike the basic model predictive control formulation, the implementation here does not constrain the state or output variables.

5.3 Solution process

The objective of the solution process is to find the control variables u_1 and u_2 that minimize the cost function over the prediction horizon. The control variables are not found directly; what results from the minimization is the vector of changes in the control variables over the control horizon, \Delta u(k). From this vector, the only elements applied immediately are those associated with the next time step, [ \Delta\hat{u}_1(k|k-1); \Delta\hat{u}_2(k|k-1) ]; the remaining elements are discarded. The cost function as formulated in equation 5.21 involves the predictive vector-matrix products \Delta\tilde{U}(k) and \Delta U(k), which are different from \Delta u(k); this is a matter of the computer implementation, which populates the matrices \Delta\tilde{U}(k) and \Delta U(k) using values from \Delta u(k). Once determined, the optimal changes in the control variables [ \Delta\hat{u}_1(k|k-1); \Delta\hat{u}_2(k|k-1) ] are renamed [ \Delta u_1(k); \Delta u_2(k) ], since they are no longer predicted control variables but actual changes.

Applying these changes to the model at time k yields [ u_1(k); u_2(k) ], and the transition matrix T is updated using these control variables. The control system state is then updated from equation 5.8d and renamed X(k), since it is no longer a prediction, and the output is found from equation 5.15a and renamed z(k) for the same reason. At this point the system state is a matrix in which each column represents the expected number of visits to the Markov chain states given that the system started in the state associated with that column. The output, depending on the choice of C_z coefficients, is a row vector in which each element represents either the expected buffer (inventory) level or the service rate given that the system started in the state associated with that element. This representation of the variables is kept uniform throughout the control process; whenever an output is required, the relevant values are extracted from these vectors under appropriate assumptions about the starting state of the Markov chain, as illustrated with numerical examples in the following section. At the next time step, a new vector of optimal control variables over the (shifted) prediction horizon is calculated by minimizing the cost function, and the cycle of updating the transition matrix, state and output is repeated. Figure 5-3 shows a simplified flowchart of the computations at each time step.
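The per-time-step cycle just described (and summarized in Figure 5-3 below) can be sketched as a receding-horizon loop in Python. The function predict_outputs is a placeholder standing in for equations 5.14 to 5.16, and the quadratic weights and bound handling are simplified assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(X_prev, u_prev, reference, Hu, p_u, predict_outputs):
    """One receding-horizon step: optimize the control changes over the
    control horizon, then return the updated controls (first move only)."""
    n_vars = 2 * Hu                                   # stacked (du1, du2) per step

    def cost(du):
        Z = predict_outputs(X_prev, u_prev, du)       # stand-in for eq. 5.14-5.16
        err = Z - reference                           # tracking error over the horizon
        return float(err @ err + 0.01 * du @ du)      # quadratic cost, cf. eq. 5.20

    # Bounds so that u1 + p_u <= 1 and 0 <= u1, u2 <= 1 after the change.
    bounds = []
    for _ in range(Hu):
        bounds.append((-u_prev[0], 1.0 - p_u - u_prev[0]))   # change in u1
        bounds.append((-u_prev[1], 1.0 - u_prev[1]))         # change in u2
    res = minimize(cost, np.zeros(n_vars), bounds=bounds)

    du1, du2 = res.x[0], res.x[1]                     # apply only the first move
    return np.array([u_prev[0] + du1, u_prev[1] + du2])
```

At each time step the returned controls would be used to rebuild T from equation 5.12, update X(k) from equation 5.8d, and shift the horizon forward, mirroring the flowchart of Figure 5-3.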

Figure 5-3- Simplified flowchart of the predictive control calculation process

5.4 Numerical results

This section demonstrates results from applying predictive control to the Markov chain of the production-inventory-demand model. Through examples, it is first shown how predictive control can be used to achieve target levels of inventory and service rate. Then, taking advantage of the Markov chain structure, it is shown that control action leads to major improvements in the system's transient behavior compared with an uncontrolled Markov chain system. Finally, transient behavior is explored in more detail to offer practical guidance on the manufacturing choices that shape the transients.

5.4.1 Control to set points

Using the control model developed here, a production system can be controlled to predetermined performance targets for (expected) buffer (inventory) level and service rate. The main rationale for controlling inventory level is to maintain a level of safety stock that mitigates the effects of stock-outs, which is standard practice in many make-to-stock manufacturing systems. Throughout this section, the expected buffer (inventory) level is designated n. The service rate characterizes the rate of demand satisfaction and is conceptually similar to the service level in supply chain and inventory management systems; it is defined here as the total probability over time that demand occurs and there is inventory available to fulfill it, and is designated ς throughout these examples. Both the expected inventory level and the service rate are found from the matrix of states representing the expected number of visits to the Markov chain states by setting appropriate values in the transformation matrix C_z, as detailed earlier in this chapter. The demand rate, as defined in the previous chapter in the context of Poisson approximating machines and obtained from the Poisson rate, is designated μ to be consistent with the rest of this work; since only the downstream machine represents a Poisson distribution, the λ_downstream and μ_downstream of the previous chapter are now simply called λ and μ.

Table 5-1 summarizes the input parameters for example problems of controlling the expected inventory level and the service rate to predetermined set points. Both examples use the same upstream supply system and the same buffer size; the downstream demand differs in order to obtain reasonable rates from the system. When controlling the inventory level, the objective is to control the rate of entering and remaining in the wait state so as to reach and maintain a defined inventory level of 7. This is akin to adjusting production rates to maintain a fixed inventory level in the face of demand, or to setting a virtual maximum buffer size.

The repair and failure probabilities r_u, p_u characterize the stochastic nature of the machine, and the prediction and control horizons H_p, H_u summarize the characteristics of the predictive controller. The two horizons are assumed equal, so control variables are found throughout the prediction horizon, and H_w = 1 indicates that output deviations are penalized from the very first prediction step. System control is initiated from the Markov chain state representing an empty buffer, with the upstream machine in the wait state and the downstream machine in a state where demand occurs; in the notation of this thesis, this is state (0, W, 1). The initial control variables are set to zero, i.e., the system would immediately exit the wait state without control intervention.

Table 5-1- Input values and parameters for the buffer level and service rate control problems

Controlled output (expected values)                         Buffer content (n)                          Service rate (ς)
Markov model properties   Failure probability (p_u)
                          Repair probability (r_u)
                          Poisson parameter (λ)
                          Demand rate (μ)
                          Buffer size (N)                   10                                          10
Predictive control        Prediction horizon (H_p)          4                                           4
parameters                Control horizon (H_u)             4                                           4
                          Penalty horizon (H_w)             1                                           1
Set points and initial    Control set point                 7                                           0.65
control conditions        Initial state                     Empty buffer, wait state, demand (0, W, 1)  Empty buffer, wait state, demand (0, W, 1)
                          Initial control values (u_0)      [0 0]                                       [0 0]
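As a concrete illustration of how the C_z transformation extracts the two outputs from the expected-visits matrix, the Python sketch below builds both versions of C_z for a small stand-in list of states; the state tuples, their ordering and the visits matrix are assumptions for illustration, not the state ordering of equation 5.6a.

```python
import numpy as np

# Stand-in states: (buffer level, upstream state, downstream state),
# where downstream state 1 means "demand occurs" and 'd' means "no demand".
states = [(0, 'W', 1), (0, 'R', 1), (1, 'P', 1), (1, 'W', 1), (2, 'P', 1), (2, 'P', 'd')]

# Service-rate output: 1 for states where demand occurs and the buffer is not empty.
Cz_service = np.array([1.0 if (n > 0 and d == 1) else 0.0 for (n, up, d) in states])

# Buffer-level output: weight each state by its buffer level.
Cz_buffer = np.array([float(n) for (n, up, d) in states])

# Stand-in expected-visits matrix X: each column corresponds to a starting state.
X = np.full((len(states), len(states)), 1.0 / len(states))

service_rate_by_start = Cz_service @ X    # row vector z of equation 5.9b
buffer_level_by_start = Cz_buffer @ X
print(service_rate_by_start)
print(buffer_level_by_start)
```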

Figure 5-4 shows the change in the expected inventory level as the buffer accumulates parts toward its set point of 70% of the buffer size; the change in the control variables over this period is shown in Figure 5-5. In this example, after approaching the set point the controller sets the probability of remaining in the wait state, u_2, to approximately 0.9, indicating that a large amount of time is spent in the wait state. At the same time, the probability of leaving the production state and entering the wait state, u_1, is approximately 0.3, meaning the system mostly remains in the production state once producing. This can be explained by the relatively low demand rate, which allows the production system to wait for longer periods once the target buffer content has been achieved.

Figure 5-4- Controlling the system of Table 5-1 to an inventory level of 7

Figure 5-5- Development of the control variables when controlling expected inventory

Figure 5-6 and Figure 5-7 show the change in the expected service rate and in the control variables over time when the system of Table 5-1 is controlled to a service rate of 0.65. Because the definition of service rate includes both the occurrence of demand and the existence of parts in inventory, the achievable service rates depend on the demand rate of the Poisson model; the relatively high service rate set point of Table 5-1 is consistent with the relatively large demand rate of this system. As seen in Figure 5-6, the system achieves the target service rate in fewer than 10 time steps. As is evident from the high u_2 probability in Figure 5-7, once the system enters the wait state the controller keeps it there for a long time, since current inventory levels are sufficient to satisfy demand. Conversely, the small u_1 probability indicates that once the system enters the production state it spends a long time there, replenishing the inventory.

Figure 5-6- Controlling the service rate to 0.65

Figure 5-7- Development of the control variables when controlling the service rate

5.4.2 Distribution of outputs based on starting states

Output levels and their progression toward the set point depend on the starting state of the Markov chain model. Both examples of Table 5-1 started with an empty buffer, in a wait state, with demand occurring. If the system starts in a different state, somewhat different behaviour results from the control application, as shown in Figure 5-8, where buffer progress toward the set point is shown for three different starting states: a state with an empty buffer, in a wait state, with demand occurring, (0, W, 1), identical to Figure 5-4; a state with the buffer half full, in a wait state, with demand occurring, (5, W, 1); and a state with a full buffer and no demand, (10, 1, d_1). The starting buffer levels reflect these starting states, and the extent of control is shown by the fact that these two starting states take longer to reach steady state.

Figure 5-8- Buffer level progress toward the set point with three different starting states

Figure 5-9 shows the distribution of buffer levels across all starting states at time step 15 of Figure 5-8. For starting states with larger buffer levels, the buffer levels are mostly concentrated toward the maximum buffer capacity, confirming the information in Figure 5-8 and indicating that control is most effective when the starting buffer level is small.

Figure 5-9- Distribution of buffer levels based on starting states at time step t = 15. Target buffer level is 7.

Similar plots are shown for the problem of controlling the service rate to the settings of Table 5-1. Figure 5-10 shows service rates for three starting buffer levels as the output progresses toward its set point, and Figure 5-11 shows the distribution of service rates across all starting states. At larger starting buffer levels the output is larger than the target level, again indicating that control is most effective when the starting buffer level is small.

Figure 5-10- Service rate progress toward the set point at three different starting states

Figure 5-11- Distribution of service rates based on starting states at time step t = 15. Target service rate is 0.65.

5.4.3 Transient properties

The need to study the transient properties of manufacturing systems, both controlled and uncontrolled, has been pointed out by several authors, including [2] and [41]. It is clear from Figure 5-4 to Figure 5-7 that the proposed control model is particularly apt for studying transient properties. Time-related transient characteristics can be extracted from the development of the output over time, as in Figure 5-6, while performance-related transient characteristics can be extracted directly from the Markov chain transition matrix as it evolves through time. These are explained more clearly through the examples below.

Borrowing from the control systems and manufacturing systems performance literature, three metrics are introduced for characterizing transient behavior, and it is demonstrated that control action improves all three compared with equivalent uncontrolled systems. First, similar to the analysis in [41] of the transient behavior of manufacturing systems modeled as Bernoulli machines, the settling time of the output is introduced as a means of determining how long it takes the output to reach and remain in the vicinity of its set point; it is shown that when controlling inventory level or service rate, control results in marked improvements in settling time. Second, the concepts of demand loss and average demand loss during transience are defined to quantify the total and expected losses incurred when customer demand is present and there is no inventory to deliver; it is shown that control action can result in a smaller loss of customer demand than in an equivalent uncontrolled system. Finally, to characterize deviation from the set point when controlling the service rate, the concept of service error is introduced, and it is demonstrated that control action leads to improved system performance.

The effects of manufacturing system properties, including buffer size, demand rate and efficiency, on these metrics are also explored in order to demonstrate how transient behavior can be shaped by setting these properties.

Settling time

Borrowing from the control systems literature [50], settling time is the time it takes for the output to reach and remain within a tolerance (usually ±5%) of its steady-state value, given a unit step input and zero initial conditions. In the manufacturing system application here, a zero initial condition may be represented by an empty buffer. A step input is applied in the form of a constant set point, but the unity assumption is relaxed so that a variety of set-point conditions can be explored. A tighter tolerance band of ±1% is also used, which provides a more discernible time frame over the range of output levels explored.

The effectiveness of control in improving settling time is explored in Figure 5-12, with system and control settings summarized in Table 5-2. The upstream machine is more efficient here than in the previous example, as indicated by the larger repair probability. When controlling the expected buffer level, the set point is set to 30% of the maximum buffer size, and the system starts with an empty buffer, the upstream machine in the repair state and the downstream machine in the demand state, (0, R, 1).

An inherent limitation when trying to fairly compare controlled and uncontrolled systems is that, as shown in Figure 5-5 and Figure 5-7, the controlled system's wait state transition probabilities u_1, u_2 change at every time step, whereas the uncontrolled system's probabilities remain unchanged. To find equivalence between the two systems, it was determined that the repair state probabilities should remain identical between the controlled and uncontrolled systems, but that the uncontrolled wait state probabilities should be chosen such that the uncontrolled steady-state output level equals the controlled set point. In other words, the transient behavior of the uncontrolled system is that of a static Markov transition matrix whose steady state coincides with the control set point.

To achieve this, an optimization scheme was devised that minimizes the error between the uncontrolled steady-state output and the controlled set point as a function of the wait state probabilities; the starting point for the minimization is the average of the control variables u_1, u_2 over the transient period, and the uncontrolled steady-state output is found with the solution method of Chapter 4.

Figure 5-12 shows that, when controlling expected inventory levels over a wide range of demand rates, control action significantly reduces settling time at approximately the same level of inventory that the uncontrolled system achieves at steady state. The experiment was repeated at several other inventory set points, as well as for a start state of (0, W, 1), with similarly significant improvements in settling time compared with equivalent uncontrolled systems.

Table 5-2- Input values and system parameters for the comparison of controlled and uncontrolled settling times when controlling buffer level and service rate

Controlled output (expected values)                         Buffer content (n)      Service rate (ς)
Markov model properties   Failure probability (p_u)
                          Repair probability (r_u)
                          Poisson parameter (λ)             varies                  varies
                          Demand rate (μ)                   varies                  varies
                          Buffer size (N)
Predictive control        Prediction horizon (H_p)          4                       4
parameters                Control horizon (H_u)             4                       4
                          Penalty horizon (H_w)             1                       1
Set points and initial    Control set point                 30% of buffer size      90% of maximum achievable service rate
control conditions        Initial state                     (0, R, 1)               (0, R, 1)
                          Initial control values (u_0)      [0 0]                   [0 0]
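For reference, the settling-time computation used in these comparisons can be sketched as follows; the ±1% band matches the definition above, while the function name and the example trajectory are assumptions of this sketch.

```python
import numpy as np

def settling_time(output, target, band=0.01):
    """First time step after which the output stays within +/- band
    (as a fraction of the target) of the target for good."""
    tol = band * abs(target)
    outside = np.abs(np.asarray(output, dtype=float) - target) > tol
    if not outside.any():
        return 0
    return int(np.nonzero(outside)[0][-1]) + 1

# Example: a buffer trajectory approaching a set point of 7.
trajectory = [0.0, 2.1, 4.0, 5.6, 6.5, 6.9, 6.95, 6.97, 7.01, 6.99, 7.00]
print(settling_time(trajectory, target=7.0))   # 6
```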

Figure 5-12- Comparison of controlled and uncontrolled settling times when controlling buffer levels

A similar experiment was conducted to compare settling times when controlling the expected service rate. As shown repeatedly in the context of Poisson approximating downstream machines, the achievable service rates change with the demand rate, since by definition the service rate includes the occurrence of demand; a constant service rate set point is therefore not reasonable when the demand rate varies. As a result, the service rate set point of the controlled system was selected as a percentage of the maximum achievable service rate at each demand rate, as seen in Table 5-2. If the upstream machine were 100% efficient, i.e. if it never failed, the maximum achievable service rate would equal the demand rate; as this is not the case here, the maximum achievable service rate can be found by increasing the set point until the resulting service rate shows no further increase.

As before, at each demand rate the wait state probabilities of the uncontrolled system were determined to yield a steady-state service rate equal to the controlled set point, using the same minimization process and the steady-state solution methods of Chapter 4.

As can be seen in Figure 5-13, the settling time of the uncontrolled system is much larger than that of the controlled system when controlling the service rate, although not by the magnitude seen when controlling the inventory level (Figure 5-12). This is mainly attributable to the process of finding equivalent uncontrolled systems: the behavior of the control variables differs when controlling buffer level and service rate, which affects the optimal values of the equivalent uncontrolled systems. It is also noteworthy that the settling time of the controlled service rate remains almost constant as the demand rate increases; this is mostly the result of control action. In general, settling times were found to be smaller when controlling service rate than when controlling buffer level. This can be explained by the balance between the upstream efficiency and the downstream demand: the upstream supply system of Table 5-2 is quite efficient, at least initially before control action occurs, which makes it relatively easy for the upstream supply to satisfy demand and achieve the 90% set point for service rate, whereas filling an empty buffer requires a certain number of time steps to transpire.

Figure 5-13- Comparison of controlled and uncontrolled settling times when controlling service rates

The effect of buffer size on the settling time of the buffer is evident: larger buffers take longer to fill, resulting in longer settling times, and larger set points were also seen to increase settling time. Both effects are explored in Figure 5-14, where settling times are plotted against buffer size at several buffer set-point-to-size ratios; system settings are summarized in Table 5-3, and the control settings for the prediction and control horizons are identical to the previous examples. At any given buffer size, larger set points lead to longer settling times, and if the buffer set point is kept at a given ratio of the buffer size, a larger buffer leads to a longer settling time. Results were identical for a start state of (0, R, 1). It therefore appears that keeping a small set-point-to-size ratio is more effective at controlling transient settling times than reducing the buffer size.

Table 5-3- System and control settings for exploring the effect of buffer size on settling time

Controlled output (expected values)                         Buffer content (n)
Markov model properties   Failure probability (p_u)         0.08
                          Repair probability (r_u)          0.9
                          Poisson parameter (λ)             1
                          Demand rate (μ)                   0.5
                          Buffer size (N)                   varies
Set points and initial    Control set point                 varies
control conditions        Initial state                     (0, W, 1)
                          Initial control values (u_0)      [0 0]

Figure 5-14- Effects of buffer size and set point on settling time

The upstream machine's isolated efficiency was defined in Chapter 4 as the throughput of the upstream machine in isolation, without interaction with the buffer or the downstream demand system.

In the system defined here, as the control variables u_1, u_2 change through time, so does the upstream machine's efficiency. Therefore, to explore the effects of efficiency and of the upstream probabilities of failure and repair, the initial efficiency is defined as the efficiency before the application of control, given mathematically in equation 5.23.

e_{initial} = \left[ 1 + \frac{p_u}{r_u} + \frac{u_1(0)}{1 - u_2(0)} \right]^{-1}    (5.23)

Table 5-4 summarizes the system and control settings used when controlling buffer level at different efficiencies to study the effect on settling time. To analyze the influence of the failure and repair probabilities, two sets of studies were developed. The first looks at several different failure probabilities p_u and, at each failure probability, varies the repair probability r_u to change the efficiency level. The results are shown in Figure 5-15, where the horizontal axis is the upstream machine efficiency obtained by keeping the failure probability constant and varying the repair probability, the vertical axis is the settling time in time steps, and the different failure probabilities are marked by different colors and plot markers. At small failure probabilities, e.g. p_u = 0.15 or p_u = 0.25, increasing the upstream efficiency by increasing the repair probability has little effect on settling time, whereas at larger failure probabilities it leads to significant improvements. In general, larger failure probabilities also lead to longer settling times, which is intuitive in the sense that a system that fails more often takes longer to bring the buffer to its set point. This study used the same set of repair probabilities throughout, so the efficiency levels generated at different failure probabilities are not uniform.
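A minimal helper for equation 5.23, written under the assumption that the initial efficiency takes the closed form 1 / (1 + p_u/r_u + u_1(0)/(1 - u_2(0))); the function name and example values are illustrative.

```python
def initial_efficiency(p_u, r_u, u1_0, u2_0):
    """Isolated efficiency of the upstream machine before control is applied:
    the fraction of time spent producing when the failure, repair and
    wait-state dwell probabilities are held at their initial values."""
    return 1.0 / (1.0 + p_u / r_u + u1_0 / (1.0 - u2_0))

# Example with the machine settings of Table 5-3 and zero initial controls.
print(initial_efficiency(0.08, 0.9, 0.0, 0.0))   # about 0.92
```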

Table 5-4- System and control settings for the study of the effect of efficiency on buffer settling times

Controlled output (expected values)                         Buffer content (n)
Markov model properties   Failure probability (p_u)         varies
                          Repair probability (r_u)          varies
                          Poisson parameter (λ)             3
                          Demand rate (μ)                   0.25
                          Buffer size (N)                   6
Predictive control        Prediction horizon (H_p)          4
parameters                Control horizon (H_u)             4
                          Penalty horizon (H_w)             1
Set points and initial    Control set point                 2.56
control conditions        Initial state                     (0, W, 1)
                          Initial control values (u_0)

The second study explores the effect of efficiency on buffer settling time at different repair probabilities r_u by varying the failure probability p_u, with system and control settings again taken from Table 5-4. The effect of changes in efficiency when varying the failure probability is shown in Figure 5-16. At larger efficiencies, from e_initial = 0.65 upward, increasing the repair probability or decreasing the failure probability has a negligible effect on settling time. However, at all repair probability levels, increasing efficiency by decreasing the probability of failure leads to significant improvements in buffer settling time. For obvious reasons, this behavior also depends on the demand rate, as shown in Figure 5-12, but the effect of demand rate on settling time is not as pronounced as the effect of efficiency. It therefore appears that decreasing failures is an effective tool, perhaps more effective than increasing repair rates, for controlling transient settling time.

Figure 5-15- Controlled buffer settling time against upstream efficiency at several failure probabilities

Figure 5-16- Buffer settling time against upstream initial efficiency at several repair probabilities

Demand loss

Another important metric that can be extracted from the Markov chain control application to characterize the transient behavior of manufacturing systems is the sales opportunity lost while the system is in a transient state. In the production-inventory-demand model this is called demand loss, defined as the total probability, over the settling time, of demand occurring and not being met due to a lack of inventory. Demand loss is denoted Δ and defined in equation 5.24a. Note that demand loss as defined here is a summation of probabilities over time and is different from the steady-state probability of starvation studied in Chapters 3 and 4. It is conceptually, but not mathematically, similar to the production losses proposed in [41] for the analysis of Bernoulli machines. Demand loss is thus intrinsically, by definition, connected to settling time; in an effort to decouple the two, the average demand loss is defined as the demand loss divided by the settling time. It is denoted Δ̄ and defined in equation 5.24b, and represents the average number of instances per time step in which demand occurs and is not met due to an inventory shortage.

Δ = \sum_{k \in \text{settling time}} \mathrm{prob}[\text{demand occurs at } k \text{ and buffer level} = 0]    (5.24a)
Δ̄ = Δ / (\text{settling time})    (5.24b)

A comparison of controlled and uncontrolled demand losses shows that control action results in almost universally smaller demand loss, and mostly smaller average demand loss, for the controlled system across a range of demand rates, when controlling both inventory level and service rate at different set points and start states. An example is shown for the system summarized in Table 5-2.

Figure 5-17 shows controlled and uncontrolled demand loss across a range of demand rates when controlling the inventory level, with demand loss on the vertical axis and demand rate on the horizontal axis. To find equivalence between the controlled and uncontrolled systems, the uncontrolled wait state probabilities u_1, u_2 were optimized to give a steady-state output level as close as possible to the set point of the controlled system, with the steady-state values found using the solution methodologies of Chapter 4. As seen in Figure 5-17, the controlled system's demand loss remains small but increases slightly as the demand rate increases. This increase is the result of the rising demand rate, which inevitably creates more instances of unsatisfied demand; control action, however, mitigates this effect by adjusting the rate of production. Uncontrolled demand loss is considerably larger at all demand rates but decreases as the demand rate increases. This decrease is due to the values of the control variables obtained when establishing equivalence between the controlled and uncontrolled systems; if the u_1, u_2 probabilities were kept constant for the uncontrolled system at every demand rate, the uncontrolled demand loss would increase with increasing demand rate.

Figure 5-18 shows the average demand loss for the same two systems. The controlled average demand loss remains very small and essentially constant at all demand rates; this is due to the effect of control, which keeps both the settling time and the demand loss small and almost constant, as displayed in Figure 5-12 and Figure 5-17. The uncontrolled average demand loss shows some fluctuation, due mostly to the effect of the uncontrolled settling time illustrated in Figure 5-12. At large demand rates, the uncontrolled average demand loss becomes almost equal to the controlled average demand loss because of the large uncontrolled settling time.
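A minimal sketch of how demand loss and average demand loss (equations 5.24a and 5.24b) could be tallied from a sequence of state probability vectors; the state list, the indicator for demand occurring with an empty buffer, and the per-step probabilities are stand-ins for illustration.

```python
import numpy as np

def demand_loss(prob_history, lost_demand_mask, settling_step):
    """prob_history: (time, n_states) array of state probabilities;
    lost_demand_mask: boolean vector marking states in which demand occurs
    while the buffer is empty. Returns (demand loss, average demand loss)."""
    transient = np.asarray(prob_history)[:settling_step]
    loss = float(transient[:, lost_demand_mask].sum())    # eq. 5.24a
    return loss, loss / settling_step                     # eq. 5.24b

# Stand-in: 3 states, state 0 = (empty buffer, demand occurring).
mask = np.array([True, False, False])
history = np.array([[0.50, 0.30, 0.20],
                    [0.20, 0.50, 0.30],
                    [0.05, 0.55, 0.40],
                    [0.01, 0.55, 0.44]])
print(demand_loss(history, mask, settling_step=3))   # (0.75, 0.25)
```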

Figure 5-17- Comparison of demand loss across a range of demand rates between a controlled system and equivalent uncontrolled systems

Figure 5-18- Average demand loss at different demand rates, compared between a controlled system and equivalent uncontrolled systems

Figure 5-19 shows a similar comparison of controlled and uncontrolled demand loss when controlling the service rate to the settings of Table 5-2. Demand loss is significantly smaller in the controlled system than in an uncontrolled system with similar output levels. Viewed independently, the controlled system's demand loss shows a slight increase with increasing demand rate, which is the result of the control system countering the effect of increased demand with changes in the production rate. Figure 5-20 shows the average demand loss when controlling the service rate, together with that of the equivalent uncontrolled system. The controlled average demand loss appears larger here, but this is mainly due to the difference in magnitude between the demand loss and the settling time.

Figure 5-19- Comparison of controlled and uncontrolled demand loss at several demand rates when controlling service rate

Figure 5-20- Controlled and uncontrolled average demand loss when controlling service rate

Figure 5-21- Effect of the ratio of buffer set point to buffer size on demand loss when controlling buffer level

The effects of buffer set point and size on demand loss are studied in Figure 5-21, with system and control settings identical to Table 5-3. At most set-point-to-size ratios, increasing the buffer size does not significantly improve demand loss; only when the set point is much smaller than the buffer size, i.e. at a ratio of 0.2, does increasing the buffer size lead to a sizeable improvement. At very small set-point-to-size ratios, the effect of the ratio on demand loss is opposite to its effect on settling time. This is again intuitive, as the controller adjusts to maintain a low inventory, losing demand in the process; setting a very small set point relative to the buffer size is therefore not recommended. Setting the buffer set point at about half the buffer size appears to be a good compromise between demand loss and settling time for the system of Table 5-3.

Cumulative service rate error

Cumulative service rate error characterizes the sum, during transience, of the service rate's deviation from its set point or its steady-state value, and is conceptually similar to measuring overshoot in a traditional control system. A distinction must be made in its calculation between controlling inventory levels and controlling service rate. When controlling inventory levels, only service rates smaller than the steady-state service rate are undesirable, as they represent a degradation of the service rate associated with carrying that level of inventory; when controlling service rate, however, any deviation from the service rate set point is undesirable. Therefore, when controlling inventory level, the cumulative service rate error is defined during transience as the total difference between the steady-state service rate and the service rate whenever it falls below the steady-state level, and when controlling service rate it is defined as the total absolute difference between the service rate set point and the actual service rate.

217 uncontrolled system, with the difference that for both uncontrolled buffer and service rate, steady state values are derived from the Markov chain solution of Chapter 4. Table 5-5 compares the cumulative service rate error for controlled and uncontrolled systems when controlling buffer with the system settings of Table 5-2. As before, equivalence was achieved between controlled and uncontrolled systems by adjusting uncontrolled wait state probabilities to give a steady-state buffer level equal to the controlled system set point. For the controlled system, service rate does not fall below its steady value at most demand rates, resulting in no cumulative service rate error, marked by zeros in Table 5-5. It is clear from this table that control action leads to significant improvement in cumulative service rate error. Table 5-5- Comparison of controlled and equivalent uncontrolled cumulative service rate error when controlling buffer Demand Rate Controlled Service Rate Error Uncontrolled Service Rate Error

Figure 5-22 shows the cumulative service rate error for a controlled system and its equivalent uncontrolled system when controlling the service rate at several demand rates. Recall from Figure 5-13 that the uncontrolled system's settling time is an order of magnitude larger than the controlled system's; because of this large disparity, the cumulative service rate error of the uncontrolled system is inflated over its settling time. To decouple the effect of settling time, the uncontrolled cumulative service rate error was also observed over the same period as the controlled settling time, and this is also shown in Figure 5-22. Not only is the controlled cumulative service rate error smaller than the uncontrolled error when each is measured over its own settling time, but the controlled error also remains mostly the smaller of the two when both are measured over the shorter controlled settling time period.

Figure 5-22- Controlled and uncontrolled cumulative service rate errors when controlling service rate


More information

ACHIEVING OPTIMAL DESIGN OF THE PRODUCTION LINE WITH OBTAINABLE RESOURCE CAPACITY. Miao-Sheng CHEN. Chun-Hsiung LAN

ACHIEVING OPTIMAL DESIGN OF THE PRODUCTION LINE WITH OBTAINABLE RESOURCE CAPACITY. Miao-Sheng CHEN. Chun-Hsiung LAN Yugoslav Journal of Operations Research 12 (2002), Number 2, 203-214 ACHIEVING OPTIMAL DESIGN OF THE PRODUCTION LINE WITH OBTAINABLE RESOURCE CAPACITY Miao-Sheng CHEN Graduate Institute of Management Nanhua

More information

WIDE AREA CONTROL THROUGH AGGREGATION OF POWER SYSTEMS

WIDE AREA CONTROL THROUGH AGGREGATION OF POWER SYSTEMS WIDE AREA CONTROL THROUGH AGGREGATION OF POWER SYSTEMS Arash Vahidnia B.Sc, M.Sc in Electrical Engineering A Thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy

More information

A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks

A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks A Queueing System with Queue Length Dependent Service Times, with Applications to Cell Discarding in ATM Networks by Doo Il Choi, Charles Knessl and Charles Tier University of Illinois at Chicago 85 South

More information

Electronic Companion Fluid Models for Overloaded Multi-Class Many-Server Queueing Systems with FCFS Routing

Electronic Companion Fluid Models for Overloaded Multi-Class Many-Server Queueing Systems with FCFS Routing Submitted to Management Science manuscript MS-251-27 Electronic Companion Fluid Models for Overloaded Multi-Class Many-Server Queueing Systems with FCFS Routing Rishi Talreja, Ward Whitt Department of

More information

CHAPTER 3 STOCHASTIC MODEL OF A GENERAL FEED BACK QUEUE NETWORK

CHAPTER 3 STOCHASTIC MODEL OF A GENERAL FEED BACK QUEUE NETWORK CHAPTER 3 STOCHASTIC MODEL OF A GENERAL FEED BACK QUEUE NETWORK 3. INTRODUCTION: Considerable work has been turned out by Mathematicians and operation researchers in the development of stochastic and simulation

More information

A PARAMETRIC DECOMPOSITION BASED APPROACH FOR MULTI-CLASS CLOSED QUEUING NETWORKS WITH SYNCHRONIZATION STATIONS

A PARAMETRIC DECOMPOSITION BASED APPROACH FOR MULTI-CLASS CLOSED QUEUING NETWORKS WITH SYNCHRONIZATION STATIONS A PARAMETRIC DECOMPOSITION BASED APPROACH FOR MULTI-CLASS CLOSED QUEUING NETWORKS WITH SYNCHRONIZATION STATIONS Kumar Satyam and Ananth Krishnamurthy Department of Decision Sciences and Engineering Systems,

More information

Development and Application of a New Modeling Technique for Production Control Schemes in Manufacturing Systems

Development and Application of a New Modeling Technique for Production Control Schemes in Manufacturing Systems Brigham Young University BYU ScholarsArchive All Theses and Dissertations 2005-05-12 Development and Application of a New Modeling Technique for Production Control Schemes in Manufacturing Systems Bashar

More information

An Introduction to Stochastic Modeling

An Introduction to Stochastic Modeling F An Introduction to Stochastic Modeling Fourth Edition Mark A. Pinsky Department of Mathematics Northwestern University Evanston, Illinois Samuel Karlin Department of Mathematics Stanford University Stanford,

More information

Contents. Set Theory. Functions and its Applications CHAPTER 1 CHAPTER 2. Preface... (v)

Contents. Set Theory. Functions and its Applications CHAPTER 1 CHAPTER 2. Preface... (v) (vii) Preface... (v) CHAPTER 1 Set Theory Definition of Set... 1 Roster, Tabular or Enumeration Form... 1 Set builder Form... 2 Union of Set... 5 Intersection of Sets... 9 Distributive Laws of Unions and

More information

THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS. S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974

THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS. S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974 THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS by S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974 ABSTRACT This note describes a simulation experiment involving

More information

Contents. Chapter 1 Vector Spaces. Foreword... (vii) Message...(ix) Preface...(xi)

Contents. Chapter 1 Vector Spaces. Foreword... (vii) Message...(ix) Preface...(xi) (xiii) Contents Foreword... (vii) Message...(ix) Preface...(xi) Chapter 1 Vector Spaces Vector space... 1 General Properties of vector spaces... 5 Vector Subspaces... 7 Algebra of subspaces... 11 Linear

More information

Let s now begin to formalize our analysis of sequential machines Powerful methods for designing machines for System control Pattern recognition Etc.

Let s now begin to formalize our analysis of sequential machines Powerful methods for designing machines for System control Pattern recognition Etc. Finite State Machines Introduction Let s now begin to formalize our analysis of sequential machines Powerful methods for designing machines for System control Pattern recognition Etc. Such devices form

More information

UNCERTAINTY ANALYSIS OF TWO-SHAFT GAS TURBINE PARAMETER OF ARTIFICIAL NEURAL NETWORK (ANN) APPROXIMATED FUNCTION USING SEQUENTIAL PERTURBATION METHOD

UNCERTAINTY ANALYSIS OF TWO-SHAFT GAS TURBINE PARAMETER OF ARTIFICIAL NEURAL NETWORK (ANN) APPROXIMATED FUNCTION USING SEQUENTIAL PERTURBATION METHOD UNCERTAINTY ANALYSIS OF TWO-SHAFT GAS TURBINE PARAMETER OF ARTIFICIAL NEURAL NETWORK (ANN) APPROXIMATED FUNCTION USING SEQUENTIAL PERTURBATION METHOD HILMI ASYRAF BIN RAZALI Report submitted in partial

More information

Series Expansions in Queues with Server

Series Expansions in Queues with Server Series Expansions in Queues with Server Vacation Fazia Rahmoune and Djamil Aïssani Abstract This paper provides series expansions of the stationary distribution of finite Markov chains. The work presented

More information

Contents. Chapter 1 Vector Spaces. Foreword... (vii) Message...(ix) Preface...(xi)

Contents. Chapter 1 Vector Spaces. Foreword... (vii) Message...(ix) Preface...(xi) (xiii) Contents Foreword... (vii) Message...(ix) Preface...(xi) Chapter 1 Vector Spaces Vector space... 1 General Properties of vector spaces... 5 Vector Subspaces... 7 Algebra of subspaces... 11 Linear

More information

Review Paper Machine Repair Problem with Spares and N-Policy Vacation

Review Paper Machine Repair Problem with Spares and N-Policy Vacation Research Journal of Recent Sciences ISSN 2277-2502 Res.J.Recent Sci. Review Paper Machine Repair Problem with Spares and N-Policy Vacation Abstract Sharma D.C. School of Mathematics Statistics and Computational

More information

Performance evaluation of production lines with unreliable batch machines and random processing time

Performance evaluation of production lines with unreliable batch machines and random processing time Politecnico di Milano Scuola di Ingegneria di Sistemi Polo territoriale di Como Master Graduation Thesis Performance evaluation of production lines with unreliable batch machines and random processing

More information

Tutorial: Optimal Control of Queueing Networks

Tutorial: Optimal Control of Queueing Networks Department of Mathematics Tutorial: Optimal Control of Queueing Networks Mike Veatch Presented at INFORMS Austin November 7, 2010 1 Overview Network models MDP formulations: features, efficient formulations

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

HITTING TIME IN AN ERLANG LOSS SYSTEM

HITTING TIME IN AN ERLANG LOSS SYSTEM Probability in the Engineering and Informational Sciences, 16, 2002, 167 184+ Printed in the U+S+A+ HITTING TIME IN AN ERLANG LOSS SYSTEM SHELDON M. ROSS Department of Industrial Engineering and Operations

More information

A Bernoulli Model of Selective Assembly Systems

A Bernoulli Model of Selective Assembly Systems Preprints of the 19th World Congress The International Federation of Automatic Control A Bernoulli Model of Selective Assembly Systems Feng Ju and Jingshan Li Department of Industrial and Systems Engineering,

More information

Approximate analysis of single-server tandem queues with finite buffers

Approximate analysis of single-server tandem queues with finite buffers Ann Oper Res (2013) 209:67 84 DOI 10.1007/s10479-011-1021-1 Approximate analysis of single-server tandem queues with finite buffers Remco Bierbooms Ivo J.B.F. Adan Marcel van Vuuren Published online: 16

More information

STATISTICS; An Introductory Analysis. 2nd hidition TARO YAMANE NEW YORK UNIVERSITY A HARPER INTERNATIONAL EDITION

STATISTICS; An Introductory Analysis. 2nd hidition TARO YAMANE NEW YORK UNIVERSITY A HARPER INTERNATIONAL EDITION 2nd hidition TARO YAMANE NEW YORK UNIVERSITY STATISTICS; An Introductory Analysis A HARPER INTERNATIONAL EDITION jointly published by HARPER & ROW, NEW YORK, EVANSTON & LONDON AND JOHN WEATHERHILL, INC.,

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1

Queueing systems. Renato Lo Cigno. Simulation and Performance Evaluation Queueing systems - Renato Lo Cigno 1 Queueing systems Renato Lo Cigno Simulation and Performance Evaluation 2014-15 Queueing systems - Renato Lo Cigno 1 Queues A Birth-Death process is well modeled by a queue Indeed queues can be used to

More information

A Study on M x /G/1 Queuing System with Essential, Optional Service, Modified Vacation and Setup time

A Study on M x /G/1 Queuing System with Essential, Optional Service, Modified Vacation and Setup time A Study on M x /G/1 Queuing System with Essential, Optional Service, Modified Vacation and Setup time E. Ramesh Kumar 1, L. Poornima 2 1 Associate Professor, Department of Mathematics, CMS College of Science

More information

Availability. M(t) = 1 - e -mt

Availability. M(t) = 1 - e -mt Availability Availability - A(t) the probability that the system is operating correctly and is available to perform its functions at the instant of time t More general concept than reliability: failure

More information

Stochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS

Stochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Stochastic Processes Theory for Applications Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv Swgg&sfzoMj ybr zmjfr%cforj owf fmdy xix Acknowledgements xxi 1 Introduction and review

More information

MARKOV MODEL WITH COSTS In Markov models we are often interested in cost calculations.

MARKOV MODEL WITH COSTS In Markov models we are often interested in cost calculations. MARKOV MODEL WITH COSTS In Markov models we are often interested in cost calculations. inventory model: storage costs manpower planning model: salary costs machine reliability model: repair costs We will

More information

Markov Chains (Part 4)

Markov Chains (Part 4) Markov Chains (Part 4) Steady State Probabilities and First Passage Times Markov Chains - 1 Steady-State Probabilities Remember, for the inventory example we had (8) P &.286 =.286.286 %.286 For an irreducible

More information

Discrete-Time Markov Decision Processes

Discrete-Time Markov Decision Processes CHAPTER 6 Discrete-Time Markov Decision Processes 6.0 INTRODUCTION In the previous chapters we saw that in the analysis of many operational systems the concepts of a state of a system and a state transition

More information

Chapter 1. Introduction. 1.1 Stochastic process

Chapter 1. Introduction. 1.1 Stochastic process Chapter 1 Introduction Process is a phenomenon that takes place in time. In many practical situations, the result of a process at any time may not be certain. Such a process is called a stochastic process.

More information

J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY

J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY SECOND EDITION ACADEMIC PRESS An imprint of Elsevier Science Amsterdam Boston London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo Contents

More information

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:

More information

Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions

Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions Electronic Notes in Theoretical Computer Science Vol. 85 No. 4 (2003) URL: http://www.elsevier.nl/locate/entsc/volume85.html Proxel-Based Simulation of Stochastic Petri Nets Containing Immediate Transitions

More information

SCHEDULING POLICIES IN MULTI-PRODUCT MANUFACTURING SYSTEMS WITH SEQUENCE-DEPENDENT SETUP TIMES

SCHEDULING POLICIES IN MULTI-PRODUCT MANUFACTURING SYSTEMS WITH SEQUENCE-DEPENDENT SETUP TIMES Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. SCHEDULING POLICIES IN MULTI-PRODUCT MANUFACTURING SYSTEMS WITH SEQUENCE-DEPENDENT

More information

Readings: Finish Section 5.2

Readings: Finish Section 5.2 LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout

More information

Discrete-event simulations

Discrete-event simulations Discrete-event simulations Lecturer: Dmitri A. Moltchanov E-mail: moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/elt-53606/ OUTLINE: Why do we need simulations? Step-by-step simulations; Classifications;

More information

Zero-Inventory Conditions For a Two-Part-Type Make-to-Stock Production System

Zero-Inventory Conditions For a Two-Part-Type Make-to-Stock Production System Zero-Inventory Conditions For a Two-Part-Type Make-to-Stock Production System MichaelH.Veatch Francis de Véricourt October 9, 2002 Abstract We consider the dynamic scheduling of a two-part-type make-tostock

More information

ON THE NON-EXISTENCE OF PRODUCT-FORM SOLUTIONS FOR QUEUEING NETWORKS WITH RETRIALS

ON THE NON-EXISTENCE OF PRODUCT-FORM SOLUTIONS FOR QUEUEING NETWORKS WITH RETRIALS ON THE NON-EXISTENCE OF PRODUCT-FORM SOLUTIONS FOR QUEUEING NETWORKS WITH RETRIALS J.R. ARTALEJO, Department of Statistics and Operations Research, Faculty of Mathematics, Complutense University of Madrid,

More information

Linear-Quadratic Optimal Control: Full-State Feedback

Linear-Quadratic Optimal Control: Full-State Feedback Chapter 4 Linear-Quadratic Optimal Control: Full-State Feedback 1 Linear quadratic optimization is a basic method for designing controllers for linear (and often nonlinear) dynamical systems and is actually

More information

Queuing Analysis. Chapter Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall

Queuing Analysis. Chapter Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall Queuing Analysis Chapter 13 13-1 Chapter Topics Elements of Waiting Line Analysis The Single-Server Waiting Line System Undefined and Constant Service Times Finite Queue Length Finite Calling Problem The

More information

A REVIEW AND APPLICATION OF HIDDEN MARKOV MODELS AND DOUBLE CHAIN MARKOV MODELS

A REVIEW AND APPLICATION OF HIDDEN MARKOV MODELS AND DOUBLE CHAIN MARKOV MODELS A REVIEW AND APPLICATION OF HIDDEN MARKOV MODELS AND DOUBLE CHAIN MARKOV MODELS Michael Ryan Hoff A Dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment

More information

Performance Evaluation of Queuing Systems

Performance Evaluation of Queuing Systems Performance Evaluation of Queuing Systems Introduction to Queuing Systems System Performance Measures & Little s Law Equilibrium Solution of Birth-Death Processes Analysis of Single-Station Queuing Systems

More information

An Introduction to Probability Theory and Its Applications

An Introduction to Probability Theory and Its Applications An Introduction to Probability Theory and Its Applications WILLIAM FELLER (1906-1970) Eugene Higgins Professor of Mathematics Princeton University VOLUME II SECOND EDITION JOHN WILEY & SONS Contents I

More information

Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands

Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands Rachel Q. Zhang Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109 Received

More information

Push and Pull Systems in a Dynamic Environment

Push and Pull Systems in a Dynamic Environment Push and Pull Systems in a Dynamic Environment ichael Zazanis Dept. of IEOR University of assachusetts Amherst, A 0003 email: zazanis@ecs.umass.edu Abstract We examine Push and Pull production control

More information

AN APPROPRIATE LOT SIZING TECHNIQUE FOR INVENTORY POLICY PROBLEM WITH DECREASING DEMAND

AN APPROPRIATE LOT SIZING TECHNIQUE FOR INVENTORY POLICY PROBLEM WITH DECREASING DEMAND AN APPROPRIATE LOT SIZING TECHNIQUE FOR INVENTORY POLICY PROBLEM WITH DECREASING DEMAND A THESIS Submitted in Partial Fulfillment of the Requirement for the Bachelor Degree of Engineering in Industrial

More information

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads Operations Research Letters 37 (2009) 312 316 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Instability of FIFO in a simple queueing

More information

Summer Review Packet. for students entering. AP Calculus BC

Summer Review Packet. for students entering. AP Calculus BC Summer Review Packet for students entering AP Calculus BC The problems in this packet are designed to help you review topics that are important to your success in AP Calculus. Please attempt the problems

More information

Retrial queue for cloud systems with separated processing and storage units

Retrial queue for cloud systems with separated processing and storage units Retrial queue for cloud systems with separated processing and storage units Tuan Phung-Duc Department of Mathematical and Computing Sciences Tokyo Institute of Technology Ookayama, Meguro-ku, Tokyo, Japan

More information

Multiserver Queueing Model subject to Single Exponential Vacation

Multiserver Queueing Model subject to Single Exponential Vacation Journal of Physics: Conference Series PAPER OPEN ACCESS Multiserver Queueing Model subject to Single Exponential Vacation To cite this article: K V Vijayashree B Janani 2018 J. Phys.: Conf. Ser. 1000 012129

More information

1 An Overview and Brief History of Feedback Control 1. 2 Dynamic Models 23. Contents. Preface. xiii

1 An Overview and Brief History of Feedback Control 1. 2 Dynamic Models 23. Contents. Preface. xiii Contents 1 An Overview and Brief History of Feedback Control 1 A Perspective on Feedback Control 1 Chapter Overview 2 1.1 A Simple Feedback System 3 1.2 A First Analysis of Feedback 6 1.3 Feedback System

More information

Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers

Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers Mohammad H. Yarmand and Douglas G. Down Department of Computing and Software, McMaster University, Hamilton, ON, L8S

More information