Tutorial: Optimal Control of Queueing Networks

Department of Mathematics
Tutorial: Optimal Control of Queueing Networks
Mike Veatch
Presented at INFORMS Austin, November 7, 2010

Overview
- Network models
- MDP formulations: features, efficient formulations
- Software
- Greedy policies
- Fluid model policies
- Monotone control: switching curve policies

Modeling for Insight
Answer questions such as:
- How does the length of a production line affect optimal work-in-process?
- How does the amount of cross-training of workers affect waiting time?
- How does the choice of which queue to serve depend on the queue lengths in a reentrant flow system?
Approach:
- Moderate-size networks; often interested in the optimal policy or its structural properties
- Model the features that matter for the parameter range of interest; check by simulating if needed
- Understand greedy actions and the tradeoff that makes other actions optimal
- Understand the deterministic (fluid) equivalent

Single-class Networks
Series line (diagram: arrivals at rate λ_1 flow through stations with holding costs c_1, ..., c_4 and service rates µ_1, ..., µ_4)
- Decisions: idle/busy at upstream servers (c_1 < c_2 < ... < c_n)
- Make-to-stock manufacturing system: finished goods are held to meet demand. Decisions: idle/busy; Veatch and Wein (1994, 1996)
- Routing control: arrival routing; Hajek (1984) (diagram: a router splits arrivals of rate α between two servers with rates µ_1 and µ_2)

Multiclass Networks
- Decisions: class served by each server (or the server idles)
- Multiple part types (diagram: part types 1, ..., k routed across machines 1, ..., m)
- Reentrant flow (diagram: arrivals at rate α_1 revisit machines with service rates µ_1, µ_2, µ_3)

A Manufacturing Network
Multiple part types, reentrant flow, probabilistic routing. (Figure taken from S. Meyn, Controlling Complex Networks.)

Stochastic Processing Networks
- Parallel flexible servers: Harrison (1998); Bell and Williams (2001). Arbitrary mapping from servers to classes (diagram: arrival rates λ_1, λ_2, costs c_1, c_2, classes 1-2, activities with rates µ_11, µ_21, µ_22, servers 1-2)
- Series line with flexible servers: Andradottir, Ayhan, and Down (2003); Andradottir and Ayhan (2005)
- Additional features: Harrison, "A broader view of Brownian networks" (2003). Activities can use multiple servers and multiple job classes, and can produce multiple jobs (join and split)


MDP Modeling Issues
- Describing states and transitions in Z^n_+ gives a common language across domains (manufacturing, communications, call centers, etc.)
- Event-based DP classifies the transitions; Koole (1998, 2007)
- Choosing the objective:
  - Time horizon? Infinite-horizon average cost is often realistic and avoids having to choose a discount rate; the discounted problem has better theory
  - Queue length or throughput/service level? This talk uses average weighted queue length
  - It is also practical to minimize queue length subject to a service level constraint, giving a constrained MDP; Sennott (2001), Altman (1999)

Average Cost MDP: Multiclass Network
(Diagram: a three-class example with arrival rate λ_1 and service rates µ_1, µ_2, µ_3.)
- Queue lengths x = (x_1, ..., x_n), n classes; server σ(i) serves class i
- Exponential service and interarrival times with rates µ_i and λ_i, normalized so that Σ_i (λ_i + µ_i) = 1
- Control: u_i = 1 if class i is served, 0 otherwise
- Feasible actions: u_i ≤ x_i, and Σ_{i: σ(i) = σ} u_i ≤ 1 for each machine σ
- Convert to discrete time (uniformization): transition probability matrix P_u under policy u
- One-stage cost c^T x; average cost J; differential cost of starting in state x relative to state 0: h(x)
- Bellman's equation: J + h(x) = c^T x + min_{u in A(x)} (P_u h)(x), for x in Z^n_+
- Solutions are the optimal average cost J* and differential cost h*(x)
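A minimal relative value iteration sketch for Bellman's equation above, on a two-class, single-server network. The parameters, truncation level, and iteration count are illustrative choices, not taken from the talk; transitions are uniformized exactly as described, with unused probability mass kept as a self-transition.

```python
import itertools

# Relative value iteration for a 2-class, single-server queue (illustrative
# parameters): Poisson arrivals, exponential services, uniformized so all
# event probabilities are rates / Lam, state space truncated at N per class.
lam = (0.20, 0.15)       # arrival rates
mu = (0.60, 0.50)        # service rates
c = (1.0, 3.0)           # holding costs; c2*mu2 > c1*mu1 favors class 2
N = 12                   # truncation level per class
Lam = sum(lam) + sum(mu) # uniformization constant

states = list(itertools.product(range(N + 1), repeat=2))
h = {x: 0.0 for x in states}

def actions(x):
    """Feasible actions: serve a nonempty class, or idle if all empty."""
    acts = [i for i in (0, 1) if x[i] > 0]
    return acts or [None]

def q_value(x, a, h):
    """Expected next-stage differential cost under action a (uniformized)."""
    val, stay = 0.0, 1.0
    for i in (0, 1):
        p = lam[i] / Lam                      # arrival to class i
        y = list(x); y[i] = min(y[i] + 1, N)  # reflect at truncation boundary
        val += p * h[tuple(y)]; stay -= p
    if a is not None:
        p = mu[a] / Lam                       # service completion of class a
        y = list(x); y[a] -= 1
        val += p * h[tuple(y)]; stay -= p
    return val + stay * h[x]                  # uniformization self-transition

for _ in range(1500):                         # relative value iteration
    w = {x: c[0]*x[0] + c[1]*x[1] +
            min(q_value(x, a, h) for a in actions(x)) for x in states}
    J = w[(0, 0)]                             # average-cost estimate
    h = {x: w[x] - J for x in states}         # renormalize at state 0

def policy(x):
    return min(actions(x), key=lambda a: q_value(x, a, h))

# In the interior the optimal policy is the c-mu rule: serve class 2 first.
print(policy((5, 5)), policy((5, 0)), round(J, 3))
```

With linear holding costs this recovers the cµ rule, which previews the greedy-policy discussion later in the talk.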

MDP: Stochastic Processing Networks
To simplify notation: no routing control, no splitting/merging, and no simultaneous service of more than one customer with the same resource
- A_kj = units of resource k required to perform activity j; q_k = units of resource k
- R_ij = 1 if activity j serves class i, and 0 otherwise
- p_ij = probability that a customer finishing service at class i is routed to class j; p_i0 = probability that a customer finishing service at class i departs
- Feasible actions: Au ≤ q, Ru ≤ x, u ≥ 0
- Stability: the effective arrival rate λ solves λ = α + P^T λ. The static allocation LP
  min ρ  s.t.  RMu = λ,  Au ≤ ρq,  u ≥ 0,  where M = diag(µ_j),
  must have an optimal solution ρ* < 1
- Possible transitions: arrivals, routing from i to j, departures
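A sketch of the stability check above for the special case of one activity per class and one server per activity, where the static allocation LP reduces to computing per-server workloads. The network data are illustrative, not from the talk.

```python
# Traffic equations and a stability check for a small reentrant network
# (illustrative data). With one activity per class and one server per
# activity, the static allocation LP reduces to per-server workloads
# rho_k = sum over classes i at server k of lambda_i / mu_i.
alpha = [1.0, 0.0, 0.0]          # exogenous arrival rate into each class
# P[i][j] = probability a class-i completion is routed to class j
P = [[0.0, 1.0, 0.0],            # class 1 -> class 2
     [0.0, 0.0, 1.0],            # class 2 -> class 3
     [0.0, 0.0, 0.0]]            # class 3 departs
mu = [3.0, 2.0, 4.0]             # service rate of each class
server = [0, 1, 0]               # class i is served at server[i] (reentrant)

# Solve lambda = alpha + P^T lambda by fixed-point iteration (Neumann
# series; converges because every customer eventually leaves).
lam = alpha[:]
for _ in range(100):
    lam = [alpha[j] + sum(P[i][j] * lam[i] for i in range(3))
           for j in range(3)]

rho = {}
for i, k in enumerate(server):
    rho[k] = rho.get(k, 0.0) + lam[i] / mu[i]
rho_star = max(rho.values())
print(lam, rho, rho_star < 1)    # stable iff rho* < 1
```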

State Space Truncations
- Construct a sequence of MDPs with finite state spaces S_N that increase to the whole state space S. Transitions out of S_N are turned off or mapped to some state in S_N
- Theorem (Sennott 1999). If the solutions h to Bellman's equation are bounded above in N (and bounded below in x), then the sequence of finite-state MDPs converges to the countable-state MDP in average cost and policy. This motivates truncating the space
- Naive truncation: x_i ≤ N for all i
- Better: x_i ≤ N_i, with N_i larger for classes with arrivals. Searching for accurate {N_i} requires many runs
- The optimal policy on a truncated space may also limit queue lengths in classes without arrivals. These limits can be determined from the policy, at least for series lines; Veatch (2006)
- Other models have clearer truncations, e.g., a maximum inventory level
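A quick count showing why per-class truncation matters, using a hypothetical 4-class example with large caps only where exogenous arrivals occur:

```python
# Size of the truncated state space S_N: naive uniform cap versus
# per-class caps N_i (hypothetical 4-class example). Classes fed only
# by upstream service often stay short under the optimal policy and
# can be truncated aggressively.
from math import prod

n_classes = 4
naive = (40 + 1) ** n_classes                       # x_i <= 40 for all i
tailored = prod(N_i + 1 for N_i in (40, 10, 4, 4))  # larger N_i at arrivals
print(naive, tailored, naive // tailored)           # ~250x fewer states
```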

Non-exponential Distributions
- Method of stages: service time is a sum of exponentials (smaller coefficient of variation). Adding S stages multiplies the number of states by S for each server. Can uniformize to obtain a discrete-time Markov chain as before
- Embed a DTMC at service completions (one non-exponential distribution). The number of arrivals in a period is Poisson, which increases the number of transitions (nonzeros in P)
- Discretization: sample at discrete times. The number of potential events in a period is Poisson
- Deterministic service times: machines often have nearly constant processing times plus failures. Discretize: if service time = time step, no additional states are needed, but service times must be equal. If service times are in small integer ratios, service time = k × (time step); must keep track of age, multiplying the number of states by k
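The variability reduction from the method of stages can be computed directly: an Erlang-S service time (S exponential stages, each with rate S·µ) keeps the mean at 1/µ but cuts the coefficient of variation to 1/√S, at the cost of multiplying the state count by S.

```python
# Mean and coefficient of variation of an Erlang-S service time built
# from S i.i.d. exponential stages of rate S*mu (method of stages).
from math import sqrt

def erlang_mean_cv(S, mu):
    stage_mean = 1.0 / (S * mu)
    mean = S * stage_mean            # = 1/mu, independent of S
    var = S * stage_mean ** 2        # independent stages: variances add
    return mean, sqrt(var) / mean    # CV = 1/sqrt(S)

print(erlang_mean_cv(1, 2.0))  # exponential: CV = 1
print(erlang_mean_cv(4, 2.0))  # 4 stages: CV = 0.5
```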

Machine Failures
- α_i = 1 if machine i is operational, 0 if failed
- Each unreliable machine doubles the number of states
- Approximate model: limit the number of failed states and adjust the failure rate; Veatch (2006)
- Use with deterministic service times: one source of variability

Speed-ups
- Avoid self-transitions when uniformizing, e.g., use the maximum service rate of a server, not the sum of its service rates
- Initialize h from previous runs or a simulated policy
- Apply a penalty at the queue-length boundary: if x_i = N_i and a transition increases x_i, project the new state back to the boundary and apply a penalty computed from a previous run
- Asynchronous value iteration
Numerical results for a make-to-stock series line with 5 unreliable machines (runs with 12 million states took 7 hours):

Enhancements                 Iterations   Time (sec)
None                         3300         357
Asynchronous VI              2900         258
Initialization               1100         93.6
Boundary penalty             800          72.8
Limited number of failures   800          36.9


Software: Generality
- Supported by NSF Grant DMI-0620787
- The stochastic processing network structure is an input!
- The number of classes determines the number of nested loops: creates n-dimensional counters. Outer loops use the counter; the inner loop just increments an integer
- Loops over feasible actions
- Arbitrary routing within the network: computes (P_u h)(x)
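A minimal in-memory analogue of "the network structure is an input," with hypothetical field names (the actual qnetdp input-file format differs): because the topology, rates, routing, and truncations are all data, a single solver code base can handle any network.

```python
# Hypothetical network description object (field names are illustrative,
# not the qnetdp input format): the solver reads the structure as data.
from dataclasses import dataclass

@dataclass
class Network:
    n_classes: int
    server_of: list   # server_of[i] = server that works on class i
    lam: list         # exogenous arrival rate per class
    mu: list          # service rate per class
    routing: list     # routing[i][j] = P(class-i completion -> class j)
    trunc: list       # queue-length truncation N_i per class

    def servers(self):
        return sorted(set(self.server_of))

net = Network(
    n_classes=3, server_of=[0, 1, 0],         # reentrant: server 0 twice
    lam=[1.0, 0.0, 0.0], mu=[3.0, 2.0, 4.0],
    routing=[[0, 1, 0], [0, 0, 1], [0, 0, 0]],
    trunc=[40, 10, 10],
)
print(net.servers())  # [0, 1]
```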

Software: Usability
- Commented input file contains all data:
  - network structure: server-to-class mapping, routing
  - arrival and service rates
  - algorithm parameters, including queue-length truncations
- Includes documentation
- Extensively tested and used at Gordon College; some testing by other researchers
- Access: download and compile the C++ code at http://www.math-cs.gordon.edu/~senning/software/qnetdp-1.1.1.tar.gz (look on http://www.math-cs.gordon.edu/~senning/qnetdp for the latest version)

Software: Speed
Solved a 6-class example with traffic intensity 0.6 and truncation (40, 2, 40, 10, 4, 4) in about 1 hour. The truncation has 1.4 million states and appears accurate to within 0.1%. (Diagram: arrivals at rates α_1 and α_3 into a network with service rates µ_1, ..., µ_6.)

Optimal Policy: 2 x 2 N System
c = (1, 3), µ = (1, 0.5, 1), traffic intensity = 0.9
As the arrival rate λ_1 increases (λ_1 = 0.9, 1, 1.1), more helping is used. (Plots show, for each λ_1, the states where server 2 serves class 2 versus class 1, i.e., helps.)

Optimal Policy: 3 x 3 Floater
(Diagram: three classes and three servers; server 3 is a floater. Plots show server 3's action: serve class 1, 2, or 3.)
Resembles the Longest Queue policy, particularly in states with small x_i.

Optimal Policy: 3 x 3 Chain
(Plots show the regions where servers 1, 2, and 3 each serve class 1, 2, or 3.)
- Server 1 gives nearly static priority to class 1 (highest cost)
- Server 3 often serves class 2 (middle cost), especially when x_1 is large. Explanation: server 2 is serving class 1 in these states


Greedy Policies
- Greedy policy: u~(x) = argmin_{u in A(x)} E_u[c^T x(t+1) | x(t) = x]
- A network admits a pathwise optimal solution if, from any initial state, there is a policy that minimizes the cost rate at all times t with probability one. This implies that greedy is optimal
- For linear cost c^T x, the greedy action depends only on which buffers are empty, so greedy is a static priority policy for x > 0
- Examples: multiclass queue (cµ rule); Klimov's problem (single server with rework); other systems with certain parameter values
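For a multiclass queue with linear holding cost, the greedy (one-step lookahead) action reduces to the cµ rule, which can be written directly as a static priority over nonempty buffers. A small illustrative implementation:

```python
# Greedy action for a multiclass single-server queue with linear holding
# cost: among nonempty classes, serve the one with the largest c_i * mu_i.
def cmu_action(x, c, mu):
    nonempty = [i for i, xi in enumerate(x) if xi > 0]
    if not nonempty:
        return None                        # nothing to serve: idle
    return max(nonempty, key=lambda i: c[i] * mu[i])

c, mu = (1.0, 3.0, 2.0), (2.0, 1.0, 0.5)   # c*mu = (2, 3, 1)
print(cmu_action((4, 2, 7), c, mu))  # class 1: largest c*mu among nonempty
print(cmu_action((4, 0, 7), c, mu))  # queue 1 empty: class 0 wins
print(cmu_action((0, 0, 0), c, mu))  # all empty: idle
```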

Establishing Greedy Optimality
- Stochastic coupling: Walrand (1988), route to shortest queue; Liu, Nain, and Towsley (1995), sample path methods; de Vericourt and Veatch (2003), make-to-stock queue
  - Trajectories under two different policies are compared
  - Show that one trajectory has smaller (or equal) cost rate at all times; the trajectories may eventually merge with probability one
- Preservation by the DP operator: if the greedy policy is also h*-greedy, it is optimal:
  u*(x) = argmin_{u in A(x)} E_u[h(x(t+1)) | x(t) = x]
  Show that this property is preserved by the DP operator. Then greedy is optimal for all time horizons.

A cµ Rule for the N System
(Diagram: two classes and two servers; server 2 can serve both classes.)
Theorem. If c_2 µ_22 ≥ c_1 µ_21, then the cµ rule is optimal.
- The cµ rule sets server 2's priorities. When x_1 = 1, resolve competition for the class 1 customer using greedy
- For the case in the theorem: server 2 gives priority to class 2 ("fixed before help") and uses the faster server when x = (1, 0)

A cµ Rule for m x m Two-tiered
(Diagram: m classes and m servers in a two-tiered arrangement.)
Theorem. If servers can cooperate and c_i µ_ii ≥ c_1 µ_i1 for i = 2, ..., m, then the cµ rule is optimal.
The cµ rule for this case: server 2 gives priority to class 2, server 3 to class 3, etc.

cµ Rules for Parallel Flexible Servers
- Multiclass queue: Buyukkoc, Varaiya, and Walrand (1985). Time interchange argument; uses discretization (or a discrete-time model) so that arrivals are independent of service
- N system with "fixed before help," c_2 µ_22 ≥ c_1 µ_21: Down and Lewis (2008). Policy is preserved by the DP operator
- Veatch (2008 w.p.): stochastic coupling; processes merge, but the policy is not just a time interchange
- W system with "fixed before help" and failures: Saghafian, Van Oyen, and Kolfal (2009 w.p.). Policy is preserved by the DP operator
- m x m two-tiered system with "fixed before help": stochastic coupling; processes do not (always) merge, but the expected future cost is equal

When Are Greedy Policies Not Optimal?
- There must be a tradeoff between short-term and future costs
- Future costs are incurred by the greedy policy, relative to an optimal policy, only when a queue empties under the greedy policy, forcing the action to change, e.g., idling a server
- The optimal policy uses safety stock, or buffering, to prevent future idleness
- The queue may empty because a slower server can't keep up or because of random service times
(Example: arrival routing with α = 1, µ = (0.5, 1.5), c = (1, 1.5).)
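The routing example above can be simulated to see the failure mode: the greedy router always picks the cheap queue (c_1 < c_2), whose server cannot keep up (µ_1 < α), so queue 1 grows without bound, while a simple threshold policy (an illustrative stand-in for the optimal switching curve; the threshold value 3 is arbitrary) overflows work to the fast server and stays stable.

```python
# Uniformized simulation of arrival routing to two parallel servers with
# alpha = 1, mu = (0.5, 1.5), c = (1, 1.5): greedy vs. threshold routing.
import random

alpha, mu, c = 1.0, (0.5, 1.5), (1.0, 1.5)
Lam = alpha + sum(mu)                    # uniformization constant

def simulate(route, T=200_000, seed=1):
    rng = random.Random(seed)
    x, cost = [0, 0], 0.0
    for _ in range(T):
        cost += c[0] * x[0] + c[1] * x[1]
        u = rng.random() * Lam
        if u < alpha:
            x[route(x)] += 1             # arrival: routing decision
        elif u < alpha + mu[0]:
            x[0] = max(x[0] - 1, 0)      # server 1 completion
        else:
            x[1] = max(x[1] - 1, 0)      # server 2 completion
    return cost / T                      # time-average holding cost

greedy = simulate(lambda x: 0)           # always the cheap, slow queue
threshold = simulate(lambda x: 0 if x[0] < 3 else 1)
print(round(greedy, 1), round(threshold, 1))
```

Under greedy routing queue 1 is unstable, so its average cost grows with the run length; the threshold policy's cost stays small.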

When to Consider Switching Curves
- First identify a tradeoff between short-term and future costs
- Does the future cost occur with sufficient probability to be significant?
- Sensitivities are near boundaries of the state space (small queues); the exact location of the switching curve usually doesn't matter. Chen, Pandit, and Meyn (2003), "In search of sensitivity"


Fluid Model
Replace processes by their mean rates: continuous, deterministic, transient.
V*(x) = min ∫_0^∞ c^T q(t) dt
s.t. q'(t) = Bu(t) + α, q(0) = x, where B = (P^T − I)M, M = diag(µ_i)
Discrete feasibility (DFEAS): Cu~(t) ≤ 1, u~(t) ≥ 0; boundary: u~_i(t) = 0 if x_i(t) = 0
Fluid feasibility (FFEAS): Cu(t) ≤ 1, u(t) ≥ 0; boundary: [Bu(t) + α]_i ≥ 0 if q_i(t) = 0
- (FFEAS) does not imply (DFEAS)
- Greedy for the fluid differs from greedy for the discrete model only on boundaries
- There may be a translation feasibility problem
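The fluid cost ∫ c^T q(t) dt under a given priority policy can be approximated by Euler integration of q' = Bu + α, written componentwise below for a two-class single-server queue. The rates are illustrative, and the crude "full effort to the first nonempty class" allocation chatters at the boundary rather than tracking it exactly; it still shows the cµ priority (class 2 first, since c_2 µ_2 > c_1 µ_1) beating the reverse priority.

```python
# Euler integration of the two-class single-server fluid model
# qdot_i = lam_i - mu_i * u_i under a strict priority policy.
lam = (0.2, 0.15); mu = (0.6, 0.5); c = (1.0, 3.0)

def fluid_cost(x0, priority, dt=1e-3, horizon=60.0):
    q = list(x0); cost = 0.0
    for _ in range(int(horizon / dt)):
        # full effort to the first nonempty class in priority order
        u = [0.0, 0.0]
        for i in priority:
            if q[i] > 1e-9:
                u[i] = 1.0
                break
        for i in (0, 1):
            q[i] = max(q[i] + (lam[i] - mu[i] * u[i]) * dt, 0.0)
        cost += (c[0] * q[0] + c[1] * q[1]) * dt
    return cost

x0 = (5.0, 5.0)
print(fluid_cost(x0, (1, 0)) <= fluid_cost(x0, (0, 1)))  # c-mu priority wins
```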

Fluid Limits and Asymptotic Optimality
- For each work-conserving policy and x in R^n_+, consider a sequence of processes x^N with initial states Nx; cumulative allocation T_i(t) = ∫_0^t u_i(s) ds
- Existence of a fluid limit, Dai (1995): for a.e. sample path, there is a subsequence of (1/N)(x^N(Nt), T^N(Nt)) that converges u.o.c. to (q(t), T(t)) satisfying (FFEAS) and the fluid dynamics
- Asymptotic optimality, Meyn (2000): if a stable policy exists, then there exists an optimal policy for the MDP whose fluid limits are optimal for the fluid. Further, lim_{θ→∞} h*(θx) / V*(θx) = 1
- The fluid limit may not be unique or easily constructed

Scaling the Policy
- Scaled policy: u_∞(x) = lim_{N→∞} u(Nx_1, ..., Nx_n), when the limit exists
- Fluid limit policy: under mild conditions, the collection of fluid limit trajectories under a policy defines a fluid policy u(x), except at x where the control changes (the allocation T(t) is not differentiable) or the fluid limit is not unique
- Assume: 1) unique fluid limits from all initial states (except a set of lower dimension); 2) scalable: the scaled policy exists and consists only of extreme points
- Control switching sets (CSSs): regions where a certain action is used
Theorem. In the interior of a CSS of full dimension, the fluid limit policy matches the scaled policy.
Corollary. If a stable policy exists for the MDP, then there exists a discrete optimal policy whose scaled policy matches some fluid optimal policy in the interior of CSSs of full dimension.
Asymptotic slopes of switching curves agree for some discrete optimal policy and some fluid optimal policy.

Arrival Routing: Fluid and MDP Policy
(Diagram: arrivals at rate α routed to two servers with rates µ_1 and µ_2.)
Case 1: α > µ_2. α = 1, µ = (0.65, 0.65), c = (1, 2)
Case 2: α < µ_2. α = 1, µ = (0.5, 1.5), c = (1, 1.5)

Fluid Switching Curves: Summary
- Holds for other problems and in higher dimensions: for linear-cost sequencing and routing problems, the fluid policy defines the asymptotic slopes of switching surfaces
- Fluid policies can be computed for small or simple problems
- Distinguish three cases:
  1) Neither is greedy. Both have interior switching; the fluid policy is close. The policy is non-greedy because a server will fall behind.
  2) Fluid is greedy. The MDP has interior switching and is non-greedy to buffer against randomness.
  3) Both are greedy. The fluid policy matches the MDP policy except on the boundary.


Monotone Control
- Optimal policies have switching curve form in most models: serve class i when x_k ≤ s(x) and class j otherwise (for x_i, x_j > 0)
- Generalizes to models with costs of service: the optimal class i service rate is increasing (decreasing) in x_j, known as monotone control
- Geometry of transitions. Serving class i gives the departure transition x → x − e_i; this service will be decreasing in x_j if h is submodular:
  h(x + e_i) − h(x) ≥ h(x + e_i + e_j) − h(x + e_j)
  Interpretation: a departure from queue j can turn class i service on but not off (increase the service rate). A more-more relationship
- Supermodular:
  h(x + e_i) − h(x) ≤ h(x + e_i + e_j) − h(x + e_j)
  A departure from queue j can turn class i service off but not on (decrease the service rate). A more-less relationship
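These inequalities are easy to check numerically for a candidate differential cost on a grid. A small sketch, using the submodularity convention written above (illustrative test functions, not from the talk):

```python
# Check h(x+e1) - h(x) >= h(x+e1+e2) - h(x+e2) on a finite grid, i.e.,
# the increment in direction e1 is nonincreasing in x2 (submodularity,
# in the convention used above).
def is_submodular(h, n=10):
    return all(h((x1 + 1, x2)) - h((x1, x2)) >=
               h((x1 + 1, x2 + 1)) - h((x1, x2 + 1))
               for x1 in range(n) for x2 in range(n))

print(is_submodular(lambda x: max(x)))             # True
print(is_submodular(lambda x: (x[0] + x[1])**2))   # False: supermodular
```

In practice one would run this on h* from value iteration to conjecture a switching-curve structure before attempting a proof.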

Submodularity-type Inequalities
- Submodularity can be extended to other transitions d_i and d_j: h(x + d_i) − h(x) ≥ h(x + d_i + d_j) − h(x + d_j); Veatch and Wein (1992)
- Altman et al. (2003) extensively study the most common transitions, e.g., d = e_j − e_i for a customer moving from class i to class j
- Convexity is often included with submodularity. It can be interpreted as: a transition occurring can turn that transition off but not on
- Implications for switching curves: submodularity and convexity imply that the switching curve for class i departures is monotonic in x_j. There are similar limitations on switching curves for other transitions; see Veatch and Wein (1992)

Establishing Submodularity-type Inequalities
- Show that the inequalities (generally including convexity) hold for the cost rate and are preserved by the DP operator. Many papers do this in an ad hoc fashion for one model
- Most difficulties occur at the state space boundaries; some general results appear in Weber and Stidham (1987) and Veatch and Wein (1992)
- The score space approach of Glasserman and Yao (1994) converts all transitions to e_i but increases the dimension
- Koole (2007) provides a general framework using event-based DP: the DP operator is decomposed into simple operators, each of which is shown to preserve certain inequalities
- Induction on x_i has been used on some problems; Wu, Lewis, and Veatch (2005)

Conclusion
- Computing power and careful truncation make it possible to compute optimal controls for interesting networks with several classes and additional features
- General software makes it convenient ("standard" stochastic processing networks)
- Look for tradeoffs and the main features of a model before numerically optimizing
- Policy visualization software makes it easier to look for policy insights
- Fluid analysis can give more information about the optimal policy: asymptotic slopes of switching curves
- Switching curve policies are pervasive and can be proven optimal for some networks