An Integrated Optimizing-Simulator for the Military Airlift Problem

Tongqiang Tony Wu, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ
Warren B. Powell, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ
Alan Whisman, Air Mobility Command, Scott Air Force Base

March 18, 2007

Abstract

There have been two primary modeling and algorithmic strategies for problems in military logistics: simulation, which offers tremendous modeling flexibility, and optimization, which offers the intelligence of math programming. Each offers significant theoretical and practical advantages. In this paper, we use the framework of approximate dynamic programming to produce a form of optimizing-simulator that offers a spectrum of models and algorithms differentiated only by the information content of the decision function. We claim that there are four information classes, allowing us to construct decision functions with increasing degrees of sophistication and realism as the information content of a decision is increased. Using the context of the military airlift problem, we illustrate the succession of models, and demonstrate that we can achieve improved performance as more information is used.

The military airlift problem, faced by the analysis group at the Air Mobility Command (AMC), deals with effectively routing a fleet of aircraft to deliver loads of people and goods (troops, equipment, food and other forms of freight) from different origins to different destinations as quickly as possible under a variety of constraints. In the parlance of military operations, loads (or "demands") are referred to as requirements. Cargo aircraft come in a variety of sizes, and it is not unusual for a single requirement to need multiple aircraft. If the requirement includes people, the aircraft has to be configured with passenger seats. There are two major classes of models that have been used to solve the military airlift (and closely related sealift) problem: optimization models (Morton et al. (1996), Rosenthal et al. (1997), Baker et al. (2002)), and simulation models, such as MASS (Mobility Analysis Support System) and AMOS, which are heavily used within the AMC. The analysis group at AMC has preferred AMOS because it offers tremendous flexibility, as well as the ability to handle uncertainty. However, simulation models require that the user specify a series of rules to obtain realistic behaviors. Optimization models, on the other hand, avoid the need to specify various decision rules, but they sharply limit the ability to represent complex operational rules, as well as the ability to incorporate uncertainty. There have been several efforts to incorporate uncertainty using stochastic programming (see, for example, Goggins (1995) and Morton et al. (2002)). However, these have generally been limited to very small problems, and do not improve the ability of optimization models to handle complex operational details. Stochastic programming models appear to be best suited to robust design problems rather than operational problems. One of the biggest weaknesses of simulation models is actually also a strength.
Simulation models such as AMOS require that decisions (for example, which aircraft should cover a load of freight) be made using rule-based logic. For problems in transportation and logistics, designing rules that determine which aircraft should cover a load can get extremely complex (in the case of AMOS, the rules are fairly simplistic, and as a result can produce odd results). However, the rules are easy to understand and modify. By contrast, optimization models require that the behavior of a model be governed by various costs and penalties (for example, to govern the tradeoff between flying distance and using the best type of cargo aircraft for a particular load). Optimization models have been widely studied in the transportation science community (see Crainic & Laporte (1997) for an excellent survey; other examples include Cordeau et al. (1998), Rexing et al. (2000), and Joborn et al. (2004), although the list is extremely long). Optimization models are often promoted because the solutions are optimal (or in some sense better than what is happening in practice); simulation models lay claim to being more realistic, exploiting their ability to capture a high level of detail. In practice, the real value of optimization models is not in their solution quality, but rather in their ability to respond with intelligence to new datasets. Simulation models are driven by rules, and it is often the case that new sets of rules have to be coded to produce desirable behaviors when faced with new datasets. Optimization models often behave very reasonably when given entirely new datasets, but this behavior depends on the calibration of a cost model, which typically has to balance hard dollar costs (such as the cost of fuel) against soft costs that are introduced to encourage desirable behaviors. As a result, "optimal" only takes meaning relative to a particular objective function; it is not hard for an experienced expert to look at a solution and claim "it may be optimal, but it is wrong!" Of course, experienced modelers have long recognized that optimization models are only approximations of reality, but this has not prevented claims of savings based on the results of an optimization model. In this paper, we introduce the concept of the optimizing-simulator and illustrate how it works for airlift mobility problems. The optimizing-simulator uses a decision function which might depend on up to four classes of information: 1) knowledge (what we know about the system at time t); 2) forecasts of exogenous information events (new customer demands, equipment failures, weather delays); 3) forecasts of the impact of decisions now on the future (giving rise to value functions used in dynamic programming); and 4) forecasts of decisions (which we represent as patterns).
The last three classes of information are all a form of forecast. If we use just the first class, we get a classical simulation using a myopic policy. If we use the second class, we obtain a rolling-horizon procedure. The third class of information uses approximate dynamic programming to estimate the value of being in a state. For some problem classes, this has been shown to produce near-optimal solutions (Godfrey & Powell (2002)). The fourth class allows us to combine cost-based logic (required for any optimization model) with particular types of rules (for example, "we prefer to use C-17s for loads originating in Europe"). This class introduces the use of proximal point algorithms. Both the third and fourth classes require that the simulation be run iteratively. The goal of our research is to bring the simulation and optimization communities together in a single framework. Our strategy steps forward in time, making it possible to introduce uncertainty,

as well as complex operational constraints, just as in any simulation model. The difference is that we use cost-based logic combined with iterative learning (if we use the third and fourth classes of information) to obtain optimizing behavior. This research provides the substance behind a recommendation made in Schank et al. (1994):

"For the future, we recommend research into the integration of the KB (Knowledge Based) and the MP (Math Programming) procedures. The optimization approach directly determines the least-cost set of assets but does so with some loss of detail and by assuming that the world operates in an optimal fashion. KB simulations capture details and provide more realistic representations of real-world activities but do not currently balance trade-offs among transport assets, procedures, and cost."

We present a model of the military airlift problem based on the DRTP (dynamic resource transformation problem) notational framework of Powell et al. (2001). We use the concept of an optimizing-simulator for making decisions, where we propose a spectrum of decision functions by controlling the amount of information available to the decision function. The DRTP notational framework provides a natural bridge between optimization and simulation through some subtle but critical notational changes that make it easier to model the information content of a decision. In addition, the DRTP modeling paradigm provides a high level of generality in the representation of the underlying problem. This paper makes the following contributions:

- The airlift analysis community uses either simulation-based models, which can handle a high level of detail, or optimization models, which offer better solutions. We provide the first model which combines these two fields.

- We introduce, for the first time, the idea of simulating with up to four classes of information, creating a continuum of models that spans simulation and optimization, combining both cost-based and rule-based logic.

- Using an actual airlift problem, we demonstrate the impact of increasing the information available to a decision on both the objective function and throughput (a key military measure).

- We show how we can incorporate simple rules in the form of low-dimensional patterns, allowing users to influence the behavior of the simulation without having to change the model.

- We show how to solve the air-base congestion problem in a simulation-based setting, avoiding the bias of first-come, first-served rules.

We begin in section 1 with a summary of the history of mobility modeling in the military. In section 2, we briefly review the NRMO model, the AMOS model and the SSDM model. Our formulation of the military airlift problem, the optimizing-simulator, is presented in section 3. Section 4 describes a spectrum of decision functions, starting with the rule-based logic used in the AMOS simulator, moving through a series of cost-based policies that reflect the second and third classes of information, and closing with a hybrid cost-based, rule-based policy that includes the fourth information class. In section 5 we simulate all the different information classes, and show that as we increase the information (that is, use additional information classes) we obtain better solutions (measured in terms of throughput, a common measure used in the military) and realism (reflected by our ability to match desired patterns of behavior). Section 6 concludes the paper.

1 The history of mobility modeling

Ferguson & Dantzig (1955) was one of the first efforts to apply mathematical models to air-based transportation. Subsequently, numerous studies were conducted on the application of optimization modeling to the military airlift problem. Mattock et al. (1995) describe several mathematical modeling formulations for military airlift operations. The RAND Corporation published a very extensive analysis of airfield capacity in Stucker & Berg (1999). According to Baker et al. (2002), research on air mobility optimization at the Naval Postgraduate School (NPS) started with the Mobility Optimization Model (MOM). This model is described in Wing et al. (1991) and covers both sealift and airlift operations. As a result, the MOM model is not designed to capture the characteristics specific to airlift operations, but it has the merit of being time-dependent.
THRUPUT, a general airlift model developed by Yost (1994), captures the specifics of airlift operations but is static. The United States Air Force Studies and Analysis Agency in the Pentagon asked NPS to combine the MOM and THRUPUT models into one model that would be time-dependent and would also capture the specifics of airlift operations (Baker et al. (2002)). The resulting model, called THRUPUT II, is described in detail in Rosenthal et al. (1997). During the development of THRUPUT II at NPS, a group at RAND developed a similar model called CONOP (CONcept of OPerations), described in Killingsworth & Melody (1997). The THRUPUT II and CONOP models each possessed several features that the other lacked. Therefore, the Naval Postgraduate

School and the RAND Corporation merged the two models into NRMO (NPS/RAND Mobility Optimizer), described in Baker et al. (2002). Crino et al. (2002) introduced a group-theoretic tabu search method to solve the theater distribution vehicle routing and scheduling problem. Their heuristic methodology evaluates and determines the routing and scheduling of multi-modal theater transportation assets at the individual asset operational level. A number of simulation models have also been proposed for airlift problems (and related problems in military logistics). Schank et al. (1991) reviews several deterministic simulation models. Burke et al. (2002) at the Argonne National Laboratory developed a model called TRANSCAP (Transportation System Capability) to simulate the deployment of forces from Army bases. The heart of TRANSCAP is the discrete-event simulation module developed in MODSIM III. Perhaps the most widely used model at the Air Mobility Command is AMOS, a discrete-event worldwide airlift simulation model used in strategic and theater operations to deploy military and commercial airlift assets. It was once the standard for all airlift problems in AMC, and all airlift studies were compared with the results produced by the AMOS model. AMOS provides a very high level of detail, allowing AMC to run analyses on a wide variety of issues. One feature of simulation models is their ability to handle uncertainty, and as a result there has been a steady level of academic attention toward incorporating uncertainty into optimization models. Dantzig & Ferguson (1956) was one of the first studies of uncertain customer demands in the airlift problem. Midler & Wollmer (1969) also take into account stochastic cargo requirements, formulating two two-stage stochastic linear programming models: a monthly planning model and a daily operational model for flight scheduling. Goggins (1995) extended THRUPUT II (Rosenthal et al. (1997)) to a two-stage stochastic linear program to address the uncertainty of aircraft reliability. Niemi (2000) expands the NRMO model to incorporate stochastic ground times through a two-stage stochastic programming model. To reduce the number of scenarios and obtain a tractable solution, Niemi assumes that the set of scenarios is identical for each airfield and time period, and that a scenario is determined by the outcomes of repair times of different types of aircraft. The resulting stochastic programming model has an equivalent deterministic linear programming formulation. Granger et al. (2001) compared a simulation model and a network approximation model for the impact of stochastic flying times and ground times on a simplified airlift network (one origin aerial port of embarkation (APOE), three intermediate airfields and one destination

aerial port of debarkation (APOD)). Based on the study, they suspect that a combination of simulation and network optimization models should yield much better performance than either alone: such an approach would use a network model to explore the variable space and identify parameter values that promise improvements in system performance, then validate these by simulation. Morton et al. (2002) developed the Stochastic Sealift Deployment Model (SSDM), a multi-stage stochastic programming model to plan the wartime sealift deployment of military cargo subject to stochastic attack. They used scenarios to represent the possibility of random attacks. A related method in Yost & Washburn (2000) combines linear programming with partially observable Markov decision processes, applied to a military attack problem in which aircraft attack targets in a series of stages. They assume that the expected value of the reward (destroyed targets) and of resource (weapon) consumption are known for each policy, where a policy is chosen from a finite feasible set. Their method requires that the number of possible states of each object is small and that the resource constraints are satisfied on average. In the military airlift problem, the numbers of possible states, actions and outcomes are all very large, so we cannot compute the expected value of the reward before we solve the problem. In addition, in a military airlift problem we generally require that the resource constraints be satisfied strictly.

2 A comparison of models for the military airlift problem

In this section, we review the characteristics of the NRMO and AMOS models to compare linear programming with simulation, where a major difference is the modeling of information and the ability to handle different forms of complexity. We also review a stochastic programming model, which represents one way of incorporating uncertainty into an optimization model.
Following this discussion, section 3 presents a model of the military airlift problem based on a modeling framework presented in Powell et al. (2001). At the end of section 3, we compare all four modeling strategies.

2.1 The NRMO Model

The NRMO model has been in development since 1996 and has been employed in several airlift studies. For a detailed review of the NRMO model, including a mathematical description, see Baker et al. (2002). The goal of the NRMO model is to move equipment and personnel in a timely fashion from a set of origin bases to a set of destination bases using a fixed fleet of aircraft with differing

characteristics. This deployment is driven by the movement and delivery requirements specified in a list called the Time-Phased Force Deployment Data set (TPFDD). This list essentially contains the cargo and troops, along with their attributes, that must be delivered to each of the bases of a given military scenario. Aircraft types for the NRMO runs reported by Baker et al. (2002) include the C-5, C-17, C-141, Boeing 747, KC-10 and KC-135. Different types of aircraft have different features, such as passenger and cargo-carrying capacity, airfield capacity consumed, range-payload curve, and so on. The range-payload curve specifies the weight that an aircraft can carry given a distance traveled; these curves are piecewise linear concave. The activities in NRMO are represented using three time-space networks: the first flows aircraft, the second cargo (freight and passengers), and the third crews. Cargo and troops are carried from onload aerial ports of embarkation (APOEs) to either offload aerial ports of debarkation (APODs) or forward operating bases (FOBs). Certain requirements need to be delivered to the FOBs via some other means of transportation after being offloaded at the APODs. Some aircraft, however, can bypass the APODs and deliver directly to the FOBs. Each requirement starts at a specific APOE dictated by the TPFDD and then is either dropped off at an APOD or a FOB. NRMO makes decisions about which aircraft to assign to which requirement, which freight will move on an aircraft, and which crew will handle a move, as well as variables that manage the allocation of aircraft between roles such as long-range, intertheater operations and shorter shuttle missions within a theater. The model can even handle the assignment of KC-10s between the roles of strategic cargo hauler and mid-air refueler. NRMO minimizes a rather complex objective function that is based on several costs assigned to the decisions.
There are a variety of costs, including hard operating costs (fuel, maintenance) and soft costs to encourage desirable behaviors. For example, NRMO assigns a cost to penalize deliveries of cargo or troops that arrive after the required delivery date; this penalty structure charges a heavier penalty the later the requirement is delivered. Another term penalizes cargo that is simply not delivered. The objective also penalizes reassignments of cargo and deadheading crews. Lastly, the objective function offers a small reward for planes that remain at an APOE, as these bases are often in the continental US and are therefore close to the home bases of most planes. The idea behind this reward is to account for wartime uncertainty and to keep unused planes well positioned in case of unforeseen contingencies. The constraints of the model can be grouped into seven categories: (1) demand satisfaction; (2) flow balance of aircraft, cargo and crews; (3) aircraft delivery capacity for cargo and passengers; (4) the number of shuttle and tanker missions per period; (5) initial allocation of aircraft and crews; (6) the usage of aircraft of each type; and (7) aircraft handling capacity at airfields. A general statement of the model (see Baker et al. (2002) for a detailed description) is given by:

min_x Σ_t c_t x_t    (1)

subject to

A_t x_t − Σ_{τ=1}^{t} Δ_{t−τ,t} x_{t−τ} = R̂_t   for all t    (2)
B_t x_t ≤ u_t   for all t    (3)
x_t ≥ 0   for all t    (4)

Here t represents when an activity starts and τ is the time required to complete an action. The matrices A_t and Δ_{t−τ,t} handle the system dynamics, and B_t captures possible coupling constraints that run across the flows of aircraft, cargo and crews. u_t is the upper bound on flows, and R̂_t models the arrival of new resources to be managed. x_t is the decision vector, and c_t is the corresponding cost vector. NRMO is implemented in the algebraic modeling language GAMS (Brooke et al. (1992)), which facilitates the handling of states. There are a number of sets and subsets in NRMO, such as sets of time periods, requirements, cargo types, aircraft types, bases, and routes. To make a decision, one element of each suitable set is pulled out to form an index combination, such as an aircraft of a certain type delivering a certain requirement on a certain route departing at a certain time. Infeasible index combinations are screened out to keep the computation tractable. The information in the NRMO model is captured in the TPFDD, which is known at the beginning of the horizon. NRMO requires that the behavior of the model be governed by a cost model (which simulators do not require). The use of a cost model minimizes the need for extensive tables of rules to produce good behaviors.
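A general statement like (1)-(4) can be handed to any off-the-shelf LP solver. The following sketch is a toy instance in this spirit, with invented aircraft capacities, requirement sizes and costs (it is not NRMO data): we minimize delivery cost subject to demand-satisfaction equalities and fleet-capacity bounds.

```python
# Toy LP in the spirit of formulation (1)-(4): minimize c.x subject to
# equality (demand) constraints, inequality (capacity) constraints, and
# x >= 0.  All numbers are invented for illustration.
from scipy.optimize import linprog

# x[i] = tons moved by each (aircraft type, requirement) pair:
# x0: C-5 -> req A, x1: C-17 -> req A, x2: C-5 -> req B, x3: C-17 -> req B
c = [1.0, 1.4, 1.2, 1.1]          # cost per ton for each assignment

# Demand satisfaction (analogue of constraint (2)): deliver everything.
A_eq = [[1, 1, 0, 0],             # requirement A: 120 tons
        [0, 0, 1, 1]]             # requirement B:  80 tons
b_eq = [120, 80]

# Fleet capacity (analogue of constraint (3)).
A_ub = [[1, 0, 1, 0],             # C-5 fleet can lift 150 tons
        [0, 1, 0, 1]]             # C-17 fleet can lift 100 tons
b_ub = [150, 100]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)             # cheapest feasible assignment and its cost
```

The optimal plan serves requirement A entirely with the cheaper C-5s and requirement B with C-17s, for a total cost of 208; this is exactly the kind of cost-driven tradeoff (distance versus aircraft type) discussed above.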
The optimization model also responds in a natural way to changes in the input data (for example, an increase in the capacity of an airbase will not produce a decrease in overall throughput). But linear programming formulations suffer from weaknesses. A significant limitation is the difficulty of modeling complex system dynamics. For example, a simulation model can include logic such as "if there are four aircraft occupying all the parking spaces, a fifth aircraft will have to be pulled off the runway, where it cannot be refueled or repaired." In addition, linear programming models cannot directly model the evolution of information. This limits their use in analyzing strategies which directly affect uncertainty (What is the impact of last-minute requests on operational efficiency? What is the cost of sending an aircraft through an airbase with limited maintenance facilities, where the aircraft might break down?).

2.2 The AMOS Model

The AMOS model is a rule-based simulation model. Let

S_t = the state vector of the system at time t, giving what we know about the system at time t,
X^π(S_t) = the function that returns a decision x_t given information S_t when we are using policy π,
x_t = the vector of decisions,
ω_t = the exogenous information arriving between t−1 and t,
S^M(S_t, x_t, ω_{t+1}) = the transition function (sometimes called the system model) that gives the state S_{t+1} at time t+1, given that we are in state S_t, make decision x_t, and observe new exogenous information ω_{t+1}.

The policy π in the AMOS model requires a sequence of feasibility checks. In the TPFDD, the AMOS model picks the first available requirement and checks the first available aircraft to see whether this aircraft can deliver the requirement. In this feasibility check, the AMOS model examines whether the aircraft can handle the size of the cargo in the requirement, as well as whether the en-route and destination airbases can accommodate the aircraft.
If the aircraft cannot deliver the requirement, the AMOS model checks the second available aircraft, and so on, until it finds an aircraft that can deliver the requirement; that aircraft then serves the requirement. When the AMOS model finishes the first requirement, it moves to the second requirement, and continues this procedure until it has finished all the requirements in the TPFDD. This policy is rule-based; it does not use a cost function to make decisions.
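This first-feasible scanning logic can be sketched compactly. The records and the three feasibility checks below are invented stand-ins for AMOS's far more detailed rules:

```python
# Sketch of an AMOS-style rule-based policy: scan requirements in TPFDD
# order and assign each to the first available aircraft that passes
# every feasibility check.  Data structures and checks are illustrative.
def feasible(aircraft, req):
    """Can this aircraft deliver this requirement?"""
    if req["tons"] > aircraft["capacity_tons"]:
        return False                      # cargo too large
    if req["pax"] > 0 and not aircraft["pax_configured"]:
        return False                      # people need passenger seats
    if aircraft["type"] not in req["airbase_can_handle"]:
        return False                      # airbases cannot accommodate type
    return True

def rule_based_policy(requirements, aircraft_list):
    """Return {requirement id: aircraft id} via first-feasible matching."""
    assignment = {}
    available = list(aircraft_list)
    for req in requirements:              # TPFDD order
        for ac in available:              # first available aircraft first
            if feasible(ac, req):
                assignment[req["id"]] = ac["id"]
                available.remove(ac)      # aircraft is now committed
                break
    return assignment

aircraft = [
    {"id": "AC1", "type": "C-5", "capacity_tons": 100, "pax_configured": False},
    {"id": "AC2", "type": "C-17", "capacity_tons": 70, "pax_configured": True},
]
requirements = [
    {"id": "R1", "tons": 60, "pax": 30, "airbase_can_handle": {"C-5", "C-17"}},
    {"id": "R2", "tons": 90, "pax": 0, "airbase_can_handle": {"C-5"}},
]
assignment = rule_based_policy(requirements, aircraft)
print(assignment)
```

Note that no cost is ever compared: the first aircraft to pass the checks wins, which is precisely why such rules can produce the odd results mentioned earlier.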

The evolution of states and decisions in the AMOS model is as follows. At time t, the state of the system, S_t, is known. Employing the policy π described above to make decisions given state S_t produces the decision x_t. After the decision is made at time t, and the new information at time t+1, ω_{t+1}, has arrived, the system moves to a new state S_{t+1} at time t+1. Thus, the AMOS model can be represented mathematically as:

x_t = X^{AMOS}(S_t),
S_{t+1} = S^M(S_t, x_t, ω_{t+1}),    (5)

where the policy π = AMOS refers to a series of rules that govern which aircraft should cover each requirement.

2.3 The SSDM Model

The Stochastic Sealift Deployment Model (SSDM) (Morton et al. (2002)) is a multi-stage stochastic mixed-integer program designed to hedge against potential enemy attacks on seaports of debarkation (SPODs), with probabilistic knowledge of the time, location and severity of the attacks. They test the model on a network with two SPODs (the number actually used in the Gulf War) and other locations, such as a seaport of embarkation (SPOE) and locations near the SPODs where ships wait to enter berths. To keep the model computationally tractable, they assume that only a single biological attack can occur. Thus, associated with each scenario ω is whether or not an attack occurs, and if it does occur, the time t_ω at which it occurs. We then let T_ω = {1, ..., t_ω − 1} be the set of time periods preceding an attack which occurs at time t_ω. This structure results in a scenario tree that grows linearly with the number of time periods, as illustrated in Figure 1. The stochastic programming model for SSDM can then be written:

min_x Σ_ω Σ_t p_ω c_{tω} x_{tω}    (6)

subject to:

A_t x_{tω} − Σ_{τ=1}^{t} Δ_{t−τ,t} x_{t−τ,ω} = R̂_t   for all t, ω
B_t x_{tω} ≤ u_{tω}   for all t, ω
x_{tω} = x_{tω′}   for all t ∈ T_ω ∩ T_{ω′}    (7)
x_{tω} ≥ 0 and integer   for all t, ω
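The linear growth of the single-attack scenario set, and the coupling implied by the non-anticipativity constraints (7), are easy to check numerically. A minimal sketch (the horizon T = 4 and the helper functions are invented for illustration, not taken from the SSDM implementation):

```python
# Enumerate the single-attack scenarios and the periods over which pairs
# of scenarios must share decisions (non-anticipativity).  Illustrative
# only; T = 4 is an invented horizon.
from itertools import combinations

def single_attack_scenarios(T):
    """A scenario is 'no attack' (None) or an attack time t_w in 1..T.
    The scenario count grows linearly in T, not exponentially."""
    return [None] + list(range(1, T + 1))

def shared_periods(w1, w2, T):
    """Periods t in T_w1 ∩ T_w2, where constraint (7) forces
    x_{t,w1} = x_{t,w2}: scenarios look identical until the earlier
    attack time reveals which branch of the tree we are on."""
    t1 = w1 if w1 is not None else T + 1   # 'no attack' never diverges early
    t2 = w2 if w2 is not None else T + 1
    return list(range(1, min(t1, t2)))

T = 4
scenarios = single_attack_scenarios(T)
print(len(scenarios))                      # 5 scenarios for T = 4
links = sum(len(shared_periods(a, b, T))
            for a, b in combinations(scenarios, 2))
print(links)                               # number of equality links in (7)
```

With T = 4 there are only 5 scenarios, illustrating why this structure stays tractable while a general scenario tree would grow exponentially.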

[Figure 1: The single-attack scenario tree used in SSDM]

Here ω is the index for scenarios and p_ω is the probability of scenario ω. t, τ, A_t, Δ_{t−τ,t}, B_t and R̂_t are the same as introduced in the NRMO model. u_{tω} is the upper bound on flows, where the SPOD cargo-handling capacities vary under different attack scenarios. c_{tω} is the cost vector and x_{tω} is the decision vector. The non-anticipativity constraints (7) say that if scenarios ω and ω′ share the same branch of the scenario tree at time t, the decisions at time t must be the same for those scenarios. In general, a stochastic programming model is much larger than a deterministic linear programming model, and it still struggles with the same difficulty in modeling complex system dynamics. One has to design the scenarios carefully to avoid exponential growth in the number of scenarios. The value of the stochastic programming formulation is that it provides for the analysis of certain forms of uncertainty (such as which SPOD or SPODs might be attacked and the nature of the attack) so that robust strategies can be derived. Such problems do not require a massive number of scenarios, and do not call for a high level of detail in the model.

2.4 Comparison of the NRMO, AMOS, SSDM and O-S models

The characteristics of the NRMO, AMOS, SSDM and O-S models are listed in Table 1. In this table, we focus on the model (that is, the representation of the physical problem) rather than the algorithm used to solve it. The primary distinguishing feature of these models is how they capture the flow of information, but they also differ in areas such as model flexibility and responsiveness to changes in input data.
The O-S representation can produce a linear programming model such as NRMO if we ignore evolving information processes, or a simulation model such as AMOS if we explicitly model the TPFDD as an evolving information process and code the appropriate rules for making decisions. As such, the O-S representation provides a mathematical representation that spans optimization and simulation.

Table 1: Characteristics of the NRMO, AMOS, SSDM and O-S models

Category: NRMO, large-scale linear programming; AMOS, simulation; SSDM, multi-stage stochastic programming; O-S, optimizing-simulator.

Information processes: NRMO, requires knowing all information within T at time 0, and cannot distinguish between knowable time t and actionable time t′ ≥ t; AMOS, assumes the actionable time equals the knowable time, and can, but does not, distinguish between knowable time t and actionable time t′ ≥ t; SSDM, actionable time equals knowable time, and at time t knows the information that is actionable at time t′ ≥ t; O-S, general modeling of knowable and actionable times.

Attribute space: NRMO, multicommodity flow (attributes include aircraft type and location); AMOS, multi-attribute (location, fuel level, maintenance); SSDM, homogeneous ships and cargo, extendable to multiple ship types; O-S, general resource attributes.

Complexity of system dynamics: NRMO, linear systems of equations; AMOS, complex system dynamics; SSDM, simple linear systems of equations; O-S, complex system dynamics.

Number of scenarios: NRMO, single; AMOS, may be multiple; SSDM, multiple; O-S, may be multiple.

Decision selection: NRMO, cost-based; AMOS, rule-based; SSDM, cost-based; O-S, spans from rule-based to cost-based.

Information modeling: NRMO, assumes that everything is known; AMOS, myopic, local information; SSDM, assumes the probability distribution of scenarios is known; O-S, general modeling of information.

Model behavior: NRMO, reacts to data changes intelligently, but not necessarily robustly across random events; AMOS, noisy response to changes in input data; SSDM, similar to LP, but produces robust allocations; O-S, can react with intelligence, will display some noise characteristic of simulation, and is robust.

3 Modeling the military airlift problem

In the military airlift problem, information arrives continuously over time while decisions are made at discrete instances in time, so time is modeled as depicted in Figure 2. Time t = 0 is special

[Figure 2: The modeling of time and the system evolution]

in that it represents "here and now" with the information that is available now. The discrete time period t refers to the time interval from t−1 to t. This means that any variable indexed by t has access to all the information up to time t, including the information that arrived during the time interval between t−1 and t. We use the superscript A to denote aircraft and the superscript R to denote requirements. To model the military airlift problem, we define:

Network variables:
J = the set of airbases to which the aircraft can go; these airbases are located all over the world. Airbase j ∈ J.
T = the horizon of the simulation; T = {0, 1, ..., T} is the discrete set of time periods that we simulate. Time t ∈ T.
T^{ph}_t = the set of time periods within the planning horizon for a problem being solved at time t.

Resources:
A = the attribute space of resources (aircraft and requirements).
A^A = the attribute space of aircraft.
A^R = the attribute space of requirements.
a = the attribute vector of a resource. If a ∈ A^A, then a is the attribute vector of an aircraft; if a ∈ A^R, then a is that of a requirement. As an example,

a = (C-17 (aircraft type), 50 tons (capacity), 40 tons (loaded weight), EDAR (current airbase)) ∈ A^A,
a = (2 (requirement ID), 200 tons (weight), KBOS (origin airbase), OEDF (destination airbase)) ∈ A^R.
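Attribute vectors of this kind map naturally onto immutable record types. The following sketch mirrors the two example vectors above; the field names are illustrative choices, not taken from an actual implementation:

```python
# Illustrative attribute vectors a in A^A (aircraft) and A^R
# (requirements).  frozen=True makes instances hashable, so they can
# serve as dictionary keys for resource counts indexed by attribute.
from dataclasses import dataclass

@dataclass(frozen=True)
class AircraftAttr:
    aircraft_type: str     # e.g. "C-17"
    capacity_tons: float   # total lift capacity
    loaded_tons: float     # weight currently on board
    airbase: str           # current airbase (ICAO code)

@dataclass(frozen=True)
class RequirementAttr:
    req_id: int
    weight_tons: float
    origin: str            # origin airbase
    destination: str       # destination airbase

a_aircraft = AircraftAttr("C-17", 50.0, 40.0, "EDAR")
a_req = RequirementAttr(2, 200.0, "KBOS", "OEDF")
remaining_lift = a_aircraft.capacity_tons - a_aircraft.loaded_tons
print(remaining_lift)      # spare capacity of this aircraft, in tons

# Hashability lets attribute vectors index resource counts such as R_tt'a:
resource_count = {a_aircraft: 1, a_req: 1}
```

Making the records hashable is the design point: the resource counts R defined next are indexed by attribute vector, so attribute vectors must be usable as keys.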

R_{tt'a} = The number of resources that are knowable at the end of time t (before the decision at time t is made) and that will be actionable with attribute vector a ∈ A at time t' >= t. Here, t is the knowable time and t' the actionable time. For aircraft, the unit of R is pounds of capacity; for requirements, the unit is pounds of goods.
R_{tt'} = (R_{tt'a})_{a ∈ A} = (R^A_{tt'}, R^R_{tt'})
R_t = (R_{tt'})_{t' >= t} = (R^A_t, R^R_t)

Exogenous information:

W_t = The exogenous information process at time t ∈ T. \hat R_{tt'a} below is an example of W_t.
Ω = The set of possible outcomes of the exogenous information W_t, t ∈ T.
ω = One sample of the exogenous information (ω ∈ Ω).
\hat R_{tt'a} = The number of new resources that first become known during time t and that will first become actionable with attribute vector a ∈ A at time t' >= t.
\hat R_{tt'} = (\hat R_{tt'a})_{a ∈ A} = (\hat R^A_{tt'}, \hat R^R_{tt'})
\hat R_t = (\hat R_{tt'})_{t' >= t} = (\hat R^A_t, \hat R^R_t)

Decisions:

D^{A,c} = {d^c_{aa'}, a ∈ A^A, a' ∈ A^R}. The set of decisions that couple aircraft to requirements (that is, load or pick up).
D^{A,m} = {d^m_{aj}, a ∈ A^A, j ∈ J}. The set of decisions that move aircraft to airbases.
d^φ = The decision to do nothing (hold).
D^A = D^{A,c} ∪ D^{A,m} ∪ {d^φ}. The set of decisions that can be applied to aircraft.
D^A_a = The set of decisions that can be applied to an aircraft with attribute vector a ∈ A^A.

Decision functions and policies:

x_{tad} = The number of times that we apply a decision d ∈ D^A_a to an aircraft with attribute vector a ∈ A^A during time t.
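As an illustration, the resource vector R_t can be stored sparsely as a mapping from (actionable time t', attribute vector a) to a quantity. This is a hypothetical sketch; the helper name actionable_now and the tuple attribute encoding are our own, not notation from the model:

```python
from collections import defaultdict

# Sparse pre-decision resource state R_t: (t', a) -> quantity.
# Attributes are encoded here as (aircraft type, airbase) tuples.
R_t = defaultdict(float)
R_t[(0, ("C-17", "EDAR"))] += 50.0  # knowable now, actionable now
R_t[(2, ("C-17", "KBOS"))] += 50.0  # knowable now, actionable at t' = 2

def actionable_now(R, t):
    """Resources whose actionable time equals t (the R_tt slice)."""
    return {a: q for (tp, a), q in R.items() if tp == t}
```

A myopic policy restricted to "known now, actionable now" information would only see the output of actionable_now, while a policy using the full R_t also sees resources actionable in the future.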

x_t = {x_{tad}}_{a ∈ A^A, d ∈ D^A_a}
δ_{t'a'}(t, a, d) = 1 if decision d, acting on an aircraft with attribute vector a ∈ A^A at time t, produces an aircraft with attribute vector a' ∈ A^A at time t'; 0 otherwise.
I_t = The information used to make a decision at time t.
Π = The set of policies. Policy π ∈ Π.
X^π_t(I_t) = Under policy π ∈ Π, the function that determines the set of decisions x_t at time t based on the information I_t.

It is common to define S_t as the state of the system, which captures all the information needed to make a decision. In this paper, we let S_t represent the state of knowledge (what we know about the system now). By contrast, I_t, the information we use to make a decision, may include up to three types of forecast: forecasts of exogenous information (e.g., new customer demands), forecasts of the impact of a decision now on the future (captured as value functions from dynamic programming), and forecasts of decisions (represented in the form of low-dimensional patterns). Each of these information classes is described in greater detail below.

The post-decision variables:

R^x_{tt'a} = The number of resources that are knowable at the end of time period t, after the decision x_t is made, and that will be actionable with attribute vector a ∈ A at time t'. It is the post-decision state variable at time t.
R^x_{tt'} = (R^x_{tt'a})_{a ∈ A} = (R^{A,x}_{tt'}, R^{R,x}_{tt'})
R^x_t = (R^x_{tt'})_{t' >= t} = (R^{A,x}_t, R^{R,x}_t)

Contributions (negative costs):

c_{tad} = The contribution of applying decision d to an aircraft with attribute a ∈ A^A at time t.
c_t = {c_{tad}}_{a ∈ A^A, d ∈ D^A_a}
C_t(x_t) = The total contribution due to x_t in time period t. If the contribution is linear, C_t(x_t) = c_t x_t = Σ_{a ∈ A^A} Σ_{d ∈ D^A_a} c_{tad} x_{tad}.

Note that in the remainder of the paper we use costs and contributions interchangeably: we use costs to refer to negative contributions and contributions to refer to negative costs. The decisions

x_t are returned by our decision function X^π_t(I_t), so our challenge is not to choose the best decisions but rather to choose the best policy π ∈ Π, where Π is the family of policies (decision functions) from which we can choose. For now, the information at time t consists of the resources R_t and the contributions C_t, that is, I_t = (R_t, C_t). Thus, our optimization problem is given by:

max_{π ∈ Π} E[ Σ_{t ∈ T} C_t(X^π_t(R_t)) ]    (8)

At each time t, we have to choose x_t ∈ X_t, t ∈ T, where X_t is the feasible set at time t:

Σ_{d ∈ D^A_a} x_{tad} = R_{tta}    ∀ a ∈ A^A    (9)
Σ_{a ∈ A^A : d = d^c_{aa'}} x_{tad} <= R_{tta'}    ∀ a' ∈ A^R    (10)
x_{tad} >= 0    ∀ a ∈ A^A, d ∈ D^A_a    (11)
R^x_{tt'a'} = R_{tt'a'} + Σ_{a ∈ A^A} Σ_{d ∈ D^A_a} δ_{t'a'}(t, a, d) x_{tad}    ∀ t' > t, a' ∈ A^A    (12)
R^x_{tt'a'} = R_{tt'a'} + 1_{{t' = t+1}} ( R_{tta'} - Σ_{a ∈ A^A : d = d^c_{aa'}} x_{tad} )    ∀ t' > t, a' ∈ A^R    (13)
R_{t+1,t'a} = R^x_{tt'a} + \hat R_{t+1,t'a}    ∀ t' > t, a ∈ A^A ∪ A^R    (14)

Equation (9) is the flow balance constraint for aircraft. Equation (10) is the requirement bundling constraint, which states that the total weight of a requirement is not less than the sum of the weights of that requirement assigned to aircraft. It is this constraint that causes the model to not be a network flow problem. Equation (11) is the non-negativity constraint for the aircraft. Equations (12), (13) and (14) are the system dynamics. Equation (12) (for aircraft) and equation (13) (for requirements) state that the post-decision state at time t (after the decision at time t is made) consists of what we know at the end of time period t about the status of resources for time t', before the decision is made, plus the endogenous changes (the decisions) to resources for time t' that are made during time period t. Equation (14) states that the state at time t+1, before the decision at time t+1 is made, consists of the post-decision state at time t plus the exogenous changes to resources for time t' that arrive during time t+1.
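The aircraft dynamics can be sketched in a few lines of code. This is a minimal illustration of the component form of equations (12) and (14): each slot of the vectors corresponds to one (t', a') pair, and the matrix Delta_t holds the δ_{t'a'}(t, a, d) transfer coefficients. The vector encoding and the numbers are illustrative assumptions:

```python
# Hypothetical sketch: post-decision and next pre-decision resource states.

def post_decision(R_future, Delta_t, x_t):
    """Eq. (12): R^x = R + Delta_t x, adding the endogenous changes."""
    return [r + sum(row[j] * x_t[j] for j in range(len(x_t)))
            for r, row in zip(R_future, Delta_t)]

def next_pre_decision(R_x, R_hat_next):
    """Eq. (14): R_{t+1} = R^x_t + \\hat R_{t+1}, adding exogenous arrivals."""
    return [rx + rh for rx, rh in zip(R_x, R_hat_next)]

R_future = [1.0, 0.0]        # resources already actionable at future times
Delta_t = [[1, 0], [0, 1]]   # each decision routes one unit into one slot
x_t = [2.0, 3.0]             # decisions made at time t

R_x = post_decision(R_future, Delta_t, x_t)
R_next = next_pre_decision(R_x, [0.5, 0.0])  # 0.5 units arrive exogenously
```

This mirrors the matrix form given next in the text: the post-decision state is a linear function of the decisions, and new information enters only between decision epochs.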
Rewriting the system dynamics (12) and (14) for aircraft in matrix notation, we have:

R^x_t = (R_{tt'})_{t' > t} + Δ_t x_t    (15)
R_{t+1} = R^x_t + \hat R_{t+1}    (16)

where Δ_t is a matrix whose elements, in the appropriate positions, are δ_{t'a'}(t, a, d) for t' > t, a' ∈ A^A, a ∈ A^A, d ∈ D^A_a.

At the beginning of time period t, we have the post-decision resources R^x_{t-1} from the end of the previous time period. As the new information at time t, \hat R_t, arrives, we obtain the pre-decision system state R_t. We make the decisions x_t based on R_t. After the decisions are made, the post-decision resources R^x_t are available at the end of time period t. Thus, the evolution of information, as in Figure 2, is

{R_0, x_0, R^x_0, ..., \hat R_{t-1}, R_{t-1}, x_{t-1}, R^x_{t-1}, \hat R_t, R_t, x_t, R^x_t, ...}

where {\hat R_1, ..., \hat R_{t-1}, \hat R_t, ...} is a sample realization of the random processes {W_t : t ∈ T}.

4 A spectrum of decision functions

To build a decision function, we have to specify the information that is available to us when we make a decision. There are four fundamental classes of information that we may use in making a decision:

Knowledge (K_t) - Exogenously supplied information on the state of the system (resources and parameters).

Forecasts of exogenous information processes (Ω_t) - Forecasts of future exogenous information. Ω_t can be a set of future realizations of information (which produces a stochastic model), or it may have a single realization (when we are using a point forecast of the future).

Forecasts of the impact of decisions on other subproblems (\bar V_t) - These are functions which capture the impact of decisions made at one point in time on another.

Forecasts of decisions (\hat ρ) - Patterns of behavior derived from a source of expert knowledge. These represent a form of forecast of decisions at some level of aggregation.

Our view of information, then, consists of one class of exogenously supplied information (data knowledge) and three classes of forecasts. In the remainder of this section, we provide examples and explain the general notation given above. For each information set, there is a different class of decision functions (policies).
Let Π be the set of all possible decision functions that may be specified. In this paper, we investigate five

subclasses of Π: rule-based policies, myopic cost-based policies, rolling horizon policies, approximate dynamic programming policies and expert knowledge policies. These can be classified into three broad groups: rule-based, cost-based and hybrid. The policies we consider are as follows:

Rule-based policies:

Π^RB = The class of rule-based policies (missing information on costs).

Cost-based policies:

Π^MP = The class of myopic cost-based policies (includes cost information, but limited to information that is knowable now).
Π^RH = The class of rolling horizon policies (includes forecasted information). Note that if |Ω| = 1, we get a classical deterministic rolling horizon policy, which is widely used in dynamic settings.
Π^ADP = The class of approximate dynamic programming policies. We use a functional approximation to capture the impact of decisions at one point in time on the rest of the system.

Hybrid policies:

Π^EK = The class of policies that use expert knowledge. This is represented using low-dimensional patterns (such as "avoid sending C-5Bs on routes through India") expressed in the vector \hat ρ. Policies in Π^EK combine rules (in the form of low-dimensional patterns) and costs, and therefore represent a hybrid policy.

In the sections below, we describe the information content of different policies. The following short-hand notation is used:

RB - Rule-based (the policy uses a rule rather than a cost function). All policies that are not rule-based use an objective function.
MP - Myopic policy, which uses only information that is known and actionable now.
R - A single requirement.
RL - A list of requirements.
A - A single aircraft.

AL - A list of aircraft.
KNAN - Known now, actionable now: policies that only use resources (aircraft and requirements) that are actionable now.
KNAF - Known now, actionable future: policies that use information about resources that are actionable in the future.
RH - Rolling horizon policy, which uses forecasts of activities that might happen in the future.
ADP - Approximate dynamic programming: policies that use an approximation of the value of resources (in particular, aircraft) in the future.
EK - Expert knowledge: policies that use patterns to guide behavior.

These abbreviations are used to specify the information content of a policy. The remainder of the section describes the spectrum of policies that are used in the optimizing-simulator.

4.1 Rule-based policies

Our rule-based policy is denoted (RB:R-A) (rule-based, one requirement and one aircraft). From the Time-Phased Force Deployment Data (TPFDD), we pick the first available requirement and check the first available aircraft to see whether this aircraft can deliver the requirement. In this feasibility check, we examine whether the aircraft can handle the size of the cargo in the requirement, and whether the en-route and destination airbases can accommodate the aircraft. If the aircraft cannot deliver the requirement, we check the second available aircraft, and so on, until we find an aircraft that can deliver this requirement. We then load the requirement onto that aircraft, move the aircraft along a route and update the remaining weight of the requirement. After the first requirement is finished, we move to the second requirement. We continue this procedure until all the requirements in the TPFDD are finished. See Figure 3 for an outline of this policy. This is a rule-based policy, which does not use a cost function to make decisions.
We use the information set I_t = R_{tt} in policy (RB:R-A); that is, only resources actionable at time t are used for the decision at time t.
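The (RB:R-A) procedure can be sketched as a pair of nested scans. This is a hypothetical simplification: can_deliver stands in for the feasibility checks described above (cargo size, en-route and destination airbases), routing is omitted, and each pass delivers at most one aircraft load:

```python
# Hypothetical sketch of the rule-based policy (RB:R-A).

def can_deliver(aircraft, req):
    # Stand-in for the real feasibility check (cargo size, airbase capability).
    return aircraft["capacity"] > 0

def rule_based_policy(requirements, aircraft_list):
    assignments = []  # (requirement id, aircraft id, tons moved)
    for req in requirements:                      # first available requirement
        while req["weight"] > 0:
            # first available aircraft that passes the feasibility check
            ac = next((a for a in aircraft_list if can_deliver(a, req)), None)
            if ac is None:
                break  # no feasible aircraft: leave the remainder unserved
            lift = min(ac["capacity"], req["weight"])
            req["weight"] -= lift  # update the remaining weight
            assignments.append((req["id"], ac["id"], lift))
    return assignments

reqs = [{"id": 1, "weight": 120.0}, {"id": 2, "weight": 30.0}]
fleet = [{"id": "C-17 #1", "capacity": 50.0}]
plan = rule_based_policy(reqs, fleet)
# requirement 1 needs three lifts (50, 50, 20 tons); requirement 2 needs one
```

Note that no cost function appears anywhere: the first feasible aircraft is always taken, which is exactly what distinguishes this policy from the cost-based policies that follow.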

Step 1: Pick the first available remaining requirement in the TPFDD.
Step 2: Pick the first available remaining aircraft.
Step 3: Perform the following feasibility check: Can the aircraft handle the requirement? Can the en-route and destination airbases accommodate the aircraft?
Step 4: If the answers in Step 3 are all yes, deliver the requirement with that aircraft through a route chosen from the route file, and update the remaining weight of that requirement.
Step 5: If that requirement is not finished, go to Step 2.
Step 6: If there are remaining requirements, go to Step 1.

Figure 3: Policy (RB:R-A)

4.2 Myopic cost-based policies

In this section, we describe a series of myopic, cost-based policies which differ in how many requirements and aircraft are considered at the same time.

Policy (MP:R-AL)

In policy (MP:R-AL) (myopic policy, one requirement and a list of aircraft), we choose the first requirement that needs to be moved, and then create a list of potential aircraft that might be used to move the requirement. Now we have a decision vector instead of a scalar (as occurred in our rule-based system). As a result, we need a cost function to identify the best out of a set of decisions. Given a cost function, finding the best aircraft out of a set is a trivial sort. However, developing a cost function that captures the often unstated behaviors of a rule-based policy can be a surprisingly difficult task.

There are two flavors of policy (MP:R-AL), namely known now and actionable now (MP:R-AL/KNAN) and known now, actionable in the future (MP:R-AL/KNAF). In policy (MP:R-AL/KNAN), the information set I_t = (R_{tt}, C_t) is used, and in policy (MP:R-AL/KNAF), the information set I_t = ((R_{tt'})_{t' >= t}, C_t) = (R_t, C_t) is used. We explicitly include the costs as part of the information set. Solving the problem requires that we sort the aircraft by contribution and choose the one with the highest contribution.

Policy (MP:RL-AL)

Figure 4: Illustration of the information content of myopic policies. (a) The rule-based policy (RB:R-A) considers only one aircraft and one requirement at a time. (b) The policy (MP:R-AL) considers one requirement but sorts over a list of aircraft. (c) The policy (MP:RL-AL) works with a requirement list and an aircraft list at the same time.

This policy is a direct extension of the policy (MP:R-AL). However, now we are matching a list of requirements to a list of aircraft, which requires that we solve a linear program instead of a simple sort. The linear program (actually, a small integer program) has to balance the needs of multiple requirements and aircraft.

There are again two flavors of policy (MP:RL-AL), namely known now and actionable now (MP:RL-AL/KNAN) and known now, actionable in the future (MP:RL-AL/KNAF). In policy (MP:RL-AL/KNAN), the information set is I_t = (R_{tt}, C_t), and in policy (MP:RL-AL/KNAF), the information set is I_t = ((R_{tt'})_{t' >= t}, C_t) = (R_t, C_t). If we include aircraft and requirements that are known now but actionable in the future, then decisions that involve these resources represent plans that may be changed in the future.

We may present the subproblem for time t under policy π ∈ Π^MP as:

X^π_t(R_t) = arg max_{x_t ∈ X_t} C_t(x_t)

Figure 4 illustrates the rule-based and myopic cost-based policies.

4.3 Rolling horizon policy

Our next step is to bring into play forecasted activities, which refer to information that is not known now. A rolling horizon policy considers not only all aircraft and requirements that are known now (and possibly actionable in the future), but also forecasts of what might become known, such as

a new requirement that will enter the system in two days. In the rolling horizon policy, we use the information that might become available within the planning horizon: I_t = {(R_{t't''})_{t'' >= t'}, C_{t'}, t' ∈ T^ph_t}. The structure of the decision function is typically the same as with a myopic policy (but with a larger set of resources). However, for practical reasons we generally require the information set I_t to be treated deterministically. As a result, all forecasted information has to be modeled using point estimates. If we used more than one outcome, we would be solving sequences of stochastic programs. We may present the subproblem for time t under policy π ∈ Π^RH as:

X^π_t((R_{t'})_{t' ∈ T^ph_t}) = arg max Σ_{t' ∈ T^ph_t} C_{tt'}(x_{tt'})

4.4 Approximate dynamic programming policy

An approximate dynamic programming policy employs a value function approximation to evaluate the impact of decisions made in one time period on another time period. The value function measures the reward of the resources in a given state. We define:

V_t(R_t) = The value function of the resource vector R_t at time t.

We are not able to compute the value function exactly, so we replace it with an approximation \bar V_t(R_t), which might be linear in the resource vector:

\bar V_t(R_t) = Σ_{t' > t} Σ_{a} \bar v_{tt'a} R_{tt'a}

To illustrate the use of the value function, consider the example of a requirement that has to move from England to Australia using either a C-17 or a C-5B. The problem is that the route from England to Australia passes through airbases that are not prepared to repair a C-5B if it breaks down, which might happen with a 10 to 20 percent probability. When a breakdown occurs, additional costs are incurred to complete the repair, and the aircraft is delayed, possibly producing a late delivery (and penalties for a late delivery). Furthermore, the amount of delay, and the cost of the breakdown, can depend on whether there are other comparable aircraft present at those airbases at the time.
A cost-based policy might assign an aircraft to move this requirement based on the cost of getting the aircraft to the requirement. It is also possible to include the cost of moving the load,

but this gets more complicated when the additional costs of using the C-5B are purely related to random elements. A rule-based policy would tend to either ignore the issue, or include a rule that completely prevents the use of C-5Bs on certain routes. We could resort to Monte Carlo sampling to represent the random events, but the decision of which aircraft to use needs to be made before we know whether the aircraft will break down (or the status of the intermediate airbases). Monte Carlo methods are useful for simulating the dynamics of the system, but are more limited when estimating information that is not available when making a decision.

Approximate dynamic programming computes an approximation of the expected value of, say, a C-5B in England, which reflects both the costs and the potential delays that might be incurred if the C-5B is used on requirements going to Australia. The values \bar v_{tt'a} would be computed using observations of the value of a C-5B over multiple iterations. These values can use Monte Carlo samples of breakdowns, including both repair costs and penalties for late deliveries. The statistics \bar v_{tt'a} represent an average over different samples, and as such approximate an expectation. A brief description of how the value function is estimated is given later in this section.

In policy (ADP), we represent the information set as I_t = {(R_{tt'})_{t' >= t}, C_t, \bar V_t}. In the remainder of this section, we provide a brief introduction to the computation of value function approximations for the military airlift problem. It is well known that problems of the form (8) can be solved via Bellman's optimality equations (see, for example, Puterman (1994)):

V_t(R_t) = max_{x_t ∈ X_t} { C_t(R_t, x_t) + E[ V_{t+1}(R_{t+1}) | R_t ] }

where X_t is the feasible region for time period t. The challenge is in computing the value function V_t(R_t).
In this section, we briefly describe a strategy for approximately solving these large-scale dynamic programs in a computationally tractable way. The discussion here is complete, but brief; for a more detailed presentation, see Godfrey & Powell (2002) and Powell & Topaloglu (2003). Our algorithmic approach involves several strategies. We begin by replacing the value function with a continuous approximation, which we denote by \bar V_t(R_t). This allows us to write:

\hat V_t(R_t) = max_{x_t ∈ X_t} { C_t(R_t, x_t) + E[ \bar V_{t+1}(R_{t+1}) | R_t ] }    (17)
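To make the estimation step concrete, here is a hypothetical sketch of fitting the coefficients \bar v of a linear value function approximation by iterative smoothing: at each iteration, a Monte Carlo observation v̂ of the marginal value of a resource is blended into the current estimate with a declining stepsize. The sampling model (a noisy constant standing in for sampled repair costs and delay penalties) and the attribute labels are purely illustrative:

```python
import random

random.seed(0)

# Illustrative attribute vectors (aircraft type at an airbase).
attrs = ["C-5B @ EGUN", "C-17 @ EDAR"]

# \bar v estimates for one (t, t') pair, initialized to zero.
vbar = {a: 0.0 for a in attrs}

for n in range(1, 101):
    alpha = 1.0 / n  # stepsize 1/n makes vbar the running average of samples
    for a in attrs:
        # Monte Carlo observation of the marginal value of one more unit of
        # capacity with attribute a. In the airlift model this would come
        # from simulating breakdowns, repair costs and late-delivery
        # penalties; here a noisy constant is a stand-in.
        vhat = 10.0 + random.gauss(0.0, 1.0)
        # Smoothing update: vbar <- (1 - alpha) vbar + alpha vhat.
        vbar[a] = (1 - alpha) * vbar[a] + alpha * vhat

# Each vbar[a] now approximates the expectation of its samples (close to 10),
# which is how the statistics approximate an expectation over outcomes.
```

In the optimizing-simulator, these smoothed estimates play the role of E[\bar V_{t+1}(R_{t+1}) | R_t] in equation (17): the decision problem at time t sees the averaged downstream value of each resource rather than any single random outcome.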


More information

MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti

MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti 1 MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti Historical background 2 Original motivation: animal learning Early

More information

Technical Description for Aventus NowCast & AIR Service Line

Technical Description for Aventus NowCast & AIR Service Line Technical Description for Aventus NowCast & AIR Service Line This document has been assembled as a question and answer sheet for the technical aspects of the Aventus NowCast Service. It is designed to

More information

Approximate dynamic programming in transportation and logistics: a unified framework

Approximate dynamic programming in transportation and logistics: a unified framework EURO J Transp Logist (2012) 1:237 284 DOI 10.1007/s13676-012-0015-8 TUTORIAL Approximate dynamic programming in transportation and logistics: a unified framework Warren B. Powell Hugo P. Simao Belgacem

More information

Stochastic prediction of train delays with dynamic Bayesian networks. Author(s): Kecman, Pavle; Corman, Francesco; Peterson, Anders; Joborn, Martin

Stochastic prediction of train delays with dynamic Bayesian networks. Author(s): Kecman, Pavle; Corman, Francesco; Peterson, Anders; Joborn, Martin Research Collection Other Conference Item Stochastic prediction of train delays with dynamic Bayesian networks Author(s): Kecman, Pavle; Corman, Francesco; Peterson, Anders; Joborn, Martin Publication

More information

What you should know about approximate dynamic programming

What you should know about approximate dynamic programming What you should know about approximate dynamic programming Warren B. Powell Department of Operations Research and Financial Engineering Princeton University, Princeton, NJ 08544 December 16, 2008 Abstract

More information

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS MURI Meeting June 20, 2001 Estimation and Optimization: Gaps and Bridges Laurent El Ghaoui EECS UC Berkeley 1 goals currently, estimation (of model parameters) and optimization (of decision variables)

More information

Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning

Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning Christos Dimitrakakis Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands

More information

Introduction to Queueing Theory with Applications to Air Transportation Systems

Introduction to Queueing Theory with Applications to Air Transportation Systems Introduction to Queueing Theory with Applications to Air Transportation Systems John Shortle George Mason University February 28, 2018 Outline Why stochastic models matter M/M/1 queue Little s law Priority

More information

Page No. (and line no. if applicable):

Page No. (and line no. if applicable): COALITION/IEC (DAYMARK LOAD) - 1 COALITION/IEC (DAYMARK LOAD) 1 Tab and Daymark Load Forecast Page No. Page 3 Appendix: Review (and line no. if applicable): Topic: Price elasticity Sub Topic: Issue: Accuracy

More information

A Mixed Integer Linear Program for Optimizing the Utilization of Locomotives with Maintenance Constraints

A Mixed Integer Linear Program for Optimizing the Utilization of Locomotives with Maintenance Constraints A Mixed Integer Linear Program for with Maintenance Constraints Sarah Frisch Philipp Hungerländer Anna Jellen Dominic Weinberger September 10, 2018 Abstract In this paper we investigate the Locomotive

More information

Uncertain Programming Model for Solid Transportation Problem

Uncertain Programming Model for Solid Transportation Problem INFORMATION Volume 15, Number 12, pp.342-348 ISSN 1343-45 c 212 International Information Institute Uncertain Programming Model for Solid Transportation Problem Qing Cui 1, Yuhong Sheng 2 1. School of

More information

Approximate Dynamic Programming: Solving the curses of dimensionality

Approximate Dynamic Programming: Solving the curses of dimensionality Approximate Dynamic Programming: Solving the curses of dimensionality Informs Computing Society Tutorial October, 2008 Warren Powell CASTLE Laboratory Princeton University http://www.castlelab.princeton.edu

More information

Slides 9: Queuing Models

Slides 9: Queuing Models Slides 9: Queuing Models Purpose Simulation is often used in the analysis of queuing models. A simple but typical queuing model is: Queuing models provide the analyst with a powerful tool for designing

More information

R O B U S T E N E R G Y M AN AG E M E N T S Y S T E M F O R I S O L AT E D M I C R O G R I D S

R O B U S T E N E R G Y M AN AG E M E N T S Y S T E M F O R I S O L AT E D M I C R O G R I D S ROBUST ENERGY MANAGEMENT SYSTEM FOR ISOLATED MICROGRIDS Jose Daniel La r a Claudio Cañizares Ka nka r Bhattacharya D e p a r t m e n t o f E l e c t r i c a l a n d C o m p u t e r E n g i n e e r i n

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Roman Barták Department of Theoretical Computer Science and Mathematical Logic Summary of last lecture We know how to do probabilistic reasoning over time transition model P(X t

More information

Inventory optimization of distribution networks with discrete-event processes by vendor-managed policies

Inventory optimization of distribution networks with discrete-event processes by vendor-managed policies Inventory optimization of distribution networks with discrete-event processes by vendor-managed policies Simona Sacone and Silvia Siri Department of Communications, Computer and Systems Science University

More information

RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY

RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY RISK AND RELIABILITY IN OPTIMIZATION UNDER UNCERTAINTY Terry Rockafellar University of Washington, Seattle AMSI Optimise Melbourne, Australia 18 Jun 2018 Decisions in the Face of Uncertain Outcomes = especially

More information

Mitigating end-effects in Production Scheduling

Mitigating end-effects in Production Scheduling Mitigating end-effects in Production Scheduling Bachelor Thesis Econometrie en Operationele Research Ivan Olthuis 359299 Supervisor: Dr. Wilco van den Heuvel June 30, 2014 Abstract In this report, a solution

More information

A Representational Paradigm for Dynamic Resource Transformation Problems

A Representational Paradigm for Dynamic Resource Transformation Problems A Representational Paradigm for Dynamic Resource Transformation Problems Warren B. Powell, Joel A. Shapiro and Hugo P. Simao January, 2003 Department of Operations Research and Financial Engineering, Princeton

More information

CS 6901 (Applied Algorithms) Lecture 3

CS 6901 (Applied Algorithms) Lecture 3 CS 6901 (Applied Algorithms) Lecture 3 Antonina Kolokolova September 16, 2014 1 Representative problems: brief overview In this lecture we will look at several problems which, although look somewhat similar

More information

Birgit Rudloff Operations Research and Financial Engineering, Princeton University

Birgit Rudloff Operations Research and Financial Engineering, Princeton University TIME CONSISTENT RISK AVERSE DYNAMIC DECISION MODELS: AN ECONOMIC INTERPRETATION Birgit Rudloff Operations Research and Financial Engineering, Princeton University brudloff@princeton.edu Alexandre Street

More information

A Unified Framework for Defining and Measuring Flexibility in Power System

A Unified Framework for Defining and Measuring Flexibility in Power System J A N 1 1, 2 0 1 6, A Unified Framework for Defining and Measuring Flexibility in Power System Optimization and Equilibrium in Energy Economics Workshop Jinye Zhao, Tongxin Zheng, Eugene Litvinov Outline

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle  holds various files of this Leiden University dissertation Cover Page The handle http://hdl.handle.net/1887/39637 holds various files of this Leiden University dissertation Author: Smit, Laurens Title: Steady-state analysis of large scale systems : the successive

More information

Artificial Intelligence & Sequential Decision Problems

Artificial Intelligence & Sequential Decision Problems Artificial Intelligence & Sequential Decision Problems (CIV6540 - Machine Learning for Civil Engineers) Professor: James-A. Goulet Département des génies civil, géologique et des mines Chapter 15 Goulet

More information

Chapter 6 Queueing Models. Banks, Carson, Nelson & Nicol Discrete-Event System Simulation

Chapter 6 Queueing Models. Banks, Carson, Nelson & Nicol Discrete-Event System Simulation Chapter 6 Queueing Models Banks, Carson, Nelson & Nicol Discrete-Event System Simulation Purpose Simulation is often used in the analysis of queueing models. A simple but typical queueing model: Queueing

More information

Modelling, Simulation & Computing Laboratory (msclab) Faculty of Engineering, Universiti Malaysia Sabah, Malaysia

Modelling, Simulation & Computing Laboratory (msclab) Faculty of Engineering, Universiti Malaysia Sabah, Malaysia 1.0 Introduction Intelligent Transportation Systems (ITS) Long term congestion solutions Advanced technologies Facilitate complex transportation systems Dynamic Modelling of transportation (on-road traffic):

More information

On the Optimality of Myopic Sensing. in Multi-channel Opportunistic Access: the Case of Sensing Multiple Channels

On the Optimality of Myopic Sensing. in Multi-channel Opportunistic Access: the Case of Sensing Multiple Channels On the Optimality of Myopic Sensing 1 in Multi-channel Opportunistic Access: the Case of Sensing Multiple Channels Kehao Wang, Lin Chen arxiv:1103.1784v1 [cs.it] 9 Mar 2011 Abstract Recent works ([1],

More information

Technical Description for Aventus NowCast & AIR Service Line

Technical Description for Aventus NowCast & AIR Service Line Technical Description for Aventus NowCast & AIR Service Line This document has been assembled as a question and answer sheet for the technical aspects of the Aventus NowCast Service. It is designed to

More information

. Introduction to CPM / PERT Techniques. Applications of CPM / PERT. Basic Steps in PERT / CPM. Frame work of PERT/CPM. Network Diagram Representation. Rules for Drawing Network Diagrams. Common Errors

More information

Applications of Linear Programming - Minimization

Applications of Linear Programming - Minimization Applications of Linear Programming - Minimization Drs. Antonio A. Trani and H. Baik Professor of Civil Engineering Virginia Tech Analysis of Air Transportation Systems June 9-12, 2010 1 of 49 Recall the

More information

Complexity Metrics. ICRAT Tutorial on Airborne self separation in air transportation Budapest, Hungary June 1, 2010.

Complexity Metrics. ICRAT Tutorial on Airborne self separation in air transportation Budapest, Hungary June 1, 2010. Complexity Metrics ICRAT Tutorial on Airborne self separation in air transportation Budapest, Hungary June 1, 2010 Outline Introduction and motivation The notion of air traffic complexity Relevant characteristics

More information

0.1 Motivating example: weighted majority algorithm

0.1 Motivating example: weighted majority algorithm princeton univ. F 16 cos 521: Advanced Algorithm Design Lecture 8: Decision-making under total uncertainty: the multiplicative weight algorithm Lecturer: Sanjeev Arora Scribe: Sanjeev Arora (Today s notes

More information

1 Modelling and Simulation

1 Modelling and Simulation 1 Modelling and Simulation 1.1 Introduction This course teaches various aspects of computer-aided modelling for the performance evaluation of computer systems and communication networks. The performance

More information

The Optimal Resource Allocation in Stochastic Activity Networks via The Electromagnetism Approach

The Optimal Resource Allocation in Stochastic Activity Networks via The Electromagnetism Approach The Optimal Resource Allocation in Stochastic Activity Networks via The Electromagnetism Approach AnabelaP.Tereso,M.MadalenaT.Araújo Universidade do Minho, 4800-058 Guimarães PORTUGAL anabelat@dps.uminho.pt;

More information

Lecture 8: Decision-making under total uncertainty: the multiplicative weight algorithm. Lecturer: Sanjeev Arora

Lecture 8: Decision-making under total uncertainty: the multiplicative weight algorithm. Lecturer: Sanjeev Arora princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 8: Decision-making under total uncertainty: the multiplicative weight algorithm Lecturer: Sanjeev Arora Scribe: (Today s notes below are

More information

A Column Generation Based Heuristic for the Dial-A-Ride Problem

A Column Generation Based Heuristic for the Dial-A-Ride Problem A Column Generation Based Heuristic for the Dial-A-Ride Problem Nastaran Rahmani 1, Boris Detienne 2,3, Ruslan Sadykov 3,2, François Vanderbeck 2,3 1 Kedge Business School, 680 Cours de la Libération,

More information

Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem

Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem Sara Lumbreras & Andrés Ramos July 2013 Agenda Motivation improvement

More information

Homework Assignment 4 Root Finding Algorithms The Maximum Velocity of a Car Due: Friday, February 19, 2010 at 12noon

Homework Assignment 4 Root Finding Algorithms The Maximum Velocity of a Car Due: Friday, February 19, 2010 at 12noon ME 2016 A Spring Semester 2010 Computing Techniques 3-0-3 Homework Assignment 4 Root Finding Algorithms The Maximum Velocity of a Car Due: Friday, February 19, 2010 at 12noon Description and Outcomes In

More information

JOINT PRICING AND PRODUCTION PLANNING FOR FIXED PRICED MULTIPLE PRODUCTS WITH BACKORDERS. Lou Caccetta and Elham Mardaneh

JOINT PRICING AND PRODUCTION PLANNING FOR FIXED PRICED MULTIPLE PRODUCTS WITH BACKORDERS. Lou Caccetta and Elham Mardaneh JOURNAL OF INDUSTRIAL AND doi:10.3934/jimo.2010.6.123 MANAGEMENT OPTIMIZATION Volume 6, Number 1, February 2010 pp. 123 147 JOINT PRICING AND PRODUCTION PLANNING FOR FIXED PRICED MULTIPLE PRODUCTS WITH

More information

Lecture 1: March 7, 2018

Lecture 1: March 7, 2018 Reinforcement Learning Spring Semester, 2017/8 Lecture 1: March 7, 2018 Lecturer: Yishay Mansour Scribe: ym DISCLAIMER: Based on Learning and Planning in Dynamical Systems by Shie Mannor c, all rights

More information

Operational Art: The Bridge Between Deterministic Combat Analysis and the actual Complex Adaptive System known as War

Operational Art: The Bridge Between Deterministic Combat Analysis and the actual Complex Adaptive System known as War Calhoun: The NPS Institutional Archive Faculty and Researcher Publications Faculty and Researcher Publications 2004-11 Operational Art: The Bridge Between Deterministic Combat Analysis and the actual Complex

More information

TRUCK DISPATCHING AND FIXED DRIVER REST LOCATIONS

TRUCK DISPATCHING AND FIXED DRIVER REST LOCATIONS TRUCK DISPATCHING AND FIXED DRIVER REST LOCATIONS A Thesis Presented to The Academic Faculty by Steven M. Morris In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in Industrial

More information

Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore

Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore Module No. # 05 Lecture No. # 22 Reservoir Capacity using Linear Programming (2) Good

More information

An Operational Evaluation of the Ground Delay Program (GDP) Parameters Selection Model (GPSM)

An Operational Evaluation of the Ground Delay Program (GDP) Parameters Selection Model (GPSM) An Operational Evaluation of the Ground Delay Program (GDP) Parameters Selection Model (GPSM) Lara Shisler, Christopher Provan Mosaic ATM Dave Clark MIT Lincoln Lab William Chan, Shon Grabbe NASA Ames

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably

More information

SCHEDULING PROBLEMS FOR FRACTIONAL AIRLINES

SCHEDULING PROBLEMS FOR FRACTIONAL AIRLINES SCHEDULING PROBLEMS FOR FRACTIONAL AIRLINES A Thesis Presented to The Academic Faculty by Fei Qian In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the H. Milton Stewart

More information

Robust Optimization in a Queueing Context

Robust Optimization in a Queueing Context Robust Optimization in a Queueing Context W. van der Heide September 2015 - February 2016 Publishing date: April 18, 2016 Eindhoven University of Technology Department of Mathematics & Computer Science

More information

Partially Observable Markov Decision Processes (POMDPs)

Partially Observable Markov Decision Processes (POMDPs) Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions

More information

Probabilistic TFM: Preliminary Benefits Analysis of an Incremental Solution Approach

Probabilistic TFM: Preliminary Benefits Analysis of an Incremental Solution Approach Probabilistic TFM: Preliminary Benefits Analysis of an Incremental Solution Approach James DeArmon, Craig Wanke, Daniel Greenbaum MITRE/CAASD 7525 Colshire Dr McLean VA 22102 Introduction The National

More information

Basics of Uncertainty Analysis

Basics of Uncertainty Analysis Basics of Uncertainty Analysis Chapter Six Basics of Uncertainty Analysis 6.1 Introduction As shown in Fig. 6.1, analysis models are used to predict the performances or behaviors of a product under design.

More information

Discrete-event simulations

Discrete-event simulations Discrete-event simulations Lecturer: Dmitri A. Moltchanov E-mail: moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/elt-53606/ OUTLINE: Why do we need simulations? Step-by-step simulations; Classifications;

More information

MFM Practitioner Module: Risk & Asset Allocation. John Dodson. February 18, 2015

MFM Practitioner Module: Risk & Asset Allocation. John Dodson. February 18, 2015 MFM Practitioner Module: Risk & Asset Allocation February 18, 2015 No introduction to portfolio optimization would be complete without acknowledging the significant contribution of the Markowitz mean-variance

More information

ORIGINS OF STOCHASTIC PROGRAMMING

ORIGINS OF STOCHASTIC PROGRAMMING ORIGINS OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces 20,1990

More information

Handout 1: Introduction to Dynamic Programming. 1 Dynamic Programming: Introduction and Examples

Handout 1: Introduction to Dynamic Programming. 1 Dynamic Programming: Introduction and Examples SEEM 3470: Dynamic Optimization and Applications 2013 14 Second Term Handout 1: Introduction to Dynamic Programming Instructor: Shiqian Ma January 6, 2014 Suggested Reading: Sections 1.1 1.5 of Chapter

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION

DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION Dynamic Systems and Applications 26 (2017) 575-588 DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION SHI AN WANG AND N. U. AHMED School of Electrical Engineering and Computer

More information

Chapter 7. Logical Complexity

Chapter 7. Logical Complexity Chapter 7 At the heart of the definition of complexity is the difficulty of making generalizations regarding the relationship between any two variables out of the context of all related variables. However,

More information

CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming

CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming Integer Programming, Goal Programming, and Nonlinear Programming CHAPTER 11 253 CHAPTER 11 Integer Programming, Goal Programming, and Nonlinear Programming TRUE/FALSE 11.1 If conditions require that all

More information

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016 Course 16:198:520: Introduction To Artificial Intelligence Lecture 13 Decision Making Abdeslam Boularias Wednesday, December 7, 2016 1 / 45 Overview We consider probabilistic temporal models where the

More information

Neural-wavelet Methodology for Load Forecasting

Neural-wavelet Methodology for Load Forecasting Journal of Intelligent and Robotic Systems 31: 149 157, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 149 Neural-wavelet Methodology for Load Forecasting RONG GAO and LEFTERI H. TSOUKALAS

More information

1 [15 points] Frequent Itemsets Generation With Map-Reduce

1 [15 points] Frequent Itemsets Generation With Map-Reduce Data Mining Learning from Large Data Sets Final Exam Date: 15 August 2013 Time limit: 120 minutes Number of pages: 11 Maximum score: 100 points You can use the back of the pages if you run out of space.

More information