INFORMS ANNUAL MEETING WASHINGTON D.C. 2008


1 Sensor Information Monotonicity in Disambiguation Protocols. Xugang Ye, Department of Applied Mathematics and Statistics, The Johns Hopkins University

2 Stochastic Ordering
Comparing two random numbers A, B. Three measures:
By expectation: A ≤_E B if E(A) ≤ E(B), where E(·) denotes expectation.
By quantile: A ≤_{Q,p} B if Q(A, p) ≤ Q(B, p), where Q(X, p) = inf{x : F_X(x) ≥ p} denotes the p-quantile.
By distribution: A ≤_D B if F_A(x) ≥ F_B(x) for all x, where F(x) = P(X ≤ x) denotes the distribution function.
The third measure is usually called stochastic ordering. Other notations include A ≤_STO B and A ≼ B. If strict inequality also holds for some x, the ordering is called strict, written A <_STO B or A ≺ B.

3 Stochastic Ordering
Properties:
A ≤_D B ⇒ E(A) ≤ E(B)
A ≤_D B ⇒ A ≤_{Q,p} B for any 0 ≤ p ≤ 1
E[f(A)] ≤ E[f(B)] for all non-decreasing functions f ⇒ A ≤_D B
A ≤_{Q,p} B for any 0 ≤ p ≤ 1 ⇒ A ≤_D B
A | C = c ≤_D B | C = c for any c ⇒ A ≤_D B
A ≤_D B and f is a non-decreasing function ⇒ f(A) ≤_D f(B)
Remarks: Ordering by expectation is the most stable, but there is a small possibility that the expectation is infinite. Ordering by quantile or by distribution function avoids the infinite-expectation problem, but both are harder to implement than the expectation measure because many more samples are needed to estimate the distribution function. Ordering by distribution function is the strongest.
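The three measures can also be checked empirically from samples. Below is a minimal sketch (Python with NumPy; the two sample cost distributions are hypothetical, not from the talk) that compares two random costs by sample mean, by empirical p-quantiles on a grid, and by empirical distribution functions on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical random costs: B is A plus a positive shift, so A should dominate B.
A = rng.gamma(shape=2.0, scale=5.0, size=10_000)
B = A + rng.exponential(scale=2.0, size=10_000)

# Ordering by expectation: compare sample means.
print("E-order:", A.mean() <= B.mean())

# Ordering by quantile: compare empirical p-quantiles on a grid of p.
ps = np.linspace(0.01, 0.99, 99)
print("Q-order:", np.all(np.quantile(A, ps) <= np.quantile(B, ps)))

# Ordering by distribution: F_A(x) >= F_B(x) for all x, checked on a grid.
xs = np.linspace(0.0, max(A.max(), B.max()), 500)
F_A = np.searchsorted(np.sort(A), xs, side="right") / A.size
F_B = np.searchsorted(np.sort(B), xs, side="right") / B.size
print("D-order:", np.all(F_A >= F_B))
```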

4 Stochastic Ordering
Ideal picture vs. real world.
Ideal picture: system input X, a function f, output Y = f(X). Given X₁ ≤_STO X₂, compare Y₁ and Y₂.
Real world: prior information X, a strategy (policy), new information, decisions, and a cost Y. Given that X₁ is stochastically better than X₂, compare Y₁ and Y₂.

5 Random Disambiguation Paths: The Simplest Example
A parallel graph from s to t with two arcs: a: l₀ (deterministic) and e: l, ρ, c, X (nondeterministic).
l₀: cost of going from s to t by the deterministic arc a
l: cost of going from s to t by the nondeterministic arc e if e is traversable
ρ: probability that e is not traversable
c: cost of disambiguating e
X: indicator of e, that is, P(X = 1) = ρ and P(X = 0) = 1 − ρ
There are two options: one is to go from s to t by a (the certain way); the other is to first disambiguate e, at a cost c, and then make a further decision according to the disambiguation result. After disambiguation, if X = 0, it is wise to choose between a and e the one with the smaller traveling cost; if X = 1, there is only one choice left, namely to go from s to t by a.
Question: What is the best policy to go from s to t?

6 Random Disambiguation Paths: Cost Analysis
Option 1: the cost is C₁ = l₀.
Option 2: the cost is C₂ = c + l if X = 0, and C₂ = c + l₀ if X = 1.
C₂ is random with mean E(C₂) = ρ(c + l₀) + (1 − ρ)(c + l).
Consider E(C₁) − E(C₂) = l₀ − ρ(c + l₀) − (1 − ρ)(c + l) = (1 − ρ)l₀ − c − (1 − ρ)l <Ponder on this!>
= (1 − ρ)(l₀ − (l + c/(1 − ρ)))
= (1 − ρ)((l₀ + 0/(1 − ρ)) − (l + c/(1 − ρ)))
A discovery: E(C₁) ≤ E(C₂) if and only if l₀ + 0/(1 − ρ) ≤ l + c/(1 − ρ).
A conclusion: choose Option 1 if l₀ + 0/(1 − ρ) ≤ l + c/(1 − ρ); choose Option 2 otherwise. This gives the optimal strategy under the expectation measure.
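A quick numerical illustration of this derivation (a sketch with hypothetical values of l₀, l, ρ, c; not code from the talk): it computes E(C₂) directly, compares it with E(C₁) = l₀, and confirms that the comparison agrees with the index rule l₀ vs. l + c/(1 − ρ).

```python
# Hypothetical parameters for the two-arc example.
l0, l, rho, c = 10.0, 6.0, 0.3, 1.5   # cost of a, cost of e, P(e blocked), disambiguation cost

E_C1 = l0                                    # Option 1: always take the deterministic arc a
E_C2 = rho * (c + l0) + (1 - rho) * (c + l)  # Option 2: disambiguate e first

index_a = l0                   # index of a: l0 + 0/(1 - rho)
index_e = l + c / (1 - rho)    # index of e: l + c/(1 - rho)

print("E(C1) =", E_C1, " E(C2) =", round(E_C2, 3))
# Per the derivation above, the two tests agree.
assert (E_C1 <= E_C2) == (index_a <= index_e)
print("Best option under the expectation measure:",
      "Option 1 (take a)" if index_a <= index_e else "Option 2 (disambiguate e)")
```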

7 Random Disambiguation Paths: A More Complicated Scenario
A parallel graph from s to t with a deterministic arc a: l₀ and nondeterministic arcs e₁: l₁, ρ₁, c₁, X₁; …; eₘ: lₘ, ρₘ, cₘ, Xₘ.
l₀: cost of going from s to t by the deterministic arc a
lᵢ: cost of going from s to t by the nondeterministic arc eᵢ if eᵢ is traversable
ρᵢ: probability that eᵢ is not traversable
cᵢ: cost of disambiguating eᵢ
Xᵢ: indicator of eᵢ, that is, P(Xᵢ = 1) = ρᵢ and P(Xᵢ = 0) = 1 − ρᵢ
Assume independence. There are now many options: there are (m + 1)! distinct policies, each denoted as a permutation of a, e₁, e₂, …, eₘ (these policies form a class called the balk-free class), and there are also many policies outside the balk-free class.
Question: What is the best policy to go from s to t?

8 Random Disambiguation Paths: m = 2
The dynamic programming search tree for finding the optimal policy for traversing the parallel graph with one deterministic arc a and two nondeterministic arcs e₁, e₂. Here ρ₁ = ρ(e₁), ρ₂ = ρ(e₂), l₀ = l(a), l₁ = l(e₁), l₂ = l(e₂), c₁ = c(e₁), c₂ = c(e₂). The root of the tree is the problem of evaluating E*({e₁, e₂}, a), which is recursively reduced into subproblems via conditioning.

9 Random Disambiguation Paths: An Optimal Policy
Sort l₀, l₁ + c₁/(1 − ρ₁), …, lₘ + cₘ/(1 − ρₘ) in ascending order (the deterministic arc a contributes the index l₀, since its disambiguation cost is 0); the corresponding ordered list a₁, a₂, …, aₘ₊₁, as a permutation of a, e₁, …, eₘ, defines an optimal policy under the expectation measure. To execute the policy, check a₁: if it is deterministic, traverse it; otherwise, disambiguate it. If the disambiguation result says it is traversable, traverse it; otherwise, check a₂, and continue this process until reaching t. Generally speaking, the policy solves shortest path problems in a dynamic manner. The proof of optimality under the expectation measure and the further theoretical development are tricky, but the result inspires a motivating heuristic method for more general settings and real-world applications.
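A minimal simulation sketch of this policy (hypothetical parameters, assuming the Xᵢ are independent as on slide 7; not the talk's code): the arcs are sorted by their indices and then checked in order, and the expected cost of the resulting policy is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parallel graph: deterministic arc a plus m nondeterministic arcs e_i.
l0 = 20.0                          # cost of the deterministic arc a
l = np.array([12.0, 14.0, 9.0])    # costs l_i if e_i is traversable
rho = np.array([0.4, 0.2, 0.6])    # P(e_i not traversable)
c = np.array([1.0, 2.0, 0.5])      # disambiguation costs c_i

# Index of each arc: l0 for a, l_i + c_i/(1 - rho_i) for e_i; sort ascending.
arcs = [("a", l0)] + [(i, l[i] + c[i] / (1 - rho[i])) for i in range(len(l))]
policy = sorted(arcs, key=lambda t: t[1])

def run_once():
    """Execute the sorted policy for one random realization of the X_i."""
    blocked = rng.random(len(l)) < rho
    cost = 0.0
    for arc, _ in policy:
        if arc == "a":
            return cost + l0         # deterministic arc: traverse it
        cost += c[arc]               # disambiguate e_arc
        if not blocked[arc]:
            return cost + l[arc]     # traversable: traverse it
    return cost + l0                 # defensive; never reached because a is always in the policy

est = np.mean([run_once() for _ in range(100_000)])
print("policy order:", [a for a, _ in policy], " estimated E(cost) ~", round(est, 3))
```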

10 Random Disambiguation Paths: Back to the Simplest Example
The same parallel graph from s to t: a: l₀ (deterministic); e: l, ρ, c, X, Y (nondeterministic).
l₀: cost of going from s to t by the deterministic arc a
l: cost of going from s to t by the nondeterministic arc e if e is traversable
ρ: probability that e is not traversable
c: cost of disambiguating e
X: indicator of e, that is, P(X = 1) = ρ and P(X = 0) = 1 − ρ
Suppose ρ is unknown, but there is another observable random variable Y ∈ (0, 1) such that
P(Y ≤ y | X = 0) = F₀(y) and P(Y ≤ y | X = 1) = F₁(y) (usually continuous distributions are assumed).
Suppose l₀ > l + c (the nontrivial case).
Policy: compare l₀ with l + c/(1 − Y). If l₀ < l + c/(1 − Y), then traverse a; otherwise, disambiguate e. If the disambiguation result shows X = 0, then traverse e; otherwise, traverse a.
The cost C of going from s to t is
C = c + l if l₀ > l + c/(1 − Y) and X = 0
C = l₀ if l₀ ≤ l + c/(1 − Y)
C = c + l₀ if l₀ > l + c/(1 − Y) and X = 1

11 Random Disambiguation Paths
Let α = (l₀ − l − c)/(l₀ − l). We can rewrite the cost function as
C = c + l if Y < α and X = 0
C = l₀ if Y ≥ α
C = c + l₀ if Y < α and X = 1
We can compute (via conditioning)
P(Y < α, X = 0) = P(Y < α | X = 0)P(X = 0) = (1 − ρ)F₀(α)
P(Y < α, X = 1) = P(Y < α | X = 1)P(X = 1) = ρF₁(α)
P(Y ≥ α) = P(Y ≥ α | X = 0)P(X = 0) + P(Y ≥ α | X = 1)P(X = 1) = (1 − ρ)[1 − F₀(α)] + ρ[1 − F₁(α)]
The cost distribution function is
F_C(x) = 0 if x < c + l
F_C(x) = (1 − ρ)F₀(α) if c + l ≤ x < l₀
F_C(x) = 1 − ρF₁(α) if l₀ ≤ x < c + l₀
F_C(x) = 1 if c + l₀ ≤ x
Ponder on this!

12 Stochastic Ordering
Prior information: Y, with Y | X = 0 ~ F₀ and Y | X = 1 ~ F₁. A strategy (policy) turns the prior information into decisions and a cost C. New information: X, with P(X = 1) = ρ and P(X = 0) = 1 − ρ.
Consider two sensors:
Y⁽¹⁾: Y⁽¹⁾ | X = 0 ~ F₀⁽¹⁾ and Y⁽¹⁾ | X = 1 ~ F₁⁽¹⁾
Y⁽²⁾: Y⁽²⁾ | X = 0 ~ F₀⁽²⁾ and Y⁽²⁾ | X = 1 ~ F₁⁽²⁾
Suppose F₀⁽¹⁾(y) ≥ F₀⁽²⁾(y) and F₁⁽¹⁾(y) ≤ F₁⁽²⁾(y), that is, Y⁽¹⁾ is stochastically at least as good as Y⁽²⁾, or the prior information is stochastically ordered.
Implication: the cost distribution functions satisfy F_C⁽¹⁾(x) ≥ F_C⁽²⁾(x) for any x > 0, hence C⁽¹⁾ ≤_D C⁽²⁾, i.e. the random costs are also stochastically ordered. It is also straightforward that E[C⁽¹⁾] ≤ E[C⁽²⁾] and C⁽¹⁾ ≤_{Q,p} C⁽²⁾.
Here comes an important concept: sensor monotonicity.
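The following sketch illustrates sensor monotonicity numerically. It evaluates the closed-form cost distribution from slide 11 for two sensors taken from the Beta family introduced later on slide 25 (the parametrization there is a reconstruction, so it is an assumption here), with hypothetical l₀, l, c, ρ, and checks that the better sensor's cost CDF dominates pointwise.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical problem parameters (nontrivial case: l0 > l + c).
l0, l, c, rho = 10.0, 6.0, 1.5, 0.3
alpha = (l0 - l - c) / (l0 - l)            # threshold from slide 11

def cost_cdf(x, F0, F1):
    """F_C(x) from slide 11: a four-piece step function."""
    if x < c + l:
        return 0.0
    if x < l0:
        return (1 - rho) * F0(alpha)
    if x < c + l0:
        return 1 - rho * F1(alpha)
    return 1.0

def make_sensor(lam):
    """Assumed Beta sensor: F0 for a false mine, F1 for a true mine."""
    F0 = lambda y: beta.cdf(y, 3.5 - lam, 3.5 + lam)
    F1 = lambda y: beta.cdf(y, 3.5 + lam, 3.5 - lam)
    return F0, F1

S1, S2 = make_sensor(2.0), make_sensor(0.5)   # sensor 1 should be at least as good

xs = np.linspace(0.0, c + l0 + 2.0, 400)
F_C1 = np.array([cost_cdf(x, *S1) for x in xs])
F_C2 = np.array([cost_cdf(x, *S2) for x in xs])
print("C(1) <=_D C(2):", bool(np.all(F_C1 >= F_C2)))   # expected: True
```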

13 A Real-World Application: Random Disambiguation Paths in the US Navy

14 Background
Coastal Battlefield Reconnaissance and Analysis (COBRA)
Allows naval expeditionary forces to conduct airborne, standoff reconnaissance and automatic detection of minefields in coastal areas
Allows the Marine Corps to successfully conduct quick Ship-to-Objective Maneuver in the face of mine threats without personnel casualties and equipment losses

15 Background
The COBRA system consists of three primary components: the Airborne Payload, the Tactical Control Software, and the Processing Station. Supporting elements shown on the slide include the Sensor Unit, Benefit/Cost Ratio Analysis, the Navigation Unit, and the Navigation Algorithm.

16 Problem Description
Overall problem: navigate a combat unit safely and swiftly through a coastal environment with mine threats to reach a preferred target location.
Features: decision under uncertainty; probabilistic prior information; disambiguation capability; dynamic learning.

17 Problem Decomposition
Terrain modeling: minefield model, mark information, graph generation.
Dynamic shortest path problem: search algorithm, replanning.

18 Minefield Model: Simulated Risk Centers and Disks
Simulate detections dᵢ, i = 1, 2, …, m, via a spatial point process. True detection: Xᵢ = 1; false detection: Xᵢ = 0. Each detection dᵢ is the center of a risk disk Dᵢ (figure: risk disks Dᵢ around the detections dᵢ between s and t). A disambiguation of dᵢ, at a cost cᵢ > 0, happens when the agent is right outside Dᵢ, about to enter Dᵢ, and dᵢ has not yet been disambiguated.
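The slide does not pin down the spatial point process; the sketch below (an assumption) simulates detections with a homogeneous Poisson process on a rectangle, assigns each a true/false indicator Xᵢ, and represents each risk disk Dᵢ by a fixed radius around dᵢ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical terrain [0, 100] x [0, 50]; intensity and disk radius are assumptions.
width, height = 100.0, 50.0
intensity = 0.004          # expected detections per unit area
radius = 4.0               # radius of each risk disk D_i
p_true = 0.5               # P(X_i = 1): detection is a true mine

m = rng.poisson(intensity * width * height)           # number of detections
centers = np.column_stack([rng.uniform(0, width, m),  # detection locations d_i
                           rng.uniform(0, height, m)])
X = (rng.random(m) < p_true).astype(int)              # true/false status of each d_i

def covering_disks(point):
    """Indices of risk disks D_i that cover a given point."""
    return np.flatnonzero(np.linalg.norm(centers - point, axis=1) <= radius)

print(f"simulated {m} detections, {X.sum()} true mines")
print("disks covering the start (0, 25):", covering_disks(np.array([0.0, 25.0])))
```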

19 Mark Information
False mine: Yᵢ | Xᵢ = 0 ~ F₀. True mine: Yᵢ | Xᵢ = 1 ~ F₁. The marks satisfy 0 < Yᵢ < 1.

20 Graph Generation
Marker of arc a = (u, v): Y⁺(a) = Y_{I_v \ I_u}, where I_u = {i : u is covered by Dᵢ} and I_v = {i : v is covered by Dᵢ}.
Knowledge of true-false status: Yᵢ⁺ = 0 if Xᵢ = 0 (disambiguated false); Yᵢ⁺ = 1 if Xᵢ = 1 (disambiguated true); Yᵢ⁺ = Yᵢ if dᵢ has not been disambiguated.
Marker of an intersection: Y_I = 1 − ∏_{i ∈ I} (1 − Yᵢ⁺), where I ⊆ {1, 2, …, m}.
Extended length function: l⁺(a) = l(a) if Y⁺(a) < 1; l⁺(a) = +∞ if Y⁺(a) = 1.
Extended disambiguation cost function: c⁺(a) = Σ_{i ∈ (I_v \ I_u) ∩ I_d} cᵢ if 0 < Y⁺(a) < 1; c⁺(a) = 0 otherwise, where I_d = {i : dᵢ has not been disambiguated}.
CR weight function: W_CR,Y(a) = l⁺(a) + c⁺(a)/(1 − Y⁺(a)).
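A sketch of these definitions in code (the data structures, a dict of markers and sets of disk indices, are assumptions made for illustration): given the disks covering u and v, the current markers Yᵢ⁺, and the set of not-yet-disambiguated disks, it computes Y⁺(a), l⁺(a), c⁺(a), and the CR weight W_CR,Y(a).

```python
import math

def cr_weight(I_u, I_v, I_d, Y_plus, c, length):
    """CR weight of arc a = (u, v) under the current knowledge.

    I_u, I_v : sets of disk indices covering u and v
    I_d      : set of disk indices not yet disambiguated
    Y_plus   : dict i -> current marker (0/1 if disambiguated, else Y_i)
    c        : dict i -> disambiguation cost c_i
    length   : l(a), the physical length of the arc
    """
    new = I_v - I_u                                        # newly entered disks
    Y_a = 1.0 - math.prod(1.0 - Y_plus[i] for i in new)    # marker of the arc
    if Y_a >= 1.0:
        return math.inf                                    # extended length is +inf
    c_a = sum(c[i] for i in new & I_d) if 0.0 < Y_a < 1.0 else 0.0
    return length + c_a / (1.0 - Y_a)

# Example: the arc newly enters disks 2 and 5; disk 5 was already disambiguated as false.
w = cr_weight(I_u={1}, I_v={1, 2, 5}, I_d={2, 3},
              Y_plus={1: 0.4, 2: 0.3, 5: 0.0}, c={2: 1.0, 3: 1.0, 5: 1.0},
              length=3.0)
print("W_CR,Y(a) =", round(w, 3))   # 3 + 1/(1 - 0.3) ~ 4.429
```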

21 Dynamic Shortest Path Problem
Under the current knowledge (Y⁺, l⁺, c⁺) of the terrain, find a shortest path relative to W_CR,Y from the agent's current location to the target location t, and let the agent follow the shortest path plan until it reaches t or encounters a nondeterministic arc. In the former case, the navigation process is successfully completed; in the latter case, the agent disambiguates the arc by disambiguating all the newly encountered risk disks. The disambiguation results update the knowledge (Y⁺, l⁺, c⁺), and a new shortest path from the agent's current location to t relative to the updated W_CR,Y is found for the agent to follow.

22 Search Algorithm: A*
The A* algorithm is one of the greatest achievements in Artificial Intelligence (AI).
It employs a best-first search strategy.
It finds a shortest path whenever one exists.
It uses heuristic information to reduce the search tree.
It can be derived from the primal-dual algorithm for general LP.
It has proved very efficient in practice.
(Figure: an s-t search with distance label g(v) and heuristic h(v) at a node v, a closed list, and an open list.)

23 The A* Algorithm
Graph/Network: a directed graph G = (V, A, W, δ, b), where
V is the set of nodes; A is the set of arcs; W: A → R is the weight function; s ∈ V is a specified starting node; t ∈ V is a specified target node.
δ > 0 is a constant such that δ ≤ W(a) < +∞ for any a ∈ A.
b > 0 is a constant integer such that |{v : (u, v) ∈ A or (v, u) ∈ A}| ≤ b for all u ∈ V.
There exists a heuristic function h: V → R such that h(v) ≥ 0 for all v ∈ V, h(t) = 0, and W(u, v) + h(v) ≥ h(u) for all (u, v) ∈ A (a consistent heuristic).
Denote by dist(u, v) the distance (length of the shortest path) from u to v.

24 The A* Algorithm
Notation: h: heuristic; O: open list; E: closed list; d: distance label; f: node selection key; pred: predecessor.
Steps (given G, s, t, and h):
Step 1. Set O = {s}, d(s) = 0, and E = ∅.
Step 2. If O = ∅ and t ∉ E, then stop (there is no s-t path); otherwise, continue.
Step 3. Find u = argmin_{v ∈ O} f(v) = d(v) + h(v). Set O = O \ {u} and E = E ∪ {u}. If t ∈ E, then stop (a shortest s-t path is found); otherwise, continue.
Step 4. For each node v ∈ V such that (u, v) ∈ A and v ∉ E: if v ∉ O, then set O = O ∪ {v}, d(v) = d(u) + W(u, v), and pred(v) = u; otherwise, if d(v) > d(u) + W(u, v), then set d(v) = d(u) + W(u, v) and pred(v) = u. Go to Step 2.
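A compact Python rendering of these steps (a sketch, not the talk's implementation; the adjacency-dict graph representation is an assumption): the open list is a heap keyed by f(v) = d(v) + h(v), the closed list is a set, and predecessors are kept for path recovery.

```python
import heapq

def a_star(arcs, s, t, h):
    """A* on a directed graph.

    arcs: dict u -> list of (v, W(u, v)) pairs
    h:    consistent heuristic with h(t) = 0 and W(u, v) + h(v) >= h(u)
    Returns (distance, path) or (inf, None) if no s-t path exists.
    """
    d = {s: 0.0}                        # distance labels
    pred = {}                           # predecessors
    open_heap = [(h(s), s)]             # open list keyed by f(v) = d(v) + h(v)
    closed = set()                      # closed list

    while open_heap:
        _, u = heapq.heappop(open_heap)
        if u in closed:
            continue                    # stale heap entry
        closed.add(u)
        if u == t:                      # target closed: a shortest s-t path is found
            path = [t]
            while path[-1] != s:
                path.append(pred[path[-1]])
            return d[t], path[::-1]
        for v, w in arcs.get(u, []):
            if v in closed:
                continue
            if v not in d or d[v] > d[u] + w:
                d[v] = d[u] + w
                pred[v] = u
                heapq.heappush(open_heap, (d[v] + h(v), v))
    return float("inf"), None           # open list exhausted: no s-t path

# Tiny example with the zero heuristic (which reduces A* to Dijkstra).
arcs = {"s": [("a", 2.0), ("b", 5.0)], "a": [("t", 4.0)], "b": [("t", 1.0)]}
print(a_star(arcs, "s", "t", h=lambda v: 0.0))
```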

25 Sensor and Sensor Ordering
Sensor notation: S = (F₀, F₁).
Valid sensor: F₀(y) ≥ F₁(y) for any 0 ≤ y ≤ 1.
Discerning sensor: F₀(0.5) > 0.5 > F₁(0.5).
Yᵢ ~ S means Yᵢ | Xᵢ = 0 ~ F₀ and Yᵢ | Xᵢ = 1 ~ F₁.
Beta sensor: F₀ = Beta(3.5 − λ, 3.5 + λ) and F₁ = Beta(3.5 + λ, 3.5 − λ), with 0 < λ < 3.5.
Sensor ordering: S⁽¹⁾ = (F₀⁽¹⁾, F₁⁽¹⁾) is said to be at least as good as S⁽²⁾ = (F₀⁽²⁾, F₁⁽²⁾) if F₀⁽¹⁾(y) ≥ F₀⁽²⁾(y) and F₁⁽¹⁾(y) ≤ F₁⁽²⁾(y) for any 0 ≤ y ≤ 1.
(Figure: Beta probability density functions f_beta(x; 1.5, 5.5), f_beta(x; 2.5, 4.5), f_beta(x; 4.5, 2.5), f_beta(x; 5.5, 1.5).)
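Under the Beta-family reading above (reconstructed here from the slide, so an assumption), a larger λ should give a sensor that is at least as good in this ordering, since F₀ = Beta(3.5 − λ, 3.5 + λ) shifts toward 0 and F₁ = Beta(3.5 + λ, 3.5 − λ) shifts toward 1 as λ grows. A quick numerical check with SciPy:

```python
import numpy as np
from scipy.stats import beta

def sensor(lam):
    """Assumed Beta sensor S = (F0, F1); valid for 0 < lam < 3.5."""
    return (lambda y: beta.cdf(y, 3.5 - lam, 3.5 + lam),   # F0: reading given a false mine
            lambda y: beta.cdf(y, 3.5 + lam, 3.5 - lam))   # F1: reading given a true mine

ys = np.linspace(0.0, 1.0, 501)
for lam_hi, lam_lo in [(0.5, 0.1), (2.0, 0.5), (3.0, 2.0)]:
    F0_hi, F1_hi = sensor(lam_hi)
    F0_lo, F1_lo = sensor(lam_lo)
    better = np.all(F0_hi(ys) >= F0_lo(ys)) and np.all(F1_hi(ys) <= F1_lo(ys))
    print(f"S(lambda={lam_hi}) at least as good as S(lambda={lam_lo}): {better}")
```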

26 Simulation
(Figure: a realization of a trajectory in a real terrain (left) and in one of its marked maps (right).)
Sensor parameter λ = 0.5. Total cost: ; traveling cost: ; disambiguation cost: 27. There are 12 disambiguations in total. Total simulation run time on a PC with a Pentium 4 CPU and 1 GB RAM: seconds.

27 Simulation
(Figure: a realization of another trajectory in the same real terrain (left) and in one of its marked maps (right).)
Sensor parameter λ = 3.0. Total cost: ; traveling cost: 168.2; disambiguation cost: . There are 3 disambiguations in total. Total simulation run time on a PC with a Pentium 4 CPU and 1 GB RAM: seconds.

28 Statistical Analysis: Conditional Experiments
(Figure: empirical CDFs of cost (left) and an error bar plot of average cost vs. sensor parameter λ (right), for λ = 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.49.)
Graphic statistical results of the data from the experiments conditioned on terrain T₁. For each i = 1, 2, …, 8, the sample size under λᵢ is 4. Kolmogorov-Smirnov tests for comparing sample distributions; t tests for comparing sample means.

29 Statistical Analysis: Unconditional Experiments
(Figure: empirical CDFs of cost (left) and an error bar plot of average cost vs. sensor parameter λ (right), for λ = 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.49.)
Graphic statistical results of the data from the unconditional experiments. For each i = 1, 2, …, 8, the sample size under λᵢ is 25. Kolmogorov-Smirnov tests for comparing sample distributions; t tests for comparing sample means.

30 COBRA Data
(Figures: the COBRA terrain and, after projection, the projected COBRA terrain.)

31 COBRA Data
(Figure: trajectory simulation from s to t under the original markers, with C_d = 5 and λ = 0.)
Trajectory simulation under the original markers (λ = 0); the total cost is . There is no disambiguation. Total simulation run time on a PC with a Pentium 4 CPU and 1 GB RAM is seconds.

32 COBRA Data
(Figure: trajectory simulation from s to t under the improved markers, with C_d = 5 and λ = 0.4.)
Trajectory simulation under the improved markers (λ = 0.4); the total cost is , with one disambiguation. Total simulation run time on a PC with a Pentium 4 CPU and 1 GB RAM is seconds.

33 COBRA Data
Marker improvement scheme: Yᵢ ← λ + (1 − λ)Yᵢ if Xᵢ = 1; Yᵢ ← (1 − λ)Yᵢ if Xᵢ = 0.
(Figure: plot of total cost vs. improvement parameter λ for COBRA runs.)
The mesh of values of λ is 0.1i, i = 1, 2, …, 10. Starting location: s = (3, 25); target location: t = (3, 6); the disambiguation cost per disk is C_d = 5.
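A one-function sketch of the improvement scheme as reconstructed above (the marker values and true/false labels in the example are hypothetical):

```python
import numpy as np

def improve_markers(Y, X, lam):
    """Push true-mine markers toward 1 and false-mine markers toward 0.

    Y:   original markers in (0, 1);  X: true/false status (1 = true mine)
    lam: improvement parameter; lam = 0 leaves the markers unchanged.
    """
    return np.where(X == 1, lam + (1 - lam) * Y, (1 - lam) * Y)

Y = np.array([0.55, 0.40, 0.70])
X = np.array([1, 0, 1])
print(improve_markers(Y, X, lam=0.4))   # 0.55 -> 0.73 (true), 0.40 -> 0.24 (false), 0.70 -> 0.82 (true)
```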

34 Deterministic Shortest Path
(Figures: average cost vs. sensor parameter λ, showing the average costs of nondeterministic traversals, the average length of deterministic shortest paths, and the critical sensor parameter; and a histogram with a 3-parameter lognormal fit, with variables λ = 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.49 and c_d = inf.)
Deterministic shortest paths vs. nondeterministic traversals; unconditional experiments. The critical parameter of the Beta sensor is λ* = .

35 Thanks! Questions?
