Resistors, Markov Chains & Dynamic Path Planning

John S. Zelek
School of Engineering, Univ. of Guelph, Guelph, ON, N1G 2W1, Canada.

Abstract

Dynamic planning involves continuously updating a map by sensing changes in the environment and planning appropriate actions, with all tasks sharing common computational resources. We use harmonic functions for dynamic planning. Analogous representations of harmonic functions as Markov chains and resistor networks are used to develop the notions of escape probability and energy dissipation. These measures indicate convergence (an event that permits resources to be devoted to non-planning tasks) more robustly than monitoring maximum or average field changes between iterations. The convergence of the harmonic function is related quadratically to the number of grid elements. An example of an irregular sampling strategy, the quad tree, is developed for harmonic functions; it is complete yet imprecise. Quad trees are not a sufficient sampling strategy for addressing the exponential growth of multiple dimensions, and therefore current investigations include other sampling strategies and dimensional parallelization.

1 Introduction

Sensor-based discovery path planning is the guidance of an agent, a robot, without a complete a priori map, by discovering and negotiating the environment so as to reach a goal location while avoiding all encountered obstacles. The dynamic path planning problem extends the basic navigation planning problem [5].

Typically, path planning problems have been posed as an AI search problem []. One particular path planning technique we have used is based on solving Laplace's equation using iterative finite-difference equations []. This technique is equivalent to treating the problem as random walks on finite networks, which are regarded as finite state Markov chains [3]. There is also an analogous connection to resistor circuits and their accompanying currents and voltages. This connection allows us to make use of electrical network theory and its tools in analyzing the path planning problem. In particular, we find that escape probability and circuit energy dissipation measures give a better indication of convergence of the field than observing changes between iterations (either average or maximum change). Other methods [] have typically looked at changes (average or maximum) between iteration field values. Theoretically, this is an adequate approach; practically, however, it is flawed. The discrete components of a harmonic function experience small changes between neighbouring nodes, thus stretching the limits of the resolution attainable on a computer (i.e., there is even a constraint of 7:1 on the ratio of width:length of open space representable on 32-bit machines []). In most applications, notification of convergence is essential for diverting computational resources from the iteration kernel to other tasks of the autonomous agent, such as sensing or reasoning. In addition, most uses of harmonic functions for planning have assumed regular grid sampling, which may be inefficient depending on the configuration of the environmental map. Computation time increases in proportion to the number of grid elements, so reducing the number of grid elements is beneficial. An irregular sampling strategy is therefore desirable (a quad tree representation is shown).

2 Harmonic Function Path Planning

The computation of a potential function from which the path is generated is performed over an occupancy grid representation of a map. It is executed as a separate process from the path executioner. The path executioner requests trajectories, which are computed by performing steepest gradient descent on the potential function. Linear interpolation is used to approximate potential function values when the position of the robot is between mesh points. It is also assumed that space is void of obstacles when its actual composition is unknown.
This computation has the desirable property that map features can be added or subtracted dynamically (i.e., as they are sensed), along with corrections to the robot's position. In addition, these events are independent of the computation and the execution of the path. As a result, localization of the robot can be performed independently of path planning and execution. These properties are available because the potential function is a harmonic function.
A harmonic function on a domain \Omega \subset R^n is a function which satisfies Laplace's equation:

\nabla^2 \phi = \sum_{i=1}^{n} \frac{\partial^2 \phi}{\partial x_i^2} = 0    (1)

The value of \phi is given on a closed domain \Omega in the configuration space C. For mobile robot navigation, C corresponds to a planar set of coordinates (x, y). A harmonic function satisfies the Maximum Principle and the Uniqueness Principle [3]. The Maximum Principle states that a harmonic function f(x, y) defined on \Omega takes on its maximum value M and its minimum value m on the boundary, and it guarantees that there are no local minima in the harmonic function. The Uniqueness Principle stipulates that if f(x, y) and g(x, y) are harmonic functions on \Omega such that f(x, y) = g(x, y) for all boundary points, then f(x, y) = g(x, y) for all x, y. Iterative applications of a finite difference operator are used to compute the harmonic function, because it is sufficient to consider the exact solution at only a finite number of mesh points. This is in contrast to finite element methods, which compute an approximation of an exact solution by continuous piece-wise polynomial functions. The obstacles and the grid boundary form the boundary conditions. The boundaries are fixed to a high potential value while the goal location(s) is fixed to a low potential. Points that are neither boundary nor goal points (i.e., the free space) are allowed to fluctuate and produce the harmonic function when computed using an iteration kernel, which is formed by taking advantage of the harmonic function's inherent averaging property: a point is the average of its neighbouring points. A five-point iteration kernel for solving Laplace's equation is used:

U_{i,j} = \frac{1}{4} (U_{i+1,j} + U_{i-1,j} + U_{i,j+1} + U_{i,j-1})    (2)

The computation of the harmonic function can be formulated with two different types of boundary conditions. The Dirichlet boundary condition is where u is given at each point of the boundary.
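The averaging property translates directly into code. The sketch below relaxes a small occupancy grid with the five-point kernel of Equation (2) under Dirichlet boundary conditions; the grid size, potential values and goal placement are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

HIGH, LOW = 1.0, 0.0   # assumed boundary (obstacle) and goal potentials

def harmonic_field(obstacles, goal, n_iters=5000):
    """Relax free-space cells toward the average of their four neighbours
    while keeping obstacle and goal cells clamped (Dirichlet conditions)."""
    u = np.full(obstacles.shape, HIGH)
    fixed = obstacles | goal          # boundary and goal cells stay clamped
    u[goal] = LOW
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, u, avg)   # update only the free-space cells
    return u

# Toy 16x16 map: the outer wall is an obstacle, the goal sits at the centre.
obst = np.zeros((16, 16), bool)
obst[0, :] = obst[-1, :] = obst[:, 0] = obst[:, -1] = True
goal = np.zeros((16, 16), bool)
goal[8, 8] = True
u = harmonic_field(obst, goal)
```

By the Maximum Principle, every free-space value of `u` ends up strictly between the goal and boundary potentials, so steepest descent from any start cell cannot get stuck in a local minimum.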
This inherently makes all applicable boundary points into sources (i.e., in terms of sources and sinks for modeling liquid flow). The Neumann boundary condition is where \partial u / \partial n, the normal component of the gradient of u, is given at each point of the boundary. In order to have flow, there has to be a source and a sink. Thus, the boundary of the mesh is modeled as a source, and the boundary of the goal model is modeled as a sink. The boundaries of the obstacles are modeled according to the type of boundary condition chosen: Dirichlet or Neumann. For our applications, Dirichlet boundary conditions are used because of their inherent property of minimizing the hitting probability. Hitting probability minimization is ideal when uncertainty is present in obstacle and robot location mapping. Let U_{0,0} define the origin of a Cartesian coordinate space. The space does not necessarily have to be confined to two dimensions. In addition, rather than Cartesian space, we could also use the configuration space C, which is a space expressed in terms of the degrees of freedom of the robot. If equal sampling intervals are assumed, i.e., U_{i+\Delta i, j+\Delta j} = U_{i+1,j+1}, then the sampling interval when \Delta i = \Delta j is determined by the size of the robot and the spacing between obstacles. The robot is modelled as a point, and obstacles are padded accordingly. Nonsymmetrical shapes (e.g., rectangular) can be addressed using the maximum dimension; however, in tight environmental situations, the minimal inscribing circle can be used and orientation constraints can be introduced into the grid.

3 Other Frameworks

3.1 Markov Chains

Equivalent to iteration kernels, the solution of the harmonic function can be found using the method of Markov chains, using probability functions for a random walk on a Markov chain [3]. A Markov process is a random process whose transition probabilities at the current time do not depend on prior transitions.
A finite Markov chain is a special type of chance process that moves around the set of states S = {s_1, s_2, ..., s_r} in a Markovian fashion. When the process is in state s_i, it moves with probability P_{ij} to state s_j. The transition probabilities P_{i,j} are represented by the transition matrix. Each cell in the occupancy grid representation is a potential state for the transition matrix. In an equally spaced grid where each grid element is 4-connected, the probability of transition for an interior node is 1/4. If P^n is the matrix P raised to the n-th power, the entries P^n_{i,j} represent the probability that the chain, started in state s_i, will be in state s_j after n steps. A Markov chain is a regular chain if some power of the transition matrix has no zeros. States that are referred to as traps (i.e., absorbing states) are typically the goal states: the cells that were assigned a low potential.

3.2 Resistor Networks

Yet another method for solving a harmonic function, as opposed to an iteration kernel or Markov chains, is to analyze the grid network as a resistor network [3] and to draw this interpretation into the language of a random walker on a collection of grid points. This is relevant when we idealize our path planner as an optimal random walker. The finite collection of points (i.e., vertices, nodes) can be viewed as a graph structure with point pairs connected by edges (also called branches). The graph structure we have been referring to is regular and 4-connected, meaning that all interior points of the graph are connected to four equally-spaced neighbours (to the right, left, up or down). We can also
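As a deliberately tiny illustration of these definitions, consider a one-dimensional chain of five states whose two end states are absorbing traps. The chain length and the interior step probability of 1/2 are assumptions for the sketch; an interior node of the planner's 4-connected grid would instead move to each neighbour with probability 1/4.

```python
import numpy as np

r = 5                                  # states s_0 .. s_4
P = np.zeros((r, r))
P[0, 0] = P[-1, -1] = 1.0              # absorbing "trap" states at both ends
for i in range(1, r - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5    # interior: equal transition probability

# P^n gives the n-step transition probabilities; for large n, row i of P^n
# converges to the absorption probabilities of a walk started in state s_i.
Pn = np.linalg.matrix_power(P, 200)
```

After 200 steps essentially all probability mass has been absorbed: the middle state reaches either trap with probability 1/2, and state s_1 is absorbed at s_0 with probability 3/4, which is exactly the harmonic (gambler's ruin) solution on this chain.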
talk about 8-connected networks, but we will confine the discussion to the former. Let the nodes in the graph encode voltages and the edges encode resistor values, which directly correspond to spatial distances. In a 4-connected network, the resistor values are always equal and set to unity. Let the voltage of the circuit be applied across the obstacles (including the boundary) and the goal locations. When a unit voltage [3] is applied between h (i.e., obstacles) and l (i.e., goal(s)), making v_h = 1 and v_l = 0 (i.e., Dirichlet boundary conditions), the voltage v_x at any point x represents the probability that a walker starting from x will return to h before reaching l. The voltage values can also be referred to as a hitting probability. The current across a connected node directly corresponds to the gradient in that particular direction. The current I_{ij} is proportional to the expected net number of movements along the edge from i to j, where movements from j back to i are counted as negative. Steepest gradient descent represents choosing the largest current value moving outwards from a particular node. The gradient of the potential in a given direction thus represents a current, and following it also corresponds to minimizing the hitting probability, which is what the voltage encodes. The resistance R_{i,j} of an edge i, j leads us to also define the conductance of an edge i, j as C_{i,j} = 1/R_{i,j}. A random walk on G is a Markov chain with a transition matrix P given by P_{xy} = C_{xy}/C_x, where C_x = \sum_y C_{xy}. A Markov chain encoded as a graph that is connected (the walker can go between any two states) is called an ergodic Markov chain. We will only consider these types of Markov chains. The high and low potentials referred to earlier can also be cast as an applied voltage between two or more points.
By Ohm's law, the voltage across a resistor is related to the current through it, and applying Kirchhoff's Current Law (which requires that the total current flowing into any point, other than the points across which the potential is applied, be 0) results in:

\sum_j I_{i,j} = \sum_j (V_i - V_j) C_{ij} = 0    (3)

This is in essence the harmonic property, which holds for all points (i.e., nodes) that are not fixed to a high or low potential. By imposing a voltage between obstacles V_h and goals V_l, a current I_h = \sum_x I_{hx} will flow into the circuit from the outside source. The effective resistance R_{eff} between the h's and l's is defined as R_{eff} = V_h / I_h. The reciprocal quantity C_{eff} = 1/R_{eff} = I_h / V_h is called the effective conductance. The effective conductance is used to calculate the escape probability, which is given as p_{escape} = C_{eff} / C_h, where C_h = \sum_x 1/R_{hx}. Thus the escape probability can be expressed as follows:

p_{escape} = \frac{I_h}{V_h} \cdot \frac{1}{\sum_x 1/R_{hx}} = \frac{\sum_x I_{hx}}{V_h \sum_x 1/R_{hx}}    (4)

where the numerator is just the summation of the voltage drops to the neighbouring nodes (with unit resistances, I_{hx} = V_h - V_x):

p_{escape} = \frac{\sum_x (V_h - V_x)}{V_h \sum_x 1/R_{hx}}    (5)

Applicable to escape probabilities are Rayleigh's Monotonicity Law and its variants. Rayleigh's Monotonicity Law states that if the resistances of a circuit are increased, the effective resistance R_{eff} between any two points can only increase; if they are decreased, it can only decrease. Similar rules that result from applying the Monotonicity Law are as follows. Shorting Law: shorting certain sets of nodes together can only decrease the effective resistance of the network between two given nodes. Cutting Law: cutting certain branches can only increase the effective resistance between two given nodes. Shorting therefore decreases the effective resistance and increases the escape probability, while cutting increases the effective resistance and decreases the escape probability. This is applicable to dynamic path planning when new static obstacles are discovered, and is more immediately applicable to dynamic obstacle detection and updating.
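A small worked example may help. The sketch below evaluates the escape probability of Equation (4) on a hypothetical four-node network of unit resistors (the topology and the choice of h and l are assumptions); the interior voltages are obtained by solving the Kirchhoff equations directly rather than by relaxation.

```python
import numpy as np

# Unit resistors on a 4-node graph; h = node 0 (source), l = node 3 (sink).
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n, h, l = 4, 0, 3
C = np.zeros((n, n))
for a, b in edges:
    C[a, b] = C[b, a] = 1.0            # conductance C_ab = 1/R_ab = 1

# Solve the harmonic (Kirchhoff) equations for the interior voltages,
# with v_h = 1 and v_l = 0 (Dirichlet boundary conditions).
L = np.diag(C.sum(1)) - C              # graph Laplacian
interior = [1, 2]
v = np.zeros(n)
v[h] = 1.0
A = L[np.ix_(interior, interior)]
b = -L[np.ix_(interior, [h])].ravel() * v[h]   # v_l = 0 contributes nothing
v[interior] = np.linalg.solve(A, b)

I_h = sum(C[h, x] * (v[h] - v[x]) for x in range(n))  # current into network
C_eff = I_h / v[h]                     # effective conductance
p_escape = C_eff / C.sum(1)[h]         # Equation (4): C_eff / C_h
```

For this graph the interior voltages come out to v_1 = 0.4 and v_2 = 0.2, giving p_escape = 0.6: a walker leaving h reaches the goal before returning to h with probability 0.6.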
The energy dissipated through a resistor is given by:

E_{i,j} = I_{i,j}^2 R_{i,j}    (6)

Therefore, the total energy dissipation in a circuit is:

E = \sum_{i,j} I_{i,j} (V_i - V_j) = \sum_{i,j} I_{i,j}^2 R_{i,j}    (7)

and since

v_h i_h = \sum_{i,j} i_{i,j}^2 R_{i,j}    (8)

which is equivalent to the last formulation, and recalling that R_{eff} = v_h / i_h, therefore:

E = i_h^2 R_{eff} = \sum_{i,j} i_{i,j}^2 R_{i,j}    (9)

The currents in the circuit minimize the energy dissipation. We hypothesize that the escape probability gives us a measure of the environmental complexity. Energy dissipation, in turn, gives a measure of convergence of the solution (as opposed to monitoring error levels between iterations). Energy formulations have been used in the past, but only in the context of including the gradient as an energy term alongside forces for dynamic arm trajectory planning, and not as a convergence measure [].
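The use of dissipated energy as a convergence signal can be sketched as follows: relax a small field and stop once the total I^2 R sum over the edges stops changing. The grid size, clamp values and tolerance below are illustrative assumptions.

```python
import numpy as np

n = 17
u = np.ones((n, n))          # boundary held at a high potential
u[n // 2, n // 2] = 0.0      # goal held at a low potential

def energy(u):
    """Total dissipation sum I^2 R over grid edges; with unit resistors this
    is the sum of squared voltage differences between 4-connected neighbours."""
    return np.sum(np.diff(u, axis=0) ** 2) + np.sum(np.diff(u, axis=1) ** 2)

prev = energy(u)
for k in range(2000):
    avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
    avg[0, :] = avg[-1, :] = avg[:, 0] = avg[:, -1] = 1.0   # clamp boundary
    avg[n // 2, n // 2] = 0.0                               # clamp goal
    u = avg
    e = energy(u)
    if abs(e - prev) < 1e-12:    # energy has settled: field has converged
        break
    prev = e
```

Because the energy sum is dominated by the large currents near the sources, it settles smoothly, whereas the maximum per-cell change eventually fluctuates at the resolution of the machine's floating-point storage.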
4 Spatial Representations

The circuit analogy lends itself to a discrete implementation of reality. Sampling the environment at regular intervals is the simplest approach. The irregular sampling method is based on a quad-tree formulation and is intended to minimize computation for large homogeneous regions.

4.1 Regular Sampling

The environment is sampled at regular intervals and encoded as an occupancy grid []. The interior of obstacles as well as the boundary of the working area is referred to as the boundary condition. All other nodes are where the path is executed. The discrete implementation lends itself to describing discrete spatial representations, as well as continuous representations with interpolation. A uniformly sampled grid is natural, but when implemented on a computer a non-uniform grid may be more efficient. We have experimented with setting the goal to a low potential and setting the boundary of the working area, as well as the cells occupied by obstacles, to a high potential (see Figure 1).

Figure 1: Generated paths. The figure shows the equal-potential contours generated (solid lines) and the various paths computed (dashed lines) from varying starting positions using steepest gradient descent for the above configuration.

4.1.1 Trajectory

At any point P(x, y) on the potential function \phi, there is a vector pointing in the direction in which \phi undergoes the greatest rate of decrease; this is referred to as the steepest descent gradient. In addition to this direction, there is actually a whole family of directions that are of descending value. Let \Delta s be a vector that has magnitude \Delta s and points from P to P'. Since \Delta s = i \Delta x + j \Delta y and the gradient of \phi is \nabla \phi = i \partial\phi/\partial x + j \partial\phi/\partial y, it follows to first order that \Delta\phi = (\Delta s) \cdot \nabla\phi. Let \Delta s = \hat{u} \Delta s, so that \Delta\phi = (\hat{u} \cdot \nabla\phi) \Delta s + ... and \Delta\phi / \Delta s = \hat{u} \cdot \nabla\phi. Taking the limit of this equation gives \partial\phi/\partial s = \hat{u} \cdot \nabla\phi. The quantity \partial\phi/\partial s is called the directional derivative of \phi.
The direction \hat{u} that maximizes the magnitude of the descent is the direction of the steepest descent gradient, s_m, and is the trajectory we have used when the target approach orientation was not specified. However, there is a family of descending directions bounded by the equi-potential contour at P. Let the two directions s_1 and s_2 define the directions of the equi-potential contour emanating from P. The minimal hitting probability path is defined by the steepest descending gradient, but other paths can be systematically chosen between the equi-potential contours. In addition, rather than confining plausible directional changes to one of the eight possible neighbours of a node, quadratic interpolation is used to determine an approximation to the continuous direction. Let D_w be the direction with the greatest negative gradient of the eight directions possible when quantizing the directions into the neighbouring cells, i.e., 0 to 315 degrees at 45 degree intervals. Let D_{w-1} be the counter-clockwise neighbour and D_{w+1} be the clockwise neighbour, and let G_{w-1}, G_w and G_{w+1} be the associated gradient values. The quadratic function approximated has the form D(G) = aG^2 + bG + c. To find the maximum of D(G), the derivative is taken with respect to G and set equal to 0. This results in D_max and G_max, where D_max corresponds to the maximum interpolated gradient.

4.2 Irregular Sampling

It was found that harmonic function convergence time is quadratically proportional to the number of grid elements []. The time taken for each iteration is linearly proportional to the number of elements, and the number of iterations required for convergence is also linearly proportional to the number of grid elements. Thus, reducing the number of grid elements will improve performance. Pyramids and quad-trees are popular data structures in both graphics and image processing [7] and are best used when the dimensions of the matrix can be recursively and evenly subdivided until one grid point represents the entire grid cell.
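One common way to realize this interpolation, sketched here under the assumption of equal 45-degree spacing, is to fit the parabola through the three (direction, gradient) samples and take its vertex; the function name and the sample gradients below are illustrative, not the paper's data.

```python
import numpy as np

def refined_direction(angles_deg, grads, w):
    """Parabolic refinement of the best of the eight quantized directions.
    angles_deg: the eight directions 0..315 at 45-degree steps;
    grads: directional gradient magnitude measured in each direction;
    w: index of the winning (steepest) direction."""
    g_m, g_0, g_p = grads[(w - 1) % 8], grads[w], grads[(w + 1) % 8]
    # Vertex of the parabola through three equally spaced samples.
    denom = g_m - 2.0 * g_0 + g_p
    offset = 0.0 if denom == 0 else 0.5 * (g_m - g_p) / denom
    return angles_deg[w] + offset * 45.0   # interpolated direction, degrees

angles = np.arange(0, 360, 45)
grads = np.array([0.1, 0.3, 0.9, 0.8, 0.2, 0.1, 0.0, 0.05])
w = int(np.argmax(grads))          # steepest quantized direction (90 degrees)
d = refined_direction(angles, grads, w)
```

Here the clockwise neighbour (135 degrees) carries more gradient than the counter-clockwise one, so the refined direction is pulled past 90 degrees toward 135, landing at roughly 106 degrees instead of snapping to a grid direction.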
Approximately 33% more nodes are required to represent the image, but usually the algorithm will be much faster, especially if there are large open spaces. Quad-trees have also been previously used for path planning [] with the A* algorithm. For any irregular grid sampling (one possible configuration being a quad tree representation), the iteration kernel can be rewritten as shown in Equation (10), where the U_n are the m neighbouring points to U_{i,j}, each being a distance D_n away:

U_{i,j} \sum_{n=1}^{m} \frac{1}{D_n} = \sum_{n=1}^{m} \frac{U_n}{D_n}    (10)
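A minimal sketch of this weighted kernel follows; the neighbour values and distances are illustrative assumptions for a single quad-tree node with neighbours at mixed resolutions.

```python
def irregular_kernel(neighbours):
    """Equation (10) solved for U_ij: the distance-weighted average of the
    m neighbours. neighbours is a list of (U_n, D_n) pairs, where D_n is
    the distance from neighbour n to the node being updated."""
    denom = sum(1.0 / d for _, d in neighbours)
    return sum(u / d for u, d in neighbours) / denom

# A coarse cell with four neighbours, two of them twice as far away:
u_ij = irregular_kernel([(0.8, 1.0), (0.6, 2.0), (0.9, 1.0), (0.4, 2.0)])
```

When all distances are equal the kernel reduces to the plain five-point average of Equation (2), so the regular grid is just the special case of a uniform quad tree.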
Typically, the grid point corresponds to the center of the grid cell it represents. Trajectories are generated by linearly interpolating to arrive at a dense local neighbourhood (at the finest sampling level in the quad tree).

5 Results

5.1 Escape Probability and Energy

Figure 2 demonstrates that either the escape probability or the dissipated energy is a more reliable indicator of convergence than either the maximum or the average change of voltages between iterations. Note that the rate of logarithmic change in the top two plots in Figure 2 converges to zero, indicating convergence, while the bottom two plots in Figure 2 do not convey this. As indicated in the algorithm used for dynamic path planning, the iteration kernel competes for resources with other tasks such as mapping and robot localization. The reason why there is no fluctuation in the escape probability or energy, as compared with the other two measures, is that their computations rely on current values from the sources, which are always larger than the current values near the goal. Current values near the goal eventually compete with the resolution of the computer's storage units.

Figure 2: Escape Probability and Dissipated Energy. The top left diagram shows the escape probability as the circuit shown in Figure 1 converges to a steady state value. The top right figure shows the dissipated energy. The bottom left figure shows how the maximum change between iteration values (of voltage) evolves, while the bottom right figure shows the average change of voltage value between iterations. Convergence is evident in the derivatives of the top two plots long before it is shown in the bottom two plots.

Figure 4 shows how the escape probability (and energy) change as obstacles are added to the environment as shown in Figure 3.
Figure 3: Dynamic progression of discovery. The collection of figures shows the potential function converged after obstacles are added. The progression should be followed from left to right and from top to bottom. See Figure 4 for the corresponding escape and energy measures.

Note that the escape probability originally increases when the first obstacle is added. This is because, in an environment with no obstacles, there is a high probability of the robot escaping. As obstacles are added, the escape probability decreases while the energy increases, and thereafter neither changes much. When an area of the environment is blocked from accessing the goal, as shown in the last figure of the sequence in Figure 3, there is a slight increase in escape probability but minimal change in dissipated energy.

Figure 4: Escape Probability and Dissipated Energy are shown in the top left (escape) and top right (energy) figures, while the maximum (bottom left) and average (bottom right) voltage changes are also shown. These plots correspond to the changes between iterations of the sequence shown in Figure 3. The scale is such that the identification of convergence from the escape and energy measures is not as evident as it is in Figure 2. It is still clear, however, that convergence is indicated by the rate of change of the escape and energy curves.

5.2 Quad Tree

Computation time for a regular grid is quadratically proportional to the number of grid points. Therefore, if a reduction of n times fewer grid points is achieved by the quad-tree representation, then n^2 times less time is required for convergence. Thirty-six times fewer operations were executed to obtain convergence for the quad map, measured as the maximum change in grid element values on successive iterations falling below a fixed percentage threshold (see the graph in Figure 5b).

Figure 5: Irregular Sampling (quad tree): (a) Equi-potential contours of the harmonic function of the quad map. The solid path was computed with a regular grid while the dotted path used a quad representation. (b) Computations for quad-tree vs. regular grid: the graph illustrates the maximum difference between grid elements on successive iterations plotted against the accumulated number of flops (floating point operations) executed for the quad map. Each plotted point on the graph corresponds to successive iteration increments. All computations were floating point operations and the test was executed using Matlab. Actual performance speeds could be increased if some of the floating point operations were converted to integer operations.

6 Discussions

This paper has discussed two innovations for dynamic robot path planning using harmonic functions: (1) the use of escape probability (or energy dissipation) for indicating circuit convergence (and indirectly for determining blockages); and (2) the use of irregular sampling (i.e., a quad tree) for reducing the number of elements processed and thus achieving a faster convergence response (especially for sparsely populated environments) at the expense of slightly less than ideal paths. We are interested in extending these approaches to dimensions greater than two and incorporating non-holonomic constraints []. Irregular sampling (e.g., quad tree) will not be sufficient for dimensional extension, and we are currently exploring other sampling strategies or dimensional parallelization. Irregular sampling reduces the precision of the solution, but the solution is still complete. This is in contrast to the probabilistic and random techniques [9].

7 Acknowledgments

The authors express thanks for funding from NSERC (Natural Sciences and Engineering Research Council) and MMO (Materials and Manufacturing Ontario).

References

[1] A. Stentz, "The focussed D* algorithm for real-time replanning," in Proceedings of the International Joint Conference on Artificial Intelligence, (Montreal, PQ), Aug. 1995.
[2] C. I. Connolly and R. A. Grupen, "Nonholonomic path planning using harmonic functions," Tech. Rep. UM-CS-1994-50, University of Massachusetts, Amherst, MA, June 1994.
[3] P. G. Doyle and J. L. Snell, Random Walks and Electric Networks. The Mathematical Association of America, 1984.
[4] J. S. Zelek, SPOTT: A Real-time Distributed and Scalable Architecture for Autonomous Mobile Robot Control. PhD thesis, McGill University, Dept. of Electrical Engineering, 1996.
[5] J.-C. Latombe, Robot Motion Planning. Kluwer Academic Publishers, 1991.
[6] A. Elfes, "Sonar-based real-world mapping and navigation," IEEE Journal of Robotics and Automation, vol. 3, pp. 249-265, 1987.
[7] T. Pavlidis, Algorithms for Graphics and Image Processing. Computer Science Press, 1982.
[8] D. Z. Chen, R. J. Szczerba, and J. J. Uhran, "A framed-quadtree approach for determining Euclidean shortest paths in a 2-D environment," IEEE Transactions on Robotics and Automation, vol. 13, October 1997.
[9] L. Kavraki, P. Svestka, J. Latombe, and M. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566-580, 1996.
More information2. Transience and Recurrence
Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times
More informationPhysics-Aware Informative Coverage Planning for Autonomous Vehicles
Physics-Aware Informative Coverage Planning for Autonomous Vehicles Michael J. Kuhlman 1, Student Member, IEEE, Petr Švec2, Member, IEEE, Krishnanand N. Kaipa 2, Member, IEEE, Donald Sofge 3, Member, IEEE,
More informationLearning Tetris. 1 Tetris. February 3, 2009
Learning Tetris Matt Zucker Andrew Maas February 3, 2009 1 Tetris The Tetris game has been used as a benchmark for Machine Learning tasks because its large state space (over 2 200 cell configurations are
More informationMarkov Models. CS 188: Artificial Intelligence Fall Example. Mini-Forward Algorithm. Stationary Distributions.
CS 88: Artificial Intelligence Fall 27 Lecture 2: HMMs /6/27 Markov Models A Markov model is a chain-structured BN Each node is identically distributed (stationarity) Value of X at a given time is called
More informationPath Integral Stochastic Optimal Control for Reinforcement Learning
Preprint August 3, 204 The st Multidisciplinary Conference on Reinforcement Learning and Decision Making RLDM203 Path Integral Stochastic Optimal Control for Reinforcement Learning Farbod Farshidian Institute
More informationThe Markov Decision Process (MDP) model
Decision Making in Robots and Autonomous Agents The Markov Decision Process (MDP) model Subramanian Ramamoorthy School of Informatics 25 January, 2013 In the MAB Model We were in a single casino and the
More informationA PROVABLY CONVERGENT DYNAMIC WINDOW APPROACH TO OBSTACLE AVOIDANCE
Submitted to the IFAC (b 02), September 2001 A PROVABLY CONVERGENT DYNAMIC WINDOW APPROACH TO OBSTACLE AVOIDANCE Petter Ögren,1 Naomi E. Leonard,2 Division of Optimization and Systems Theory, Royal Institute
More informationLecture XI. Approximating the Invariant Distribution
Lecture XI Approximating the Invariant Distribution Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Invariant Distribution p. 1 /24 SS Equilibrium in the Aiyagari model G.
More informationTentamen TDDC17 Artificial Intelligence 20 August 2012 kl
Linköpings Universitet Institutionen för Datavetenskap Patrick Doherty Tentamen TDDC17 Artificial Intelligence 20 August 2012 kl. 08-12 Points: The exam consists of exercises worth 32 points. To pass the
More informationMath 216 Final Exam 24 April, 2017
Math 216 Final Exam 24 April, 2017 This sample exam is provided to serve as one component of your studying for this exam in this course. Please note that it is not guaranteed to cover the material that
More informationParallelism in Structured Newton Computations
Parallelism in Structured Newton Computations Thomas F Coleman and Wei u Department of Combinatorics and Optimization University of Waterloo Waterloo, Ontario, Canada N2L 3G1 E-mail: tfcoleman@uwaterlooca
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear
More informationElectrical Formulation of the Type Problem: To determine p (r)
Recurrence vs Transience in Dimensions 2 and 3 Lin Zhao Department of Mathematics Dartmouth College March 20 In 92 George Polya investigated random walks on lattices. If upon reaching any vertex of the
More informationVlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems
1 Vlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Uncertainty representation Localization Chapter 5 (textbook) What is the course about?
More informationIEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 5, MAY
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 5, MAY 1998 631 Centralized and Decentralized Asynchronous Optimization of Stochastic Discrete-Event Systems Felisa J. Vázquez-Abad, Christos G. Cassandras,
More informationGross Motion Planning
Gross Motion Planning...given a moving object, A, initially in an unoccupied region of freespace, s, a set of stationary objects, B i, at known locations, and a legal goal position, g, find a sequence
More informationNumerical method for approximating the solution of an IVP. Euler Algorithm (the simplest approximation method)
Section 2.7 Euler s Method (Computer Approximation) Key Terms/ Ideas: Numerical method for approximating the solution of an IVP Linear Approximation; Tangent Line Euler Algorithm (the simplest approximation
More informationPlanning by Probabilistic Inference
Planning by Probabilistic Inference Hagai Attias Microsoft Research 1 Microsoft Way Redmond, WA 98052 Abstract This paper presents and demonstrates a new approach to the problem of planning under uncertainty.
More informationStochastic Realization of Binary Exchangeable Processes
Stochastic Realization of Binary Exchangeable Processes Lorenzo Finesso and Cecilia Prosdocimi Abstract A discrete time stochastic process is called exchangeable if its n-dimensional distributions are,
More informationSuccinct Data Structures for Approximating Convex Functions with Applications
Succinct Data Structures for Approximating Convex Functions with Applications Prosenjit Bose, 1 Luc Devroye and Pat Morin 1 1 School of Computer Science, Carleton University, Ottawa, Canada, K1S 5B6, {jit,morin}@cs.carleton.ca
More informationPlanning With Information States: A Survey Term Project for cs397sml Spring 2002
Planning With Information States: A Survey Term Project for cs397sml Spring 2002 Jason O Kane jokane@uiuc.edu April 18, 2003 1 Introduction Classical planning generally depends on the assumption that the
More informationInput layer. Weight matrix [ ] Output layer
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.034 Artificial Intelligence, Fall 2003 Recitation 10, November 4 th & 5 th 2003 Learning by perceptrons
More informationAn Adaptive Clustering Method for Model-free Reinforcement Learning
An Adaptive Clustering Method for Model-free Reinforcement Learning Andreas Matt and Georg Regensburger Institute of Mathematics University of Innsbruck, Austria {andreas.matt, georg.regensburger}@uibk.ac.at
More informationRobotics. Path Planning. Marc Toussaint U Stuttgart
Robotics Path Planning Path finding vs. trajectory optimization, local vs. global, Dijkstra, Probabilistic Roadmaps, Rapidly Exploring Random Trees, non-holonomic systems, car system equation, path-finding
More information1 ** The performance objectives highlighted in italics have been identified as core to an Algebra II course.
Strand One: Number Sense and Operations Every student should understand and use all concepts and skills from the pervious grade levels. The standards are designed so that new learning builds on preceding
More informationMAE 598: Multi-Robot Systems Fall 2016
MAE 598: Multi-Robot Systems Fall 2016 Instructor: Spring Berman spring.berman@asu.edu Assistant Professor, Mechanical and Aerospace Engineering Autonomous Collective Systems Laboratory http://faculty.engineering.asu.edu/acs/
More informationCHALMERS, GÖTEBORGS UNIVERSITET. EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD
CHALMERS, GÖTEBORGS UNIVERSITET EXAM for ARTIFICIAL NEURAL NETWORKS COURSE CODES: FFR 135, FIM 72 GU, PhD Time: Place: Teachers: Allowed material: Not allowed: October 23, 217, at 8 3 12 3 Lindholmen-salar
More informationCS 5522: Artificial Intelligence II
CS 5522: Artificial Intelligence II Hidden Markov Models Instructor: Wei Xu Ohio State University [These slides were adapted from CS188 Intro to AI at UC Berkeley.] Pacman Sonar (P4) [Demo: Pacman Sonar
More informationLecture 15. Probabilistic Models on Graph
Lecture 15. Probabilistic Models on Graph Prof. Alan Yuille Spring 2014 1 Introduction We discuss how to define probabilistic models that use richly structured probability distributions and describe how
More informationRandom walks and anisotropic interpolation on graphs. Filip Malmberg
Random walks and anisotropic interpolation on graphs. Filip Malmberg Interpolation of missing data Assume that we have a graph where we have defined some (real) values for a subset of the nodes, and that
More informationTennessee s State Mathematics Standards - Algebra I
Domain Cluster Standards Scope and Clarifications Number and Quantity Quantities The Real (N Q) Number System (N-RN) Use properties of rational and irrational numbers Reason quantitatively and use units
More informationLecture 8: Boundary Integral Equations
CBMS Conference on Fast Direct Solvers Dartmouth College June 23 June 27, 2014 Lecture 8: Boundary Integral Equations Gunnar Martinsson The University of Colorado at Boulder Research support by: Consider
More informationCS 5522: Artificial Intelligence II
CS 5522: Artificial Intelligence II Hidden Markov Models Instructor: Alan Ritter Ohio State University [These slides were adapted from CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]
More informationDistributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract
More informationAlgebra II. A2.1.1 Recognize and graph various types of functions, including polynomial, rational, and algebraic functions.
Standard 1: Relations and Functions Students graph relations and functions and find zeros. They use function notation and combine functions by composition. They interpret functions in given situations.
More informationMATHEMATICS (MIDDLE GRADES AND EARLY SECONDARY)
MATHEMATICS (MIDDLE GRADES AND EARLY SECONDARY) l. Content Domain Mathematical Processes and Number Sense Range of Competencies Approximate Percentage of Test Score 0001 0003 24% ll. Patterns, Algebra,
More information13 Path Planning Cubic Path P 2 P 1. θ 2
13 Path Planning Path planning includes three tasks: 1 Defining a geometric curve for the end-effector between two points. 2 Defining a rotational motion between two orientations. 3 Defining a time function
More informationDistributed Optimization. Song Chong EE, KAIST
Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links
More informationOutlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)
Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous
More informationLet s now begin to formalize our analysis of sequential machines Powerful methods for designing machines for System control Pattern recognition Etc.
Finite State Machines Introduction Let s now begin to formalize our analysis of sequential machines Powerful methods for designing machines for System control Pattern recognition Etc. Such devices form
More informationProgressive Algebraic Soft-Decision Decoding of Reed-Solomon Codes
Progressive Algebraic Soft-Decision Decoding of Reed-Solomon Codes Li Chen ( 陈立 ), PhD, MIEEE Associate Professor, School of Information Science and Technology Sun Yat-sen University, China Joint work
More informationEAD 115. Numerical Solution of Engineering and Scientific Problems. David M. Rocke Department of Applied Science
EAD 115 Numerical Solution of Engineering and Scientific Problems David M. Rocke Department of Applied Science Multidimensional Unconstrained Optimization Suppose we have a function f() of more than one
More informationA graph contains a set of nodes (vertices) connected by links (edges or arcs)
BOLTZMANN MACHINES Generative Models Graphical Models A graph contains a set of nodes (vertices) connected by links (edges or arcs) In a probabilistic graphical model, each node represents a random variable,
More informationParticle Filters; Simultaneous Localization and Mapping (Intelligent Autonomous Robotics) Subramanian Ramamoorthy School of Informatics
Particle Filters; Simultaneous Localization and Mapping (Intelligent Autonomous Robotics) Subramanian Ramamoorthy School of Informatics Recap: State Estimation using Kalman Filter Project state and error
More informationMulticlass Classification-1
CS 446 Machine Learning Fall 2016 Oct 27, 2016 Multiclass Classification Professor: Dan Roth Scribe: C. Cheng Overview Binary to multiclass Multiclass SVM Constraint classification 1 Introduction Multiclass
More informationChapter 2 Direct Current Circuits
Chapter 2 Direct Current Circuits 2.1 Introduction Nowadays, our lives are increasingly dependent upon the availability of devices that make extensive use of electric circuits. The knowledge of the electrical
More informationSequential Monte Carlo in the machine learning toolbox
Sequential Monte Carlo in the machine learning toolbox Working with the trend of blending Thomas Schön Uppsala University Sweden. Symposium on Advances in Approximate Bayesian Inference (AABI) Montréal,
More informationPolicy Gradient Reinforcement Learning for Robotics
Policy Gradient Reinforcement Learning for Robotics Michael C. Koval mkoval@cs.rutgers.edu Michael L. Littman mlittman@cs.rutgers.edu May 9, 211 1 Introduction Learning in an environment with a continuous
More informationUse estimation strategies reasonably and fluently while integrating content from each of the other strands. PO 1. Recognize the limitations of
for Strand 1: Number and Operations Concept 1: Number Sense Understand and apply numbers, ways of representing numbers, and the relationships among numbers and different number systems. PO 1. Solve problems
More informationSequential Logic Optimization. Optimization in Context. Algorithmic Approach to State Minimization. Finite State Machine Optimization
Sequential Logic Optimization! State Minimization " Algorithms for State Minimization! State, Input, and Output Encodings " Minimize the Next State and Output logic Optimization in Context! Understand
More informationOnline Estimation of Discrete Densities using Classifier Chains
Online Estimation of Discrete Densities using Classifier Chains Michael Geilke 1 and Eibe Frank 2 and Stefan Kramer 1 1 Johannes Gutenberg-Universtität Mainz, Germany {geilke,kramer}@informatik.uni-mainz.de
More informationSubject: Optimal Control Assignment-1 (Related to Lecture notes 1-10)
Subject: Optimal Control Assignment- (Related to Lecture notes -). Design a oil mug, shown in fig., to hold as much oil possible. The height and radius of the mug should not be more than 6cm. The mug must
More informationFinal Exam, Machine Learning, Spring 2009
Name: Andrew ID: Final Exam, 10701 Machine Learning, Spring 2009 - The exam is open-book, open-notes, no electronics other than calculators. - The maximum possible score on this exam is 100. You have 3
More informationMatrix Assembly in FEA
Matrix Assembly in FEA 1 In Chapter 2, we spoke about how the global matrix equations are assembled in the finite element method. We now want to revisit that discussion and add some details. For example,
More informationBasics of reinforcement learning
Basics of reinforcement learning Lucian Buşoniu TMLSS, 20 July 2018 Main idea of reinforcement learning (RL) Learn a sequential decision policy to optimize the cumulative performance of an unknown system
More informationB Elements of Complex Analysis
Fourier Transform Methods in Finance By Umberto Cherubini Giovanni Della Lunga Sabrina Mulinacci Pietro Rossi Copyright 21 John Wiley & Sons Ltd B Elements of Complex Analysis B.1 COMPLEX NUMBERS The purpose
More informationmin f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;
Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many
More informationElectrical networks and Markov chains
A.A. Peters Electrical networks and Markov chains Classical results and beyond Masterthesis Date master exam: 04-07-016 Supervisors: Dr. L. Avena & Dr. S. Taati Mathematisch Instituut, Universiteit Leiden
More informationLOCAL NAVIGATION. Dynamic adaptation of global plan to local conditions A.K.A. local collision avoidance and pedestrian models
LOCAL NAVIGATION 1 LOCAL NAVIGATION Dynamic adaptation of global plan to local conditions A.K.A. local collision avoidance and pedestrian models 2 LOCAL NAVIGATION Why do it? Could we use global motion
More informationChapter 4 Statics and dynamics of rigid bodies
Chapter 4 Statics and dynamics of rigid bodies Bachelor Program in AUTOMATION ENGINEERING Prof. Rong-yong Zhao (zhaorongyong@tongji.edu.cn) First Semester,2014-2015 Content of chapter 4 4.1 Static equilibrium
More informationComputer Vision Group Prof. Daniel Cremers. 14. Sampling Methods
Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationState Space Compression with Predictive Representations
State Space Compression with Predictive Representations Abdeslam Boularias Laval University Quebec GK 7P4, Canada Masoumeh Izadi McGill University Montreal H3A A3, Canada Brahim Chaib-draa Laval University
More informationA new 9-point sixth-order accurate compact finite difference method for the Helmholtz equation
A new 9-point sixth-order accurate compact finite difference method for the Helmholtz equation Majid Nabavi, M. H. Kamran Siddiqui, Javad Dargahi Department of Mechanical and Industrial Engineering, Concordia
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationCSE 473: Artificial Intelligence
CSE 473: Artificial Intelligence Hidden Markov Models Dieter Fox --- University of Washington [Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials
More informationArizona Mathematics Standards Articulated by Grade Level (2008) for College Work Readiness (Grades 11 and 12)
Strand 1: Number and Operations Concept 1: Number Sense Understand and apply numbers, ways of representing numbers, and the relationships among numbers and different number systems. College Work Readiness
More informationPartially Observable Markov Decision Processes (POMDPs)
Partially Observable Markov Decision Processes (POMDPs) Sachin Patil Guest Lecture: CS287 Advanced Robotics Slides adapted from Pieter Abbeel, Alex Lee Outline Introduction to POMDPs Locally Optimal Solutions
More informationSpace-Variant Computer Vision: A Graph Theoretic Approach
p.1/65 Space-Variant Computer Vision: A Graph Theoretic Approach Leo Grady Cognitive and Neural Systems Boston University p.2/65 Outline of talk Space-variant vision - Why and how of graph theory Anisotropic
More information17 Solution of Nonlinear Systems
17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m
More informationChapter 1 Introduction
Chapter 1 Introduction 1.1 Introduction to Chapter This chapter starts by describing the problems addressed by the project. The aims and objectives of the research are outlined and novel ideas discovered
More informationZangwill s Global Convergence Theorem
Zangwill s Global Convergence Theorem A theory of global convergence has been given by Zangwill 1. This theory involves the notion of a set-valued mapping, or point-to-set mapping. Definition 1.1 Given
More informationAnnouncements. CS 188: Artificial Intelligence Fall Markov Models. Example: Markov Chain. Mini-Forward Algorithm. Example
CS 88: Artificial Intelligence Fall 29 Lecture 9: Hidden Markov Models /3/29 Announcements Written 3 is up! Due on /2 (i.e. under two weeks) Project 4 up very soon! Due on /9 (i.e. a little over two weeks)
More information
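The harmonic-function planner described above repeatedly applies a local averaging kernel over the occupancy grid until the field settles, with convergence conventionally monitored via the maximum field change between iterations. A minimal sketch of that relaxation step is below; all names (`relax_harmonic`, the grid/mask layout, obstacle potential 1 and goal potential 0) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relax_harmonic(grid, mask, tol=1e-9, max_sweeps=10000):
    """Jacobi relaxation toward a discrete harmonic function.

    grid: 2D float array of potentials; cells where mask is False
          (obstacles at 1, goal at 0) are held fixed as boundary values.
    mask: 2D bool array, True on free interior cells to be updated.
    Returns (field, sweeps_used, last_max_change).
    """
    change = 0.0
    for sweep in range(1, max_sweeps + 1):
        new = grid.copy()
        # Average of the four neighbours: the discrete Laplace kernel.
        new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                  grid[1:-1, :-2] + grid[1:-1, 2:])
        new[~mask] = grid[~mask]  # restore fixed boundary cells
        change = np.max(np.abs(new - grid))
        grid = new
        if change < tol:
            return grid, sweep, change
    return grid, max_sweeps, change

# Toy 8x8 world: outer wall held at potential 1, one goal cell at 0.
n = 8
grid = np.ones((n, n))
mask = np.zeros((n, n), dtype=bool)
mask[1:-1, 1:-1] = True
goal = (1, 1)
grid[goal] = 0.0
mask[goal] = False

field, sweeps, err = relax_harmonic(grid, mask)
# Gradient descent on `field` from any free cell reaches the goal;
# harmonic functions have no spurious local minima in free space.
```

Monitoring `change` per sweep is exactly the maximum-field-change criterion the abstract argues is less robust than escape-probability or energy-dissipation measures, so this sketch marks the baseline that those measures improve upon.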