5.3.2 Sampling Algorithm (Robot Motion)
5 Robot Motion (p. 122)

Figure 5.3 (a)-(c): The velocity motion model, for different noise parameter settings.

The function prob(x, b²) models the motion error. It computes the probability of its parameter x under a zero-centered random variable with variance b². Two possible implementations are shown in Table 5.2, for error variables with normal distribution and triangular distribution, respectively.

Figure 5.3 shows graphical examples of the velocity motion model, projected into x-y space. In all three cases, the robot sets the same translational and angular velocity. Figure 5.3a shows the resulting distribution with moderate error parameters α1 to α6. The distribution shown in Figure 5.3b is obtained with smaller angular error (parameters α3 and α4) but larger translational error (parameters α1 and α2). Figure 5.3c shows the distribution under large angular and small translational error.

5.3.2 Sampling Algorithm

For particle filters (cf. Chapter 4.3), it suffices to sample from the motion model p(x_t | u_t, x_{t-1}), instead of computing the posterior for arbitrary x_t, u_t, and x_{t-1}. Sampling from a conditional density is different than calculating the density: in sampling, one is given u_t and x_{t-1} and seeks to generate a random x_t drawn according to the motion model p(x_t | u_t, x_{t-1}). When calculating the density, one is also given x_t, generated through other means, and one seeks to compute the probability of x_t under p(x_t | u_t, x_{t-1}).

The algorithm sample_motion_model_velocity in Table 5.3 generates random samples from p(x_t | u_t, x_{t-1}) for a fixed control u_t and pose x_{t-1}. It accepts x_{t-1} and u_t as input and generates a random pose x_t according to the distribution p(x_t | u_t, x_{t-1}). Lines 2 through 4 perturb the commanded control parameters by noise, drawn from the error parameters of the kinematic motion model. The noise values are then used to generate the sample's
1:  Algorithm motion_model_velocity(x_t, u_t, x_{t-1}):
2:    μ = 1/2 · [(x − x′) cos θ + (y − y′) sin θ] / [(y − y′) cos θ − (x − x′) sin θ]
3:    x* = (x + x′)/2 + μ (y − y′)
4:    y* = (y + y′)/2 + μ (x′ − x)
5:    r* = sqrt((x − x*)² + (y − y*)²)
6:    Δθ = atan2(y′ − y*, x′ − x*) − atan2(y − y*, x − x*)
7:    v̂ = (Δθ / Δt) · r*
8:    ω̂ = Δθ / Δt
9:    γ̂ = (θ′ − θ)/Δt − ω̂
10:   return prob(v − v̂, α1 v² + α2 ω²) · prob(ω − ω̂, α3 v² + α4 ω²) · prob(γ̂, α5 v² + α6 ω²)

Table 5.1 Algorithm for computing p(x_t | u_t, x_{t-1}) based on velocity information. Here we assume x_{t-1} is represented by the vector (x y θ)ᵀ; x_t is represented by (x′ y′ θ′)ᵀ; and u_t is represented by the velocity vector (v ω)ᵀ. The function prob(a, b²) computes the probability of its argument a under a zero-centered distribution with variance b². It may be implemented using either of the algorithms in Table 5.2.

1:  Algorithm prob_normal_distribution(a, b²):
2:    return (1 / sqrt(2π b²)) · exp{−(1/2) a² / b²}

3:  Algorithm prob_triangular_distribution(a, b²):
4:    return max{0, 1/(sqrt(6) b) − |a| / (6 b²)}

Table 5.2 Algorithms for computing densities of a zero-centered normal distribution and a triangular distribution with variance b².
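Tables 5.1 and 5.2 translate almost line for line into code. The following Python sketch is my own transliteration (the tuple-based interface, parameter packing, and names are assumptions, not book code); it presumes ω ≠ 0 so that the circle-center construction is well defined:

```python
import math

def prob_normal(a, b2):
    # Density of a zero-centered normal with variance b2, evaluated at a (Table 5.2).
    return math.exp(-0.5 * a * a / b2) / math.sqrt(2.0 * math.pi * b2)

def prob_triangular(a, b2):
    # Density of a zero-centered triangular distribution with variance b2 (Table 5.2).
    b = math.sqrt(b2)
    return max(0.0, 1.0 / (math.sqrt(6) * b) - abs(a) / (6.0 * b2))

def motion_model_velocity(x_t, u_t, x_prev, alpha, dt, prob=prob_normal):
    # p(x_t | u_t, x_{t-1}) for the velocity model (Table 5.1).
    # x_prev = (x, y, theta), x_t = (x', y', theta'), u_t = (v, w),
    # alpha = (a1, ..., a6). Assumes a non-degenerate turn (w != 0).
    x, y, th = x_prev
    xp, yp, thp = x_t
    v, w = u_t
    a1, a2, a3, a4, a5, a6 = alpha
    # Circle center via the half-way / perpendicularity construction (lines 2-4).
    mu = 0.5 * ((x - xp) * math.cos(th) + (y - yp) * math.sin(th)) / \
         ((y - yp) * math.cos(th) - (x - xp) * math.sin(th))
    xs = 0.5 * (x + xp) + mu * (y - yp)
    ys = 0.5 * (y + yp) + mu * (xp - x)
    r = math.hypot(x - xs, y - ys)
    # Swept angle, then the controls that explain the observed displacement.
    dth = math.atan2(yp - ys, xp - xs) - math.atan2(y - ys, x - xs)
    v_hat = dth / dt * r
    w_hat = dth / dt
    g_hat = (thp - th) / dt - w_hat
    return (prob(v - v_hat, a1 * v * v + a2 * w * w)
            * prob(w - w_hat, a3 * v * v + a4 * w * w)
            * prob(g_hat, a5 * v * v + a6 * w * w))
```

Feeding in a pose produced by exact noise-free motion should yield a probability at the peak of all three error densities, which makes a convenient sanity check.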
1:  Algorithm sample_motion_model_velocity(u_t, x_{t-1}):
2:    v̂ = v + sample(α1 v² + α2 ω²)
3:    ω̂ = ω + sample(α3 v² + α4 ω²)
4:    γ̂ = sample(α5 v² + α6 ω²)
5:    x′ = x − (v̂/ω̂) sin θ + (v̂/ω̂) sin(θ + ω̂ Δt)
6:    y′ = y + (v̂/ω̂) cos θ − (v̂/ω̂) cos(θ + ω̂ Δt)
7:    θ′ = θ + ω̂ Δt + γ̂ Δt
8:    return x_t = (x′, y′, θ′)ᵀ

Table 5.3 Algorithm for sampling poses x_t = (x′ y′ θ′)ᵀ from a pose x_{t-1} = (x y θ)ᵀ and a control u_t = (v ω)ᵀ. Note that we are perturbing the final orientation by an additional random term, γ̂. The variables α1 through α6 are the parameters of the motion noise. The function sample(b²) generates a random sample from a zero-centered distribution with variance b². It may, for example, be implemented using the algorithms in Table 5.4.

1:  Algorithm sample_normal_distribution(b²):
2:    return (1/2) Σ_{i=1}^{12} rand(−b, b)

3:  Algorithm sample_triangular_distribution(b²):
4:    return (sqrt(6)/2) · [rand(−b, b) + rand(−b, b)]

Table 5.4 Algorithms for sampling from (approximate) normal and triangular distributions with zero mean and variance b²; see Winkler (1995: p. 293). The function rand(x, y) is assumed to be a pseudo-random number generator with uniform distribution in [x, y].
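Tables 5.3 and 5.4 can be sketched in Python as follows (a minimal transliteration; the interface and names are my own, and ω̂ ≠ 0 is assumed):

```python
import math
import random

def sample_normal(b2):
    # Table 5.4: approximate N(0, b2) as half the sum of 12 uniform draws
    # on [-b, b]; outputs are therefore confined to [-6b, 6b].
    b = math.sqrt(b2)
    return 0.5 * sum(random.uniform(-b, b) for _ in range(12))

def sample_triangular(b2):
    # Table 5.4: zero-mean triangular sample with variance b2.
    b = math.sqrt(b2)
    return math.sqrt(6.0) / 2.0 * (random.uniform(-b, b) + random.uniform(-b, b))

def sample_motion_model_velocity(u_t, x_prev, alpha, dt, sample=sample_normal):
    # Draw x_t ~ p(x_t | u_t, x_{t-1}) per Table 5.3; assumes w_hat != 0.
    v, w = u_t
    x, y, th = x_prev
    a1, a2, a3, a4, a5, a6 = alpha
    v_hat = v + sample(a1 * v * v + a2 * w * w)
    w_hat = w + sample(a3 * v * v + a4 * w * w)
    g_hat = sample(a5 * v * v + a6 * w * w)
    # Exact circular motion under the perturbed controls, plus the
    # additional final-rotation term g_hat.
    xp = x - v_hat / w_hat * math.sin(th) + v_hat / w_hat * math.sin(th + w_hat * dt)
    yp = y + v_hat / w_hat * math.cos(th) - v_hat / w_hat * math.cos(th + w_hat * dt)
    return (xp, yp, th + w_hat * dt + g_hat * dt)
```

With all noise parameters set to zero, the routine reduces to the deterministic circular-motion update, which makes it easy to test.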
Figure 5.4 (a)-(c): Sampling from the velocity motion model, using the same parameters as in Figure 5.3. Each diagram shows 500 samples.

new pose, in lines 5 through 7. Thus, the sampling procedure implements a simple physical robot motion model that incorporates control noise in its prediction, in just about the most straightforward way. Figure 5.4 illustrates the outcome of this sampling routine. It depicts 500 samples generated by sample_motion_model_velocity. The reader might want to compare this figure with the density depicted in Figure 5.3.

We note that in many cases, it is easier to sample x_t than to calculate the density of a given x_t. This is because samples require only a forward simulation of the physical motion model. Computing the probability of a hypothetical pose amounts to retro-guessing the error parameters, which requires us to calculate the inverse of the physical motion model. The fact that particle filters rely on sampling makes them specifically attractive from an implementation point of view.

5.3.3 Mathematical Derivation of the Velocity Motion Model

We will now derive the algorithms motion_model_velocity and sample_motion_model_velocity. As usual, the reader not interested in the mathematical details is invited to skip this section at first reading, and continue in Chapter 5.4 (page 132). The derivation begins with a generative model of robot motion, and then derives formulae for sampling and computing p(x_t | u_t, x_{t-1}) for arbitrary x_t, u_t, and x_{t-1}.

Exact Motion
Figure 5.5: Motion carried out by a noise-free robot moving with constant velocities v and ω, starting at (x y θ)ᵀ; the robot travels on a circle of radius r about the center ⟨x_c, y_c⟩.

Before turning to the probabilistic case, let us begin by stating the kinematics for an ideal, noise-free robot. Let u_t = (v ω)ᵀ denote the control at time t. If both velocities are kept at a fixed value for the entire time interval (t−1, t], the robot moves on a circle with radius

(5.5)  r = |v / ω|

This follows from the general relationship between the translational and rotational velocities v and ω for an arbitrary object moving on a circular trajectory with radius r:

(5.6)  v = ω · r

Equation (5.5) encompasses the case where the robot does not turn at all (i.e., ω = 0), in which case the robot moves on a straight line. A straight line corresponds to a circle with infinite radius, hence we note that r may be infinite.

Let x_{t-1} = (x, y, θ)ᵀ be the initial pose of the robot, and suppose we keep the velocity constant at (v ω)ᵀ for some time Δt. As one easily shows, the center of the circle is at

(5.7)  x_c = x − (v/ω) sin θ
(5.8)  y_c = y + (v/ω) cos θ
The variables (x_c y_c)ᵀ denote this coordinate. After Δt time of motion, our ideal robot will be at x_t = (x′, y′, θ′)ᵀ with

(5.9)  ( x′ )   ( x_c + (v/ω) sin(θ + ω Δt) )
       ( y′ ) = ( y_c − (v/ω) cos(θ + ω Δt) )
       ( θ′ )   ( θ + ω Δt )

                ( x )   ( −(v/ω) sin θ + (v/ω) sin(θ + ω Δt) )
              = ( y ) + (  (v/ω) cos θ − (v/ω) cos(θ + ω Δt) )
                ( θ )   (  ω Δt )

The derivation of this expression follows from simple trigonometry: after Δt units of time, the noise-free robot has progressed v · Δt along the circle, which caused its heading direction to turn by ω · Δt. At the same time, its x and y coordinates are given by the intersection of the circle about (x_c y_c)ᵀ and the ray starting at (x_c y_c)ᵀ at the angle perpendicular to the heading. The second transformation simply substitutes (5.7) and (5.8) into the resulting motion equations.

Of course, real robots cannot jump from one velocity to another and keep velocity constant in each time interval. To compute the kinematics with non-constant velocities, it is therefore common practice to use small values for Δt, and to approximate the actual velocity by a constant within each time interval. The (approximate) final pose is then obtained by concatenating the corresponding cyclic trajectories using the mathematical equations just stated.

Real Motion

In reality, robot motion is subject to noise. The actual velocities differ from the commanded ones (or measured ones, if the robot possesses a sensor for measuring velocity). We will model this difference by a zero-centered random variable with finite variance. More precisely, let us assume the actual velocities are given by

(5.10)  ( v̂ )   ( v )   ( ε_{α1 v² + α2 ω²} )
        ( ω̂ ) = ( ω ) + ( ε_{α3 v² + α4 ω²} )

Here ε_{b²} is a zero-mean error variable with variance b². Thus, the true velocity equals the commanded velocity plus some small, additive error (noise). In our model, the variance of the error grows with the squared commanded velocities. The parameters α1 to α4 (with αi ≥ 0 for i = 1, ..., 4) are robot-specific error parameters. They model the accuracy of the robot: the less accurate a robot, the larger these parameters.
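Before noise enters, the deterministic part (5.5)-(5.9) can be checked numerically: the update keeps the robot on the circle about (x_c, y_c) at radius v/ω, and the chord from start to end point bisects the heading change, a geometric fact that is behind the degeneracy discussed shortly. A small sketch (function names are mine):

```python
import math

def exact_motion(x, y, th, v, w, dt):
    # Noise-free pose after dt under constant velocities, eq. (5.9); w != 0.
    xp = x - v / w * math.sin(th) + v / w * math.sin(th + w * dt)
    yp = y + v / w * math.cos(th) - v / w * math.cos(th + w * dt)
    return xp, yp, th + w * dt

def circle_center(x, y, th, v, w):
    # Center of the circular trajectory, eqs. (5.7)-(5.8).
    return x - v / w * math.sin(th), y + v / w * math.cos(th)
```

Both the start and the end pose sit at distance v/ω from the center, and the chord bearing equals θ + ωΔt/2.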
Figure 5.6: Probability density functions with variance b²: (a) normal distribution, (b) triangular distribution.

Two common choices for the error ε_{b²} are the normal and the triangular distribution. The normal distribution with zero mean and variance b² is given by the density function

(5.11)  ε_{b²}(a) = (1 / sqrt(2π b²)) · exp{−(1/2) a² / b²}

Figure 5.6a shows the density function of a normal distribution with variance b². Normal distributions are commonly used to model noise in continuous stochastic processes. Their support, which is the set of points a with p(a) > 0, is ℝ.

The density of a triangular distribution with zero mean and variance b² is given by

(5.12)  ε_{b²}(a) = max{0, 1/(sqrt(6) b) − |a| / (6 b²)}

which is non-zero only in (−sqrt(6) b, sqrt(6) b). As Figure 5.6b suggests, the density resembles the shape of a symmetric triangle, hence the name.

A better model of the actual pose x_t = (x′ y′ θ′)ᵀ after executing the motion command u_t = (v ω)ᵀ at x_{t-1} = (x y θ)ᵀ is thus

(5.13)  ( x′ )   ( x )   ( −(v̂/ω̂) sin θ + (v̂/ω̂) sin(θ + ω̂ Δt) )
        ( y′ ) = ( y ) + (  (v̂/ω̂) cos θ − (v̂/ω̂) cos(θ + ω̂ Δt) )
        ( θ′ )   ( θ )   (  ω̂ Δt )

This equation is obtained by substituting the commanded velocity u_t = (v ω)ᵀ with the noisy motion (v̂ ω̂)ᵀ in (5.9). However, this model is still not very realistic, for reasons discussed in turn.
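The stated properties of (5.11) and (5.12), unit mass, variance b², and the bounded support of the triangular case, can be verified by numerical integration. This is a quick check of my own, not book code:

```python
import math

def normal_density(a, b2):
    # Eq. (5.11): zero-mean normal density with variance b2.
    return math.exp(-0.5 * a * a / b2) / math.sqrt(2.0 * math.pi * b2)

def triangular_density(a, b2):
    # Eq. (5.12): support is (-sqrt(6)*b, sqrt(6)*b).
    b = math.sqrt(b2)
    return max(0.0, 1.0 / (math.sqrt(6) * b) - abs(a) / (6.0 * b2))

def moments(density, b2, lo, hi, n=100000):
    # Midpoint-rule integration of total mass and second moment.
    h = (hi - lo) / n
    mass = var = 0.0
    for i in range(n):
        a = lo + (i + 0.5) * h
        p = density(a, b2)
        mass += p * h
        var += a * a * p * h
    return mass, var
```

For both densities the integrated mass should come out near 1 and the second moment near b².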
Final Orientation

The two equations given above exactly describe the final location of the robot, given that the robot actually moves on an exact circular trajectory with radius r = v̂/ω̂. While the radius of this circular segment and the distance traveled are influenced by the control noise, the very fact that the trajectory is circular is not. The assumption of circular motion leads to an important degeneracy. In particular, the support of the density p(x_t | u_t, x_{t-1}) is two-dimensional, within a three-dimensional embedding pose space. The fact that all posterior poses are located on a two-dimensional manifold within the three-dimensional pose space is a direct consequence of the fact that we used only two noise variables, one for v and one for ω. Unfortunately, this degeneracy has important ramifications when applying Bayes filters for state estimation.

In reality, any meaningful posterior distribution is of course not degenerate, and poses can be found within a three-dimensional space of variations in x, y, and θ. To generalize our motion model accordingly, we will assume that the robot performs a rotation γ̂ when it arrives at its final pose. Thus, instead of computing θ′ according to (5.13), we model the final orientation by

(5.14)  θ′ = θ + ω̂ Δt + γ̂ Δt

with

(5.15)  γ̂ = ε_{α5 v² + α6 ω²}

Here α5 and α6 are additional robot-specific parameters that determine the variance of the additional rotational noise. Thus, the resulting motion model is as follows:

(5.16)  ( x′ )   ( x )   ( −(v̂/ω̂) sin θ + (v̂/ω̂) sin(θ + ω̂ Δt) )
        ( y′ ) = ( y ) + (  (v̂/ω̂) cos θ − (v̂/ω̂) cos(θ + ω̂ Δt) )
        ( θ′ )   ( θ )   (  ω̂ Δt + γ̂ Δt )

Computation of p(x_t | u_t, x_{t-1})

The algorithm motion_model_velocity in Table 5.1 implements the computation of p(x_t | u_t, x_{t-1}) for given values of x_{t-1} = (x y θ)ᵀ, u_t = (v ω)ᵀ, and x_t = (x′ y′ θ′)ᵀ. The derivation of this algorithm is somewhat involved, as it effectively implements an inverse motion model. In particular, motion_model_velocity determines motion parameters û_t = (v̂ ω̂)ᵀ from the
poses x_{t-1} and x_t, along with an appropriate final rotation γ̂. Our derivation makes it obvious as to why a final rotation is needed: for almost all values of x_{t-1}, u_t, and x_t, the motion probability would simply be zero without allowing for a final rotation.

Let us calculate the probability p(x_t | u_t, x_{t-1}) of control action u_t = (v ω)ᵀ carrying the robot from the pose x_{t-1} = (x y θ)ᵀ to the pose x_t = (x′ y′ θ′)ᵀ within Δt time units. To do so, we will first determine the control û_t = (v̂ ω̂)ᵀ required to carry the robot from x_{t-1} to position (x′ y′), regardless of the robot's final orientation. Subsequently, we will determine the final rotation γ̂ necessary for the robot to attain the orientation θ′. Based on these calculations, we can then easily calculate the desired probability p(x_t | u_t, x_{t-1}).

The reader may recall that our model assumes that the robot travels with a fixed velocity during Δt, resulting in a circular trajectory. For a robot that moved from x_{t-1} = (x y θ)ᵀ to x_t = (x′ y′)ᵀ, the center of the circle is defined as (x* y*)ᵀ and given by

(5.17)  ( x* )   ( x )   ( −λ sin θ )   ( (x + x′)/2 + μ (y − y′) )
        ( y* ) = ( y ) + (  λ cos θ ) = ( (y + y′)/2 + μ (x′ − x) )

for some unknown λ, μ ∈ ℝ. The first equality results from the fact that the circle's center lies on the ray that starts at (x y)ᵀ and is orthogonal to the initial heading direction of the robot; the second is a straightforward constraint that the center of the circle lies on a ray that starts at the half-way point between (x y)ᵀ and (x′ y′)ᵀ and is orthogonal to the line between these coordinates.
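The half-way construction (5.17) can be checked against the closed-form center (5.7)-(5.8): for an endpoint generated by noise-free circular motion, the recovered center must coincide with the true one. A sketch using the solution for μ derived in (5.18) (helper name mine):

```python
import math

def center_from_poses(x, y, th, xp, yp):
    # Eqs. (5.17)-(5.19): center of the arc through (x, y) and (x', y')
    # whose tangent at (x, y) points along heading th.
    mu = 0.5 * ((x - xp) * math.cos(th) + (y - yp) * math.sin(th)) / \
         ((y - yp) * math.cos(th) - (x - xp) * math.sin(th))
    return 0.5 * (x + xp) + mu * (y - yp), 0.5 * (y + yp) + mu * (xp - x)
```

The check below generates the endpoint with the exact motion equations and compares the recovered center against (5.7)-(5.8).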
Usually, Equation (5.17) has a unique solution, except in the degenerate case of ω = 0, in which the center of the circle lies at infinity. As the reader might want to verify, the solution is given by

(5.18)  μ = 1/2 · [(x − x′) cos θ + (y − y′) sin θ] / [(y − y′) cos θ − (x − x′) sin θ]

and hence

(5.19)  ( x* )   ( (x + x′)/2 + 1/2 · [(x − x′) cos θ + (y − y′) sin θ] / [(y − y′) cos θ − (x − x′) sin θ] · (y − y′) )
        ( y* ) = ( (y + y′)/2 + 1/2 · [(x − x′) cos θ + (y − y′) sin θ] / [(y − y′) cos θ − (x − x′) sin θ] · (x′ − x) )

The radius of the circle is now given by the Euclidean distance

(5.20)  r* = sqrt((x − x*)² + (y − y*)²) = sqrt((x′ − x*)² + (y′ − y*)²)

Furthermore, we can now calculate the change of heading direction

(5.21)  Δθ = atan2(y′ − y*, x′ − x*) − atan2(y − y*, x − x*)
Here atan2 is the common extension of the arc tangent of y/x to all of ℝ² (most programming languages provide an implementation of this function):

(5.22)  atan2(y, x) =  atan(y/x)                    if x > 0
                       sign(y) (π − atan(|y/x|))    if x < 0
                       0                            if x = y = 0
                       sign(y) π/2                  if x = 0, y ≠ 0

Since we assume that the robot follows a circular trajectory, the translational distance between x_t and x_{t-1} along this circle is

(5.23)  dist = r* · Δθ

From dist and Δθ, it is now easy to compute the velocities v̂ and ω̂:

(5.24)  û_t = ( v̂ ) = Δt⁻¹ ( dist )
              ( ω̂ )        ( Δθ )

The rotational velocity γ̂ needed to achieve the final heading θ′ of the robot in (x′ y′) within Δt can be determined according to (5.14) as:

(5.25)  γ̂ = Δt⁻¹ (θ′ − θ) − ω̂

The motion error is the deviation of û_t and γ̂ from the commanded velocity u_t = (v ω)ᵀ and γ = 0, as defined in Equations (5.24) and (5.25):

(5.26)  v_err = v − v̂
(5.27)  ω_err = ω − ω̂
(5.28)  γ_err = γ̂

Under our error model, specified in Equations (5.10) and (5.15), these errors have the following probabilities:

(5.29)  ε_{α1 v² + α2 ω²}(v_err)
(5.30)  ε_{α3 v² + α4 ω²}(ω_err)
(5.31)  ε_{α5 v² + α6 ω²}(γ_err)

where ε_{b²} denotes a zero-mean error variable with variance b², as before. Since we assume independence between the different sources of error, the desired probability p(x_t | u_t, x_{t-1}) is the product of these individual error probabilities:

(5.32)  p(x_t | u_t, x_{t-1}) = ε_{α1 v² + α2 ω²}(v_err) · ε_{α3 v² + α4 ω²}(ω_err) · ε_{α5 v² + α6 ω²}(γ_err)
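The case distinction in (5.22) matches the four-quadrant arc tangent built into most languages, which is easy to cross-check (sign(y) is realized here with copysign; a sketch of my own):

```python
import math

def atan2_piecewise(y, x):
    # Eq. (5.22): four-quadrant arc tangent of y/x.
    if x > 0:
        return math.atan(y / x)
    if x < 0:
        return math.copysign(1.0, y) * (math.pi - math.atan(abs(y / x)))
    if y == 0:
        return 0.0
    return math.copysign(1.0, y) * math.pi / 2.0
```

In practice one simply calls the library atan2 directly; the sketch only confirms that (5.22) and the library agree.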
To see the correctness of the algorithm motion_model_velocity in Table 5.1, the reader may notice that this algorithm implements this expression. More specifically, lines 2 to 9 are equivalent to Equations (5.18), (5.19), (5.20), (5.21), (5.24), and (5.25). Line 10 implements (5.32), substituting the error terms as specified in Equations (5.29) to (5.31).

Sampling from p(x′ | u, x)

The sampling algorithm sample_motion_model_velocity in Table 5.3 implements a forward model, as discussed earlier in this section. Lines 5 through 7 correspond to Equation (5.16). The noisy values calculated in lines 2 through 4 correspond to Equations (5.10) and (5.15).

The algorithm sample_normal_distribution in Table 5.4 implements a common approximation to sampling from a normal distribution. This approximation exploits the central limit theorem, which states that any average of non-degenerate random variables converges to a normal distribution. By averaging 12 uniform distributions, sample_normal_distribution generates values that are approximately normally distributed, though technically the resulting values always lie in [−6b, 6b]. Finally, sample_triangular_distribution in Table 5.4 implements a sampler for triangular distributions.

5.4 Odometry Motion Model

The velocity motion model discussed thus far uses the robot's velocity to compute posteriors over poses. Alternatively, one might want to use the odometry measurements as the basis for calculating the robot's motion over time. Odometry is commonly obtained by integrating wheel encoder information; most commercial robots make such integrated pose estimation available in periodic time intervals (e.g., every tenth of a second). This leads to a second motion model discussed in this chapter, the odometry motion model. The odometry motion model uses odometry measurements in lieu of controls.

Practical experience suggests that odometry, while still erroneous, is usually more accurate than velocity.
Both suffer from drift and slippage, but velocity additionally suffers from the mismatch between the actual motion controllers and its (crude) mathematical model. However, odometry is only available in retrospect, after the robot has moved. This poses no problem for filter algorithms, such as the localization and mapping algorithms discussed in later chapters, but it makes this information unusable for accurate motion planning and control.

Figure 5.7: Odometry model: the robot motion in the time interval (t−1, t] is approximated by a rotation δ_rot1, followed by a translation δ_trans and a second rotation δ_rot2. The turns and translations are noisy.

5.4.1 Closed Form Calculation

Technically, odometric information consists of sensor measurements, not controls. To model odometry as measurements, the resulting Bayes filter would have to include the actual velocity as a state variable, which increases the dimension of the state space. To keep the state space small, it is therefore common to consider odometry data as if it were control signals. In this section, we will treat odometry measurements just like controls. The resulting model is at the core of many of today's best probabilistic robot systems.

Let us define the format of our control information. At time t, the correct pose of the robot is modeled by the random variable x_t. The robot odometry estimates this pose; however, due to drift and slippage there is no fixed coordinate transformation between the coordinates used by the robot's internal odometry and the physical world coordinates. In fact, knowing this transformation would solve the robot localization problem!

The odometry model uses the relative motion information, as measured by the robot's internal odometry. More specifically, in the time interval (t−1, t], the robot advances from a pose x_{t-1} to pose x_t. The odometry reports back to us a related advance from x̄_{t-1} = (x̄ ȳ θ̄)ᵀ to x̄_t = (x̄′ ȳ′ θ̄′)ᵀ. Here the
1:  Algorithm motion_model_odometry(x_t, u_t, x_{t-1}):
2:    δ_rot1 = atan2(ȳ′ − ȳ, x̄′ − x̄) − θ̄
3:    δ_trans = sqrt((x̄ − x̄′)² + (ȳ − ȳ′)²)
4:    δ_rot2 = θ̄′ − θ̄ − δ_rot1
5:    δ̂_rot1 = atan2(y′ − y, x′ − x) − θ
6:    δ̂_trans = sqrt((x − x′)² + (y − y′)²)
7:    δ̂_rot2 = θ′ − θ − δ̂_rot1
8:    p1 = prob(δ_rot1 − δ̂_rot1, α1 δ̂_rot1² + α2 δ̂_trans²)
9:    p2 = prob(δ_trans − δ̂_trans, α3 δ̂_trans² + α4 δ̂_rot1² + α4 δ̂_rot2²)
10:   p3 = prob(δ_rot2 − δ̂_rot2, α1 δ̂_rot2² + α2 δ̂_trans²)
11:   return p1 · p2 · p3

Table 5.5 Algorithm for computing p(x_t | u_t, x_{t-1}) based on odometry information. Here the control u_t is given by (x̄_{t-1} x̄_t)ᵀ, with x̄_{t-1} = (x̄ ȳ θ̄) and x̄_t = (x̄′ ȳ′ θ̄′).

bar indicates that these are odometry measurements embedded in a robot-internal coordinate frame whose relation to the global world coordinates is unknown. The key insight for utilizing this information in state estimation is that the relative difference between x̄_{t-1} and x̄_t, under an appropriate definition of the term "difference," is a good estimator for the difference of the true poses x_{t-1} and x_t. The motion information u_t is, thus, given by the pair

(5.33)  u_t = ( x̄_{t-1} )
              ( x̄_t )

To extract relative odometry, u_t is transformed into a sequence of three steps: a rotation, followed by a straight-line motion (translation), and another rotation. Figure 5.7 illustrates this decomposition: the initial turn is called δ_rot1, the translation δ_trans, and the second rotation δ_rot2. As the reader easily verifies, each pair of positions (s̄ s̄′) has a unique parameter vector
Figure 5.8 (a)-(c): The odometry motion model, for different noise parameter settings.

(δ_rot1 δ_trans δ_rot2)ᵀ, and these parameters are sufficient to reconstruct the relative motion between s̄ and s̄′. Thus, δ_rot1, δ_trans, and δ_rot2 together form a sufficient statistic of the relative motion encoded by the odometry.

The probabilistic motion model assumes that these three parameters are corrupted by independent noise. The reader may note that odometry motion uses one more parameter than the velocity vector defined in the previous section, for which reason we will not face the same degeneracy that led to the definition of a final rotation.

Before delving into mathematical detail, let us state the basic algorithm for calculating this density in closed form. Table 5.5 depicts the algorithm for computing p(x_t | u_t, x_{t-1}) from odometry. This algorithm accepts as input an initial pose x_{t-1}, a pair of poses u_t = (x̄_{t-1} x̄_t)ᵀ obtained from the robot's odometry, and a hypothesized final pose x_t. It outputs the numerical probability p(x_t | u_t, x_{t-1}).

Lines 2 to 4 in Table 5.5 recover relative motion parameters (δ_rot1 δ_trans δ_rot2)ᵀ from the odometry readings. As before, they implement an inverse motion model. The corresponding relative motion parameters (δ̂_rot1 δ̂_trans δ̂_rot2)ᵀ for the given poses x_{t-1} and x_t are calculated in lines 5 through 7 of this algorithm. Lines 8 to 10 compute the error probabilities for the individual motion parameters. As above, the function prob(a, b²) implements an error distribution over a with zero mean and variance b². Here the implementer must observe that all angular differences must lie in [−π, π]; hence the outcome of δ_rot2 − δ̂_rot2 has to be truncated correspondingly, a common error that tends to be difficult to debug. Finally, line 11 returns the combined error probability, obtained by multiplying the individual error probabilities p1, p2, and p3. This last step
1:  Algorithm sample_motion_model_odometry(u_t, x_{t-1}):
2:    δ_rot1 = atan2(ȳ′ − ȳ, x̄′ − x̄) − θ̄
3:    δ_trans = sqrt((x̄ − x̄′)² + (ȳ − ȳ′)²)
4:    δ_rot2 = θ̄′ − θ̄ − δ_rot1
5:    δ̂_rot1 = δ_rot1 − sample(α1 δ_rot1² + α2 δ_trans²)
6:    δ̂_trans = δ_trans − sample(α3 δ_trans² + α4 δ_rot1² + α4 δ_rot2²)
7:    δ̂_rot2 = δ_rot2 − sample(α1 δ_rot2² + α2 δ_trans²)
8:    x′ = x + δ̂_trans cos(θ + δ̂_rot1)
9:    y′ = y + δ̂_trans sin(θ + δ̂_rot1)
10:   θ′ = θ + δ̂_rot1 + δ̂_rot2
11:   return x_t = (x′, y′, θ′)ᵀ

Table 5.6 Algorithm for sampling from p(x_t | u_t, x_{t-1}) based on odometry information. Here the pose at time t−1 is represented by x_{t-1} = (x y θ)ᵀ. The control is a pair of pose estimates obtained by the robot's odometer, u_t = (x̄_{t-1} x̄_t)ᵀ, with x̄_{t-1} = (x̄ ȳ θ̄) and x̄_t = (x̄′ ȳ′ θ̄′).

assumes independence between the different error sources. The variables α1 through α4 are robot-specific parameters that specify the noise in robot motion.

Figure 5.8 shows examples of our odometry motion model for different values of the error parameters α1 to α4. The distribution in Figure 5.8a is a typical one, whereas the ones shown in Figures 5.8b and 5.8c indicate unusually large translational and rotational errors, respectively. The reader may want to carefully compare these diagrams with those in Figure 5.3 on page 122. The smaller the time between two consecutive measurements, the more similar those different motion models. Thus, if the belief is updated frequently, e.g., every tenth of a second for a conventional indoor robot, the difference between these motion models is not very significant.
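Table 5.6 translates directly into code. In the following sketch (the tuple interface is my own assumption), setting all noise parameters to zero makes the function deterministically replay the odometry's relative motion from x_{t-1}, which gives a convenient correctness check:

```python
import math
import random

def sample_normal(b2):
    # Table 5.4: approximate N(0, b2) sample from 12 uniform draws.
    b = math.sqrt(b2)
    return 0.5 * sum(random.uniform(-b, b) for _ in range(12))

def sample_motion_model_odometry(u_t, x_prev, alpha):
    # Draw x_t ~ p(x_t | u_t, x_{t-1}) per Table 5.6.
    # u_t = (odom_prev, odom_now); all poses are (x, y, theta) triples.
    (xb, yb, tb), (xb2, yb2, tb2) = u_t
    x, y, th = x_prev
    a1, a2, a3, a4 = alpha
    # Relative motion reported by the odometry (lines 2-4).
    d_rot1 = math.atan2(yb2 - yb, xb2 - xb) - tb
    d_trans = math.hypot(xb2 - xb, yb2 - yb)
    d_rot2 = tb2 - tb - d_rot1
    # Perturb it (lines 5-7), then replay it from x_prev (lines 8-10).
    h_rot1 = d_rot1 - sample_normal(a1 * d_rot1 ** 2 + a2 * d_trans ** 2)
    h_trans = d_trans - sample_normal(a3 * d_trans ** 2 + a4 * d_rot1 ** 2 + a4 * d_rot2 ** 2)
    h_rot2 = d_rot2 - sample_normal(a1 * d_rot2 ** 2 + a2 * d_trans ** 2)
    return (x + h_trans * math.cos(th + h_rot1),
            y + h_trans * math.sin(th + h_rot1),
            th + h_rot1 + h_rot2)
```

Note that the relative motion is frame-independent, so the odometry poses and the state poses may live in entirely different coordinate systems, exactly as the text describes.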
Figure 5.9 (a)-(c): Sampling from the odometry motion model, using the same parameters as in Figure 5.8. Each diagram shows 500 samples.

5.4.2 Sampling Algorithm

If particle filters are used for localization, we would also like to have an algorithm for sampling from p(x_t | u_t, x_{t-1}). Recall that particle filters (Chapter 4.3) require samples of p(x_t | u_t, x_{t-1}), rather than a closed-form expression for computing p(x_t | u_t, x_{t-1}) for any x_{t-1}, u_t, and x_t. The algorithm sample_motion_model_odometry, shown in Table 5.6, implements the sampling approach. It accepts an initial pose x_{t-1} and an odometry reading u_t as input, and outputs a random x_t distributed according to p(x_t | u_t, x_{t-1}). It differs from the previous algorithm in that it randomly guesses a pose x_t (lines 5-10), instead of computing the probability of a given x_t. As before, the sampling algorithm sample_motion_model_odometry is somewhat easier to implement than the closed-form algorithm motion_model_odometry, since it side-steps the need for an inverse model.

Figure 5.9 shows examples of sample sets generated by sample_motion_model_odometry, using the same parameters as in the model shown in Figure 5.8. Figure 5.10 illustrates the motion model in action by superimposing sample sets from multiple time steps. This data has been generated using the motion update equations of the algorithm particle_filter (Table 4.3), assuming the robot's odometry follows the path indicated by the solid line. The figure illustrates how the uncertainty grows as the robot moves: the samples are spread across an increasingly large space.

5.4.3 Mathematical Derivation of the Odometry Motion Model

The derivation of the algorithms is relatively straightforward, and once again may be skipped at first reading. To derive a probabilistic motion model using
Figure 5.10: Sampling approximation of the position belief for a non-sensing robot, over a path roughly 10 meters long from the start location. The solid line displays the actions, and the samples represent the robot's belief at different points in time.

odometry, we recall that the relative difference between any two poses is represented by a concatenation of three basic motions: a rotation, a straight-line motion (translation), and another rotation. The following equations show how to calculate the values of the two rotations and the translation from the odometry reading u_t = (x̄_{t-1} x̄_t)ᵀ, with x̄_{t-1} = (x̄ ȳ θ̄) and x̄_t = (x̄′ ȳ′ θ̄′):

(5.34)  δ_rot1 = atan2(ȳ′ − ȳ, x̄′ − x̄) − θ̄
(5.35)  δ_trans = sqrt((x̄ − x̄′)² + (ȳ − ȳ′)²)
(5.36)  δ_rot2 = θ̄′ − θ̄ − δ_rot1

To model the motion error, we assume that the "true" values of the rotation and translation are obtained from the measured ones by subtracting independent noise ε_{b²} with zero mean and variance b²:

(5.37)  δ̂_rot1 = δ_rot1 − ε_{α1 δ_rot1² + α2 δ_trans²}
(5.38)  δ̂_trans = δ_trans − ε_{α3 δ_trans² + α4 δ_rot1² + α4 δ_rot2²}
(5.39)  δ̂_rot2 = δ_rot2 − ε_{α1 δ_rot2² + α2 δ_trans²}

As in the previous section, ε_{b²} is a zero-mean noise variable with variance b². The parameters α1 to α4 are robot-specific error parameters, which specify the error accrued with motion.

Consequently, the true position, x_t, is obtained from x_{t-1} by an initial rotation with angle δ̂_rot1, followed by a translation with distance δ̂_trans, followed by another rotation with angle δ̂_rot2. Thus,

(5.40)  ( x′ )   ( x )   ( δ̂_trans cos(θ + δ̂_rot1) )
        ( y′ ) = ( y ) + ( δ̂_trans sin(θ + δ̂_rot1) )
        ( θ′ )   ( θ )   ( δ̂_rot1 + δ̂_rot2 )

Notice that algorithm sample_motion_model_odometry implements Equations (5.34) through (5.40).

The algorithm motion_model_odometry is obtained by noticing that lines 5-7 compute the motion parameters δ̂_rot1, δ̂_trans, and δ̂_rot2 for the hypothesized pose x_t, relative to the initial pose x_{t-1}. The differences of both,

(5.41)  δ_rot1 − δ̂_rot1
(5.42)  δ_trans − δ̂_trans
(5.43)  δ_rot2 − δ̂_rot2

are the errors in odometry, assuming of course that x_t is the true final pose. The error model (5.37) to (5.39) implies that the probabilities of these errors are given by

(5.44)  p1 = ε_{α1 δ_rot1² + α2 δ_trans²}(δ_rot1 − δ̂_rot1)
(5.45)  p2 = ε_{α3 δ_trans² + α4 δ_rot1² + α4 δ_rot2²}(δ_trans − δ̂_trans)
(5.46)  p3 = ε_{α1 δ_rot2² + α2 δ_trans²}(δ_rot2 − δ̂_rot2)

with the distributions ε defined as above. These probabilities are computed in lines 8-10 of our algorithm motion_model_odometry, and since the errors are assumed to be independent, the joint error probability is the product p1 · p2 · p3 (cf. line 11).
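Putting (5.34)-(5.46) together gives the closed-form model of Table 5.5. The sketch below (the interface is mine, and a zero-mean normal is used for prob) also wraps each angular difference into [−π, π], the truncation step the text singles out as a common source of bugs:

```python
import math

def wrap(a):
    # Wrap an angle difference into (-pi, pi].
    a = math.fmod(a, 2.0 * math.pi)
    if a > math.pi:
        a -= 2.0 * math.pi
    elif a <= -math.pi:
        a += 2.0 * math.pi
    return a

def prob_normal(a, b2):
    # Zero-centered normal density with variance b2 (Table 5.2).
    return math.exp(-0.5 * a * a / b2) / math.sqrt(2.0 * math.pi * b2)

def motion_model_odometry(x_t, u_t, x_prev, alpha):
    # p(x_t | u_t, x_{t-1}) per Table 5.5; u_t = (odom_prev, odom_now).
    (xb, yb, tb), (xb2, yb2, tb2) = u_t
    x, y, th = x_prev
    x2, y2, th2 = x_t
    a1, a2, a3, a4 = alpha
    d_rot1 = math.atan2(yb2 - yb, xb2 - xb) - tb        # eqs. (5.34)-(5.36)
    d_trans = math.hypot(xb2 - xb, yb2 - yb)
    d_rot2 = tb2 - tb - d_rot1
    h_rot1 = math.atan2(y2 - y, x2 - x) - th            # lines 5-7
    h_trans = math.hypot(x2 - x, y2 - y)
    h_rot2 = th2 - th - h_rot1
    p1 = prob_normal(wrap(d_rot1 - h_rot1), a1 * h_rot1 ** 2 + a2 * h_trans ** 2)
    p2 = prob_normal(d_trans - h_trans, a3 * h_trans ** 2 + a4 * h_rot1 ** 2 + a4 * h_rot2 ** 2)
    p3 = prob_normal(wrap(d_rot2 - h_rot2), a1 * h_rot2 ** 2 + a2 * h_trans ** 2)
    return p1 * p2 * p3
```

When the hypothesized pose reproduces the odometry's relative motion exactly, every error term is zero and the returned probability sits at the peak of all three densities.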
More informationEXPERIMENTAL ANALYSIS OF COLLECTIVE CIRCULAR MOTION FOR MULTI-VEHICLE SYSTEMS. N. Ceccarelli, M. Di Marco, A. Garulli, A.
EXPERIMENTAL ANALYSIS OF COLLECTIVE CIRCULAR MOTION FOR MULTI-VEHICLE SYSTEMS N. Ceccarelli, M. Di Marco, A. Garulli, A. Giannitrapani DII - Dipartimento di Ingegneria dell Informazione Università di Siena
More informationGaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008
Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:
More informationCourse Notes Math 275 Boise State University. Shari Ultman
Course Notes Math 275 Boise State University Shari Ultman Fall 2017 Contents 1 Vectors 1 1.1 Introduction to 3-Space & Vectors.............. 3 1.2 Working With Vectors.................... 7 1.3 Introduction
More informationParticle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
Particle Filters Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Motivation For continuous spaces: often no analytical formulas for Bayes filter updates
More informationE190Q Lecture 10 Autonomous Robot Navigation
E190Q Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2015 1 Figures courtesy of Siegwart & Nourbakhsh Kilobots 2 https://www.youtube.com/watch?v=2ialuwgafd0 Control Structures
More informationSheet 5 solutions. August 15, 2017
Sheet 5 solutions August 15, 2017 Exercise 1: Sampling Implement three functions in Python which generate samples of a normal distribution N (µ, σ 2 ). The input parameters of these functions should be
More informationVlad Estivill-Castro. Robots for People --- A project for intelligent integrated systems
1 Vlad Estivill-Castro Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Probabilistic Map-based Localization (Kalman Filter) Chapter 5 (textbook) Based on textbook
More informationBayes Filter Reminder. Kalman Filter Localization. Properties of Gaussians. Gaussians. Prediction. Correction. σ 2. Univariate. 1 2πσ e.
Kalman Filter Localization Bayes Filter Reminder Prediction Correction Gaussians p(x) ~ N(µ,σ 2 ) : Properties of Gaussians Univariate p(x) = 1 1 2πσ e 2 (x µ) 2 σ 2 µ Univariate -σ σ Multivariate µ Multivariate
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationCSE 473: Artificial Intelligence
CSE 473: Artificial Intelligence Hidden Markov Models Dieter Fox --- University of Washington [Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials
More information2D Image Processing (Extended) Kalman and particle filter
2D Image Processing (Extended) Kalman and particle filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz
More informationProbabilistic Robotics
University of Rome La Sapienza Master in Artificial Intelligence and Robotics Probabilistic Robotics Prof. Giorgio Grisetti Course web site: http://www.dis.uniroma1.it/~grisetti/teaching/probabilistic_ro
More informationarxiv: v4 [quant-ph] 26 Oct 2017
Hidden Variable Theory of a Single World from Many-Worlds Quantum Mechanics Don Weingarten donweingarten@hotmail.com We propose a method for finding an initial state vector which by ordinary Hamiltonian
More informationStatistical Techniques in Robotics (16-831, F12) Lecture#21 (Monday November 12) Gaussian Processes
Statistical Techniques in Robotics (16-831, F12) Lecture#21 (Monday November 12) Gaussian Processes Lecturer: Drew Bagnell Scribe: Venkatraman Narayanan 1, M. Koval and P. Parashar 1 Applications of Gaussian
More informationENGI Linear Approximation (2) Page Linear Approximation to a System of Non-Linear ODEs (2)
ENGI 940 4.06 - Linear Approximation () Page 4. 4.06 Linear Approximation to a System of Non-Linear ODEs () From sections 4.0 and 4.0, the non-linear system dx dy = x = P( x, y), = y = Q( x, y) () with
More informationExtremal Trajectories for Bounded Velocity Differential Drive Robots
Extremal Trajectories for Bounded Velocity Differential Drive Robots Devin J. Balkcom Matthew T. Mason Robotics Institute and Computer Science Department Carnegie Mellon University Pittsburgh PA 523 Abstract
More informationStress, Strain, Mohr s Circle
Stress, Strain, Mohr s Circle The fundamental quantities in solid mechanics are stresses and strains. In accordance with the continuum mechanics assumption, the molecular structure of materials is neglected
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationL03. PROBABILITY REVIEW II COVARIANCE PROJECTION. NA568 Mobile Robotics: Methods & Algorithms
L03. PROBABILITY REVIEW II COVARIANCE PROJECTION NA568 Mobile Robotics: Methods & Algorithms Today s Agenda State Representation and Uncertainty Multivariate Gaussian Covariance Projection Probabilistic
More informationAOL Spring Wavefront Sensing. Figure 1: Principle of operation of the Shack-Hartmann wavefront sensor
AOL Spring Wavefront Sensing The Shack Hartmann Wavefront Sensor system provides accurate, high-speed measurements of the wavefront shape and intensity distribution of beams by analyzing the location and
More informationComputer Vision Group Prof. Daniel Cremers. 11. Sampling Methods
Prof. Daniel Cremers 11. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationManipulators. Robotics. Outline. Non-holonomic robots. Sensors. Mobile Robots
Manipulators P obotics Configuration of robot specified by 6 numbers 6 degrees of freedom (DOF) 6 is the minimum number required to position end-effector arbitrarily. For dynamical systems, add velocity
More informationParticle Filtering a brief introductory tutorial. Frank Wood Gatsby, August 2007
Particle Filtering a brief introductory tutorial Frank Wood Gatsby, August 2007 Problem: Target Tracking A ballistic projectile has been launched in our direction and may or may not land near enough to
More informationVlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems
1 Vlad Estivill-Castro (2016) Robots for People --- A project for intelligent integrated systems V. Estivill-Castro 2 Uncertainty representation Localization Chapter 5 (textbook) What is the course about?
More informationSampling Based Filters
Statistical Techniques in Robotics (16-831, F12) Lecture#03 (Monday September 5) Sampling Based Filters Lecturer: Drew Bagnell Scribes: M. Taylor, M. Shomin 1 1 Beam-based Sensor Model Many common sensors
More informationComparision of Probabilistic Navigation methods for a Swimming Robot
Comparision of Probabilistic Navigation methods for a Swimming Robot Anwar Ahmad Quraishi Semester Project, Autumn 2013 Supervisor: Yannic Morel BioRobotics Laboratory Headed by Prof. Aue Jan Ijspeert
More informationControl of Mobile Robots
Control of Mobile Robots Regulation and trajectory tracking Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Organization and
More information01 Probability Theory and Statistics Review
NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement
More informationUniversität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Bayesian Learning. Tobias Scheffer, Niels Landwehr
Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Bayesian Learning Tobias Scheffer, Niels Landwehr Remember: Normal Distribution Distribution over x. Density function with parameters
More informationIntro Vectors 2D implicit curves 2D parametric curves. Graphics 2011/2012, 4th quarter. Lecture 2: vectors, curves, and surfaces
Lecture 2, curves, and surfaces Organizational remarks Tutorials: Tutorial 1 will be online later today TA sessions for questions start next week Practicals: Exams: Make sure to find a team partner very
More informationEstimation of Poses with Particle Filters
Esimaion of Poses wih Paricle Filers Dr.-Ing. Bernd Ludwig Chair for Arificial Inelligence Deparmen of Compuer Science Friedrich-Alexander-Universiä Erlangen-Nürnberg 12/05/2008 Dr.-Ing. Bernd Ludwig (FAU
More informationEEE 187: Take Home Test #2
EEE 187: Take Home Test #2 Date: 11/30/2017 Due : 12/06/2017 at 5pm 1 Please read. Two versions of the exam are proposed. You need to solve one only. Version A: Four problems, Python is required for some
More informationRecursive Estimation
Recursive Estimation Raffaello D Andrea Spring 08 Problem Set : Bayes Theorem and Bayesian Tracking Last updated: March, 08 Notes: Notation: Unless otherwise noted, x, y, and z denote random variables,
More informationMiscellaneous. Regarding reading materials. Again, ask questions (if you have) and ask them earlier
Miscellaneous Regarding reading materials Reading materials will be provided as needed If no assigned reading, it means I think the material from class is sufficient Should be enough for you to do your
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationReview: Statistical Model
Review: Statistical Model { f θ :θ Ω} A statistical model for some data is a set of distributions, one of which corresponds to the true unknown distribution that produced the data. The statistical model
More information(arrows denote positive direction)
12 Chapter 12 12.1 3-dimensional Coordinate System The 3-dimensional coordinate system we use are coordinates on R 3. The coordinate is presented as a triple of numbers: (a,b,c). In the Cartesian coordinate
More informationThe Effect of Stale Ranging Data on Indoor 2-D Passive Localization
The Effect of Stale Ranging Data on Indoor 2-D Passive Localization Chen Xia and Lance C. Pérez Department of Electrical Engineering University of Nebraska-Lincoln, USA chenxia@mariner.unl.edu lperez@unl.edu
More informationSensors Fusion for Mobile Robotics localization. M. De Cecco - Robotics Perception and Action
Sensors Fusion for Mobile Robotics localization 1 Until now we ve presented the main principles and features of incremental and absolute (environment referred localization systems) could you summarize
More informationComputer Vision Group Prof. Daniel Cremers. 14. Sampling Methods
Prof. Daniel Cremers 14. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationAlgebraic. techniques1
techniques Algebraic An electrician, a bank worker, a plumber and so on all have tools of their trade. Without these tools, and a good working knowledge of how to use them, it would be impossible for them
More informationRobot Localization and Kalman Filters
Robot Localization and Kalman Filters Rudy Negenborn rudy@negenborn.net August 26, 2003 Outline Robot Localization Probabilistic Localization Kalman Filters Kalman Localization Kalman Localization with
More informationLecture Note 8: Inverse Kinematics
ECE5463: Introduction to Robotics Lecture Note 8: Inverse Kinematics Prof. Wei Zhang Department of Electrical and Computer Engineering Ohio State University Columbus, Ohio, USA Spring 2018 Lecture 8 (ECE5463
More informationM2A2 Problem Sheet 3 - Hamiltonian Mechanics
MA Problem Sheet 3 - Hamiltonian Mechanics. The particle in a cone. A particle slides under gravity, inside a smooth circular cone with a vertical axis, z = k x + y. Write down its Lagrangian in a) Cartesian,
More informationCIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions
CIS 390 Fall 2016 Robotics: Planning and Perception Final Review Questions December 14, 2016 Questions Throughout the following questions we will assume that x t is the state vector at time t, z t is the
More informationIntro Vectors 2D implicit curves 2D parametric curves. Graphics 2012/2013, 4th quarter. Lecture 2: vectors, curves, and surfaces
Lecture 2, curves, and surfaces Organizational remarks Tutorials: TA sessions for tutorial 1 start today Tutorial 2 will go online after lecture 3 Practicals: Make sure to find a team partner very soon
More informationRobot Localisation. Henrik I. Christensen. January 12, 2007
Robot Henrik I. Robotics and Intelligent Machines @ GT College of Computing Georgia Institute of Technology Atlanta, GA hic@cc.gatech.edu January 12, 2007 The Robot Structure Outline 1 2 3 4 Sum of 5 6
More informationParticle lter for mobile robot tracking and localisation
Particle lter for mobile robot tracking and localisation Tinne De Laet K.U.Leuven, Dept. Werktuigkunde 19 oktober 2005 Particle lter 1 Overview Goal Introduction Particle lter Simulations Particle lter
More informationSLAM for Ship Hull Inspection using Exactly Sparse Extended Information Filters
SLAM for Ship Hull Inspection using Exactly Sparse Extended Information Filters Matthew Walter 1,2, Franz Hover 1, & John Leonard 1,2 Massachusetts Institute of Technology 1 Department of Mechanical Engineering
More informationProbability theory basics
Probability theory basics Michael Franke Basics of probability theory: axiomatic definition, interpretation, joint distributions, marginalization, conditional probability & Bayes rule. Random variables:
More informationLearning Gaussian Process Models from Uncertain Data
Learning Gaussian Process Models from Uncertain Data Patrick Dallaire, Camille Besse, and Brahim Chaib-draa DAMAS Laboratory, Computer Science & Software Engineering Department, Laval University, Canada
More informationStatistical Techniques in Robotics (16-831, F12) Lecture#20 (Monday November 12) Gaussian Processes
Statistical Techniques in Robotics (6-83, F) Lecture# (Monday November ) Gaussian Processes Lecturer: Drew Bagnell Scribe: Venkatraman Narayanan Applications of Gaussian Processes (a) Inverse Kinematics
More informationẋ = f(x, y), ẏ = g(x, y), (x, y) D, can only have periodic solutions if (f,g) changes sign in D or if (f,g)=0in D.
4 Periodic Solutions We have shown that in the case of an autonomous equation the periodic solutions correspond with closed orbits in phase-space. Autonomous two-dimensional systems with phase-space R
More informationik () uk () Today s menu Last lecture Some definitions Repeatability of sensing elements
Last lecture Overview of the elements of measurement systems. Sensing elements. Signal conditioning elements. Signal processing elements. Data presentation elements. Static characteristics of measurement
More informationMathematical Formulation of Our Example
Mathematical Formulation of Our Example We define two binary random variables: open and, where is light on or light off. Our question is: What is? Computer Vision 1 Combining Evidence Suppose our robot
More informationChapter 13: Trigonometry Unit 1
Chapter 13: Trigonometry Unit 1 Lesson 1: Radian Measure Lesson 2: Coterminal Angles Lesson 3: Reference Angles Lesson 4: The Unit Circle Lesson 5: Trig Exact Values Lesson 6: Trig Exact Values, Radian
More informationRobotics. Lecture 4: Probabilistic Robotics. See course website for up to date information.
Robotics Lecture 4: Probabilistic Robotics See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review: Sensors
More informationData assimilation in high dimensions
Data assimilation in high dimensions David Kelly Kody Law Andy Majda Andrew Stuart Xin Tong Courant Institute New York University New York NY www.dtbkelly.com February 3, 2016 DPMMS, University of Cambridge
More informationNotes on multivariable calculus
Notes on multivariable calculus Jonathan Wise February 2, 2010 1 Review of trigonometry Trigonometry is essentially the study of the relationship between polar coordinates and Cartesian coordinates in
More informationRobot Control Basics CS 685
Robot Control Basics CS 685 Control basics Use some concepts from control theory to understand and learn how to control robots Control Theory general field studies control and understanding of behavior
More informationData assimilation in high dimensions
Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February
More informationData Fusion Kalman Filtering Self Localization
Data Fusion Kalman Filtering Self Localization Armando Jorge Sousa http://www.fe.up.pt/asousa asousa@fe.up.pt Faculty of Engineering, University of Porto, Portugal Department of Electrical and Computer
More informationLabor-Supply Shifts and Economic Fluctuations. Technical Appendix
Labor-Supply Shifts and Economic Fluctuations Technical Appendix Yongsung Chang Department of Economics University of Pennsylvania Frank Schorfheide Department of Economics University of Pennsylvania January
More informationMachine Learning. Bayesian Regression & Classification. Marc Toussaint U Stuttgart
Machine Learning Bayesian Regression & Classification learning as inference, Bayesian Kernel Ridge regression & Gaussian Processes, Bayesian Kernel Logistic Regression & GP classification, Bayesian Neural
More informationState Estimation of Linear and Nonlinear Dynamic Systems
State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of
More informationPILCO: A Model-Based and Data-Efficient Approach to Policy Search
PILCO: A Model-Based and Data-Efficient Approach to Policy Search (M.P. Deisenroth and C.E. Rasmussen) CSC2541 November 4, 2016 PILCO Graphical Model PILCO Probabilistic Inference for Learning COntrol
More informationREVIEW - Vectors. Vectors. Vector Algebra. Multiplication by a scalar
J. Peraire Dynamics 16.07 Fall 2004 Version 1.1 REVIEW - Vectors By using vectors and defining appropriate operations between them, physical laws can often be written in a simple form. Since we will making
More informationGaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012
Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature
More informationSequential Monte Carlo and Particle Filtering. Frank Wood Gatsby, November 2007
Sequential Monte Carlo and Particle Filtering Frank Wood Gatsby, November 2007 Importance Sampling Recall: Let s say that we want to compute some expectation (integral) E p [f] = p(x)f(x)dx and we remember
More information8 Example 1: The van der Pol oscillator (Strogatz Chapter 7)
8 Example 1: The van der Pol oscillator (Strogatz Chapter 7) So far we have seen some different possibilities of what can happen in two-dimensional systems (local and global attractors and bifurcations)
More informationScalar & Vector tutorial
Scalar & Vector tutorial scalar vector only magnitude, no direction both magnitude and direction 1-dimensional measurement of quantity not 1-dimensional time, mass, volume, speed temperature and so on
More informationWeek 3: Wheeled Kinematics AMR - Autonomous Mobile Robots
Week 3: Wheeled Kinematics AMR - Paul Furgale Margarita Chli, Marco Hutter, Martin Rufli, Davide Scaramuzza, Roland Siegwart Wheeled Kinematics 1 AMRx Flipped Classroom A Matlab exercise is coming later
More informationTEST CODE: MMA (Objective type) 2015 SYLLABUS
TEST CODE: MMA (Objective type) 2015 SYLLABUS Analytical Reasoning Algebra Arithmetic, geometric and harmonic progression. Continued fractions. Elementary combinatorics: Permutations and combinations,
More informationIntroduction. Chapter Points, Vectors and Coordinate Systems
Chapter 1 Introduction Computer aided geometric design (CAGD) concerns itself with the mathematical description of shape for use in computer graphics, manufacturing, or analysis. It draws upon the fields
More informationVectors & Coordinate Systems
Vectors & Coordinate Systems Antoine Lesage Landry and Francis Dawson September 7, 2017 Contents 1 Motivations & Definition 3 1.1 Scalar field.............................................. 3 1.2 Vector
More informationCBE 6333, R. Levicky 1. Orthogonal Curvilinear Coordinates
CBE 6333, R. Levicky 1 Orthogonal Curvilinear Coordinates Introduction. Rectangular Cartesian coordinates are convenient when solving problems in which the geometry of a problem is well described by the
More informationKinematics for a Three Wheeled Mobile Robot
Kinematics for a Three Wheeled Mobile Robot Randal W. Beard Updated: March 13, 214 1 Reference Frames and 2D Rotations î 1 y î 2 y w 1 y w w 2 y î 2 x w 2 x w 1 x î 1 x Figure 1: The vector w can be expressed
More informationCS491/691: Introduction to Aerial Robotics
CS491/691: Introduction to Aerial Robotics Topic: State Estimation Dr. Kostas Alexis (CSE) World state (or system state) Belief state: Our belief/estimate of the world state World state: Real state of
More informationSampling Based Filters
Statistical Techniques in Robotics (16-831, F11) Lecture#03 (Wednesday September 7) Sampling Based Filters Lecturer: Drew Bagnell Scribe:Nate Wood 1 1 Monte Carlo Method As discussed in previous lectures,
More informationwith Application to Autonomous Vehicles
Nonlinear with Application to Autonomous Vehicles (Ph.D. Candidate) C. Silvestre (Supervisor) P. Oliveira (Co-supervisor) Institute for s and Robotics Instituto Superior Técnico Portugal January 2010 Presentation
More information