Robotics. Dynamics. Marc Toussaint U Stuttgart

Robotics: Dynamics
1D point mass, damping & oscillation, PID, dynamics of mechanical systems, Euler-Lagrange equation, Newton-Euler recursion, general robot dynamics, joint space control, reference trajectory following, operational space control
Marc Toussaint, U Stuttgart

Kinematic vs. dynamic control:
- Kinematic: instantly change joint velocities q̇: δq_t ≐ J♯ (y* − φ(q_t)). Accounts for kinematic coupling of joints, but ignores inertia, forces, torques. Gears, stiff: all of industrial robots.
- Dynamic: instantly change joint torques u: u ≐ ? Accounts for dynamic coupling of joints and full Newtonian physics. Future robots, compliant, few research robots. 2/36

When velocities cannot be changed/set arbitrarily
Examples:
- An airplane flying: you cannot command it to hold still in the air, or to move straight up.
- A car: you cannot command it to move sideways.
- Your arm: you cannot command it to throw a ball with arbitrary speed (force limits).
- A torque-controlled robot: you cannot command it to instantly change velocity (infinite acceleration/torque).
What all examples have in common: one can set controls u_t (the airplane's control stick, the car's steering wheel, your muscle activations, the torque/voltage/current sent to a robot's motors), but these controls only indirectly influence the dynamics of the state, x_{t+1} = f(x_t, u_t). 3/36

Dynamics
The dynamics of a system describe how the controls u_t influence the change of state of the system: x_{t+1} = f(x_t, u_t). The notation x_t refers to the dynamic state of the system, e.g., joint positions and velocities x_t = (q_t, q̇_t). f is an arbitrary function, often smooth. 4/36

Outline
We start by discussing a 1D point mass, for three reasons:
- It is the most basic force-controlled system with inertia.
- We can introduce and understand PID control.
- The behavior of a point mass under PID control is a reference that we can also follow with arbitrary dynamic robots (if the dynamics are known).
We then discuss how to compute the dynamics of general robotic systems:
- Euler-Lagrange equations
- Newton-Euler recursion
Finally, we derive the dynamic equivalent of inverse kinematics: operational space control. 5/36

PID and a 1D point mass 6/36

The dynamics of a 1D point mass
Start with the simplest possible example: a 1D point mass (no gravity, no friction, just a single mass).
The state x(t) = (q(t), q̇(t)) is described by: position q(t) ∈ R and velocity q̇(t) ∈ R.
The control u(t) is the force we apply to the mass point.
The system dynamics are: q̈(t) = u(t)/m. 7/36
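
A minimal sketch of simulating these dynamics, assuming a semi-implicit Euler integrator; the mass, time step, and control law below are illustrative assumptions, not values from the slides:

```python
# Minimal sketch: semi-implicit Euler integration of a 1D point mass, q_ddot = u/m.
def simulate_point_mass(u_of, q0=0.0, qd0=0.0, m=1.0, tau=0.01, T=1000):
    q, qd = q0, qd0
    trajectory = [(0.0, q, qd)]
    for t in range(T):
        u = u_of(q, qd)          # control as a function of the current state
        qdd = u / m              # point-mass dynamics
        qd += tau * qdd          # integrate velocity
        q  += tau * qd           # integrate position (semi-implicit Euler)
        trajectory.append(((t + 1) * tau, q, qd))
    return trajectory

# Example: constant force of 1 N on a unit mass starting at rest.
traj = simulate_point_mass(lambda q, qd: 1.0)
```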

1D point mass: proportional feedback
Assume the current position is q. The goal is to move it to the position q*. What can we do?
Idea 1: Always pull the mass towards the goal q*: u = K_p (q* − q). 8/36

1D point mass: proportional feedback
What's the effect of this control law?
m q̈ = u = K_p (q* − q)
q = q(t) is a function of time; this is a second-order differential equation.
Solution: assume q(t) = a + b e^{ωt} (a non-imaginary alternative would be q(t) = a + b e^{λt} cos(ωt)):
m b ω² e^{ωt} = K_p q* − K_p a − K_p b e^{ωt}
(m b ω² + K_p b) e^{ωt} = K_p (q* − a)
⇒ (m b ω² + K_p b) = 0  and  (q* − a) = 0
⇒ ω = i √(K_p/m)
q(t) = q* + b e^{i √(K_p/m) t}
This is an oscillation around q* with amplitude b = q(0) − q* and frequency √(K_p/m)! 9/36

1D point mass: proportional feedback
m q̈ = u = K_p (q* − q)
q(t) = q* + b e^{i √(K_p/m) t}
Oscillation around q* with amplitude b = q(0) − q* and frequency √(K_p/m).
[Plot of cos(x): the undamped oscillation of q(t) around q*] 10/36
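
A quick numeric check of this result: the following sketch simulates pure proportional feedback and estimates the oscillation period from zero crossings; the mass, gain, and integration settings are illustrative assumptions:

```python
import math

# Simulate m*q_ddot = Kp*(q_star - q) and estimate the oscillation period
# from successive downward zero crossings; compare to 2*pi*sqrt(m/Kp).
m, Kp, q_star, tau = 1.0, 25.0, 0.0, 1e-4
q, qd = 1.0, 0.0
crossings = []
for t in range(200000):
    u = Kp * (q_star - q)
    qd += tau * u / m
    q_new = q + tau * qd
    if q > 0.0 >= q_new:                  # downward zero crossing
        crossings.append(t * tau)
    q = q_new

period = crossings[1] - crossings[0]
print(period, 2 * math.pi * math.sqrt(m / Kp))   # both close to ~1.257 s
```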

1D point mass: derivative feedback
Idea 2: Pull less when we are already heading in the right direction; damp the system:
u = K_p (q* − q) + K_d (q̇* − q̇)
q̇* is a desired goal velocity. For simplicity we set q̇* = 0 in the following. 11/36

1D point mass: derivative feedback
What's the effect of this control law?
m q̈ = u = K_p (q* − q) + K_d (0 − q̇)
Solution: again assume q(t) = a + b e^{ωt}:
m b ω² e^{ωt} = K_p q* − K_p a − K_p b e^{ωt} − K_d b ω e^{ωt}
(m b ω² + K_d b ω + K_p b) e^{ωt} = K_p (q* − a)
⇒ (m ω² + K_d ω + K_p) = 0  and  (q* − a) = 0
⇒ ω = (−K_d ± √(K_d² − 4 m K_p)) / (2m)
q(t) = q* + b e^{ωt}
The term −K_d/(2m) in ω is real ⇒ exponential decay (damping). 12/36

1D point mass: derivative feedback
q(t) = q* + b e^{ωt},  ω = (−K_d ± √(K_d² − 4 m K_p)) / (2m)
Effect of the second term √(K_d² − 4 m K_p)/(2m) in ω:
- K_d² < 4 m K_p: ω has an imaginary part ⇒ oscillating with frequency √(K_p/m − K_d²/(4m²)):
  q(t) = q* + b e^{−K_d/(2m) t} e^{i √(K_p/m − K_d²/(4m²)) t}
- K_d² > 4 m K_p: ω is real ⇒ strongly damped
- K_d² = 4 m K_p: the second term is zero ⇒ only exponential decay. 13/36
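
A small sketch that simulates the damped point mass in the three regimes; the gains and integration settings are illustrative assumptions, not values from the slides:

```python
import math

# PD control of a 1D point mass, m*q_ddot = Kp*(q_star - q) - Kd*q_dot,
# integrated with semi-implicit Euler; gains chosen to show the three regimes.
def simulate_pd(Kp, Kd, m=1.0, q0=1.0, q_star=0.0, tau=0.001, T=5000):
    q, qd = q0, 0.0
    for _ in range(T):
        u = Kp * (q_star - q) + Kd * (0.0 - qd)
        qd += tau * u / m
        q  += tau * qd
    return q

m, Kp = 1.0, 100.0
Kd_crit = 2.0 * math.sqrt(m * Kp)        # Kd^2 = 4*m*Kp: critically damped
for name, Kd in [("under-damped", 0.3 * Kd_crit),
                 ("critically damped", Kd_crit),
                 ("over-damped", 3.0 * Kd_crit)]:
    print(f"{name:18s} Kd={Kd:6.2f}  q(T)={simulate_pd(Kp, Kd):+.4f}")
```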

1D point mass: derivative feedback
[Illustration from O. Brock's lecture] 14/36

1D point mass: derivative feedback
Alternative parameterization: instead of the gains K_p and K_d, it is sometimes more intuitive to set
- the wave length λ = 1/ω₀ = 1/√(K_p/m),  i.e.  K_p = m/λ²
- the damping ratio ξ = K_d / √(4 m K_p) = λ K_d / (2m),  i.e.  K_d = 2 m ξ / λ
ξ > 1: over-damped;  ξ = 1: critically damped;  ξ < 1: oscillatory-damped
q(t) = q* + b e^{−ξ t/λ} e^{i √(1 − ξ²) t/λ} 15/36
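
A short sketch of this reparameterization, converting a chosen wave length and damping ratio into PD gains; the example numbers are assumptions:

```python
import math

# Convert the (wave length, damping ratio) parameterization into PD gains,
# following Kp = m/lambda^2 and Kd = 2*m*xi/lambda from the slide.
def gains_from_lambda_xi(lam, xi, m=1.0):
    Kp = m / lam**2
    Kd = 2.0 * m * xi / lam
    return Kp, Kd

# Example: a unit mass that should settle on a ~0.1 s time scale, critically damped.
Kp, Kd = gains_from_lambda_xi(lam=0.1, xi=1.0, m=1.0)
print(Kp, Kd)   # 100.0, 20.0 -- and indeed Kd^2 = 4*m*Kp here
```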

1D point mass: integral feedback
Idea 3: Pull harder if the position error has accumulated in the past:
u = K_p (q* − q) + K_d (q̇* − q̇) + K_i ∫₀ᵗ (q*(s) − q(s)) ds
This is not a linear ODE w.r.t. x = (q, q̇). However, when we extend the state to x = (q, q̇, e) we have the ODE
q̇ = q̇
q̈ = u/m = K_p/m (q* − q) + K_d/m (q̇* − q̇) + K_i/m e
ė = q* − q
(no explicit discussion here) 16/36

1D point mass: PID control
u = K_p (q* − q) + K_d (q̇* − q̇) + K_i ∫₀ᵗ (q*(s) − q(s)) ds
PID control:
- Proportional control ("position control"): f ∝ K_p (q* − q)
- Derivative control ("damping"): f ∝ K_d (q̇* − q̇)  (q̇* = 0 ⇒ pure damping)
- Integral control (compensates "steady-state error"): f ∝ K_i ∫₀ᵗ (q*(s) − q(s)) ds 17/36
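
A sketch of a discrete-time PID controller for the point mass, keeping the accumulated error e as an explicit extra state as in the extended-state ODE above; gains and step size are illustrative assumptions:

```python
# Discrete-time PID control of the 1D point mass; the integral term is kept
# as an explicit error-accumulator state e.
def simulate_pid(Kp, Kd, Ki, q_star=1.0, m=1.0, tau=0.001, T=5000):
    q, qd, e = 0.0, 0.0, 0.0
    for _ in range(T):
        u = Kp * (q_star - q) + Kd * (0.0 - qd) + Ki * e
        qd += tau * u / m
        q  += tau * qd
        e  += tau * (q_star - q)     # e(t): integral of the position error
    return q

print(simulate_pid(Kp=100.0, Kd=20.0, Ki=50.0))   # approaches q_star = 1.0
```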

Controlling a 1D point mass: lessons learnt
- Proportional and derivative feedback (PD control) are like adding a spring and a damper to the point mass.
- PD control is a linear control law (q, q̇) ↦ u = K_p (q* − q) + K_d (q̇* − q̇) (linear in the dynamic system state x = (q, q̇)).
- With such linear control laws we can design approach trajectories (by tuning the gains), but there is no optimality principle behind such motions. 18/36

Dynamics of mechanical systems 19/36

Two ways to derive dynamics equations for mechanical systems
- The Euler-Lagrange equation
  d/dt ∂L/∂q̇ − ∂L/∂q = u
  Used when you want to derive analytic equations of motion ("on paper").
- The Newton-Euler recursion (and related algorithms)
  f_i = m v̇_i,  u_i = I_i ẇ_i + w_i × I_i w_i
  Algorithms that propagate forces through a kinematic tree and numerically compute the inverse dynamics u = NE(q, q̇, q̈) or the forward dynamics q̈ = f(q, q̇, u). 20/36

The Euler-Lagrange equation
d/dt ∂L/∂q̇ − ∂L/∂q = u
- L(q, q̇) is called the Lagrangian and is defined as L = T − U, where T = kinetic energy and U = potential energy.
- q is called a generalized coordinate: any coordinates such that (q, q̇) describes the state of the system. Joint angles in our case.
- u are the external (generalized) forces. 21/36

The Euler-Lagrange equation
How is this typically done?
First, describe the kinematics and Jacobians for every link i:
(q, q̇) ↦ {T_{W→i}(q), v_i, w_i}
Recall T_{W→i}(q) = T_{W→A} T_{A→A'}(q) T_{A'→B} T_{B→B'}(q) ⋯. Further, we know that a link's velocity v_i = J_i q̇ can be described via its position Jacobian; similarly, the link's angular velocity w_i = J_i^w q̇ is linear in q̇.
Second, formulate the kinetic energy
T = Σ_i ½ m_i v_i² + ½ w_iᵀ I_i w_i = Σ_i ½ q̇ᵀ M_i q̇,  M_i = [J_i; J_i^w]ᵀ diag(m_i I₃, I_i) [J_i; J_i^w],
where I_i = R_i Ī_i R_iᵀ and Ī_i is the inertia tensor in link coordinates.
Third, formulate the potential energies (typically independent of q̇):
U = Σ_i g m_i height(i)
Fourth, compute the partial derivatives analytically to get something like
u = d/dt ∂L/∂q̇ − ∂L/∂q = M q̈ + (Ṁ q̇ − ∂T/∂q) + ∂U/∂q
where u is the control, M q̈ the inertia term, Ṁ q̇ − ∂T/∂q the Coriolis term, and ∂U/∂q the gravity term; this relates the accelerations q̈ to the forces. 22/36

Example: a pendulum
Generalized coordinate: the angle q = (θ)
Kinematics:
- velocity of the mass: v = (l θ̇ cos θ, 0, l θ̇ sin θ)
- angular velocity of the mass: w = (0, θ̇, 0)
Energies:
T = ½ m v² + ½ wᵀ I w = ½ (m l² + I₂) θ̇²,  U = −m g l cos θ
Euler-Lagrange equation:
u = d/dt ∂L/∂θ̇ − ∂L/∂θ = d/dt (m l² + I₂) θ̇ + m g l sin θ = (m l² + I₂) θ̈ + m g l sin θ 23/36
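
As a sanity check, the same equation of motion can be derived symbolically. The following sketch uses sympy (an assumption, not part of the lecture) and neglects the link inertia I₂ for brevity:

```python
import sympy as sp

# Symbolic Euler-Lagrange derivation for the point-mass pendulum (I_2 neglected).
t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')(t)

# Kinetic and potential energy of the pendulum bob
T = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2
U = -m * g * l * sp.cos(theta)
L = T - U

# Euler-Lagrange: u = d/dt dL/dtheta_dot - dL/dtheta
u = sp.diff(sp.diff(L, sp.diff(theta, t)), t) - sp.diff(L, theta)
print(sp.simplify(u))   # expect m*l**2*theta'' + m*g*l*sin(theta)
```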

Newton-Euler recursion
An algorithm that computes the inverse dynamics u = NE(q, q̇, q̈*) by recursively computing the force balance at each joint:
- Newton's equation expresses the force acting at the center of mass of an accelerated body: f_i = m v̇_i
- Euler's equation expresses the torque (= control!) acting on a rigid body given an angular velocity and angular acceleration: u_i = I_i ẇ_i + w_i × I_i w_i
Forward recursion (≈ kinematics): compute the (angular) velocities (v_i, w_i) and accelerations (v̇_i, ẇ_i) for every link (via forward propagation; see the geometry notes for details).
Backward recursion: for the leaf links we now know the desired accelerations q̈* and can compute the necessary joint torques; recurse backwards. 24/36

Numeric algorithms for forward and inverse dynamics
- Newton-Euler recursion: a very fast (O(n)) method to compute the inverse dynamics u = NE(q, q̇, q̈*).
  Note that we can use this algorithm to also compute
  - the gravity terms: u = NE(q, 0, 0) = G(q)
  - the Coriolis terms: u = NE(q, q̇, 0) = C(q, q̇) q̇
  - columns of the inertia matrix: u = NE(q, 0, e_i) = M(q) e_i
- Articulated-Body-Dynamics: a fast (O(n)) method to compute the forward dynamics q̈ = f(q, q̇, u). 25/36
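
A sketch of how such an inverse-dynamics routine can be used to assemble the terms of the manipulator equation; `inverse_dynamics` is an assumed black-box u = NE(q, q̇, q̈) that includes gravity, which is why G is subtracted below:

```python
import numpy as np

def dynamics_terms(inverse_dynamics, q, qdot):
    """Assemble G(q), the Coriolis forces, and M(q) from a black-box
    inverse-dynamics routine u = NE(q, qdot, qddot) that includes gravity."""
    n = len(q)
    G = inverse_dynamics(q, np.zeros(n), np.zeros(n))          # gravity terms
    coriolis = inverse_dynamics(q, qdot, np.zeros(n)) - G      # C(q, qdot) qdot
    M = np.column_stack([
        inverse_dynamics(q, np.zeros(n), e) - G                # i-th column of M(q)
        for e in np.eye(n)
    ])
    return M, coriolis, G
```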

Some last practical comments
[demo]
- Use energy conservation to assess the accuracy of a physical simulation.
- Physical simulation engines (developed for games): ODE (Open Dynamics Engine), Bullet (originally focused on collision only), PhysX (Nvidia).
- Differences of these engines to Lagrange, NE or ABD:
  - Game engines can model much more: contacts, tissues, particles, fog, etc. (The way they model contacts looks OK but is somewhat fictional.)
  - On kinematic trees, NE or ABD are much more precise than game engines.
  - Game engines do not provide inverse dynamics u = NE(q, q̇, q̈).
- Proper modelling of contacts is really, really hard. 26/36

Dynamic control of a robot 27/36

- We previously learnt the effect of PID control on a 1D point mass.
- Robots are not a 1D point mass. Neither is each joint a 1D point mass.
- Applying separate PD control in each joint neglects the force coupling. (A poor solution: apply very high gains separately in each joint ⇒ make the joints stiff, as with gears.)
- However, knowing the robot dynamics, we can transfer our understanding of PID control of a point mass to general systems. 28/36

General robot dynamics
Let (q, q̇) be the dynamic state and u ∈ Rⁿ the controls (typically the joint torques in each motor) of a robot. Robot dynamics can generally be written as
M(q) q̈ + C(q, q̇) q̇ + G(q) = u
- M(q) ∈ R^{n×n} is the positive-definite inertia matrix (can be inverted ⇒ forward simulation of dynamics)
- C(q, q̇) q̇ ∈ Rⁿ are the centripetal and Coriolis forces
- G(q) ∈ Rⁿ are the gravitational forces
- u are the joint torques
(cf. the Euler-Lagrange equation on slide 22)
We often write more compactly: M(q) q̈ + F(q, q̇) = u. 29/36

Controlling a general robot
From now on we just assume that we have algorithms to efficiently compute M(q) and F(q, q̇) for any (q, q̇).
- Inverse dynamics: if we know the desired q̈* for each joint, u = M(q) q̈* + F(q, q̇) gives the necessary torques.
- Forward dynamics: if we know which torques u we apply, use q̈ = M(q)⁻¹ (u − F(q, q̇)) to simulate the dynamics of the system (e.g., using Runge-Kutta). 30/36
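
A sketch of such a forward simulation for the pendulum example above, assuming the pendulum terms derived earlier (with the link inertia neglected) and a classical Runge-Kutta (RK4) step; all numeric values are illustrative:

```python
import numpy as np

# Forward dynamics q_ddot = M(q)^-1 (u - F(q, qdot)) for the pendulum example,
# with M(q) = m*l^2 and F(q, qdot) = m*g*l*sin(q); integrated with RK4.
m, l, g = 1.0, 1.0, 9.81
M = lambda q: m * l**2
F = lambda q, qd: m * g * l * np.sin(q)

def xdot(x, u):
    q, qd = x
    return np.array([qd, (u - F(q, qd)) / M(q)])

def rk4_step(x, u, tau):
    k1 = xdot(x, u)
    k2 = xdot(x + 0.5 * tau * k1, u)
    k3 = xdot(x + 0.5 * tau * k2, u)
    k4 = xdot(x + tau * k3, u)
    return x + tau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([0.5, 0.0])           # initial angle 0.5 rad, at rest
for _ in range(1000):              # simulate 1 s with tau = 1 ms, zero torque
    x = rk4_step(x, u=0.0, tau=0.001)
print(x)
```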

Controlling a general robot: joint space approach
Where could we get the desired q̈* from?
Assume we have a nice smooth reference trajectory q^ref_{0:T} (generated with some motion profile or alike); we could at each t read off the desired acceleration as
q̈^ref_t := 1/τ [(q_{t+1} − q_t)/τ − (q_t − q_{t−1})/τ] = (q_{t−1} + q_{t+1} − 2 q_t)/τ²
However, tiny errors in acceleration will accumulate greatly over time! This is unstable!
Instead, choose a desired acceleration q̈*_t that implies a PD-like behavior around the reference trajectory:
q̈*_t = q̈^ref_t + K_p (q^ref_t − q_t) + K_d (q̇^ref_t − q̇_t)
This is a standard and very convenient heuristic to track a reference trajectory when the robot dynamics are known: all joints will exactly behave like a 1D point particle around the reference trajectory! 31/36
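
A sketch of this inverse-dynamics (computed-torque) tracking law, assuming black-box routines M(q) and F(q, q̇) and an already-sampled reference trajectory; the gains are illustrative assumptions:

```python
import numpy as np

def tracking_torque(q, qdot, q_ref, qdot_ref, qddot_ref, M, F, Kp=100.0, Kd=20.0):
    """Computed-torque control: impose PD-like behavior around the reference,
    then map the desired acceleration to torques via inverse dynamics
    u = M(q) qddot_des + F(q, qdot)."""
    qddot_des = qddot_ref + Kp * (q_ref - q) + Kd * (qdot_ref - qdot)
    return M(q) @ qddot_des + F(q, qdot)
```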

Controlling a robot: operational space approach
Recall the inverse kinematics problem: we know the desired step δy* (or velocity ẏ*) of the endeffector. Which step δq (or velocities q̇) should we make in the joints?
The equivalent dynamic problem: we know the desired acceleration ÿ* of the endeffector. What controls u should we apply? 32/36

Operational space control
Inverse kinematics:
q* = argmin_q ||φ(q) − y*||²_C + ||q − q₀||²_W
Operational space control (one might call it "inverse task space dynamics"):
u* = argmin_u ||φ̈(q) − ÿ*||²_C + ||u||²_H 33/36

Operational space control
We can derive the optimum perfectly analogously to inverse kinematics:
- We identify the minimum of a locally squared potential, using the local linearization (and approximating J̈ ≈ 0).
- We get
ÿ = d/dt (J q̇ + J̇ q) ≈ J q̈ + 2 J̇ q̇ = J M⁻¹ (u − F) + 2 J̇ q̇
u* = T♯ (ÿ* − 2 J̇ q̇ + T F)
with T = J M⁻¹,  T♯ = (Tᵀ C T + H)⁻¹ Tᵀ C  (for C → ∞: T♯ = H⁻¹ Tᵀ (T H⁻¹ Tᵀ)⁻¹) 34/36

Controlling a robot: operational space approach
Where could we get the desired ÿ* from?
- A reference trajectory y^ref_{0:T} in operational space
- PD-like behavior in each operational space:
ÿ*_t = ÿ^ref_t + K_p (y^ref_t − y_t) + K_d (ẏ^ref_t − ẏ_t)
[Illustration from O. Brock's lecture]
Operational space control: let the system behave as if we could directly apply a 1D point-mass behavior to the endeffector. 35/36
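
A sketch of the resulting control law in numpy, assuming black-box routines for the task Jacobian J and its time derivative J̇ as well as the dynamics terms M and F; the regularization H and the C → ∞ pseudo-inverse follow the formula two slides above, and all names are illustrative assumptions:

```python
import numpy as np

def opspace_control(q, qdot, yddot_star, J, Jdot, M, F, H):
    """Operational space control u = T# (yddot* - 2 Jdot qdot + T F),
    with T = J M^-1 and T# = H^-1 T^T (T H^-1 T^T)^-1 (the C -> infinity case)."""
    Minv = np.linalg.inv(M(q))
    T = J(q) @ Minv
    Hinv = np.linalg.inv(H)
    T_sharp = Hinv @ T.T @ np.linalg.inv(T @ Hinv @ T.T)
    return T_sharp @ (yddot_star - 2.0 * Jdot(q, qdot) @ qdot + T @ F(q, qdot))

# The desired task acceleration would typically come from the PD law above:
# yddot_star = yddot_ref + Kp*(y_ref - y) + Kd*(ydot_ref - ydot).
```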

Multiple tasks
Recall the trick from last time: we defined a big kinematic map Φ(q) such that
q* = argmin_q ||q − q₀||²_W + ||Φ(q)||²
This works analogously in the dynamic case:
u* = argmin_u ||u||²_H + ||Φ̈(q)||² 36/36