Numerical Methods for Inverse Kinematics


Niels Joubert, UC Berkeley, CS184, 2008-11-25

Inverse Kinematics is used to pose models by specifying endpoints of segments rather than individual joint angles. We will go through the steps of deriving a simple inverse kinematics problem. We are given a model in a starting state and a goal point to which we want to move the end of the arm, and we will solve for the new joint angles. In doing this we will use two areas of knowledge: trigonometric relationships to describe the model we're posing, and linearization to find successive approximations of our answer.

1 Problem Description

We want to calculate the change in joint angles needed to bring the endpoint of our chain to the given position. Forward kinematics is concerned with the endpoint's position as you change the joint angles. Inverse kinematics is concerned with the joint angles needed to produce a specific endpoint position. A basic understanding of trigonometry should enable us to write down the forward kinematics equations (see Section 2). There is a whole collection of possible joint angles, all of which bring the endpoint to the wanted goal, and an analytical solution becomes extremely complicated. Thus, we will recast this as a minimization problem, so that we can apply root-finding techniques to it.

1.1 Terminology

DOF (Degrees of Freedom) - a parameter we can control. In 3-space: 3 translations, 3 rotations.
Joint - a connection between bodies with some DOF that modifies and propagates its transform to its outbound body.
Inbound body - a body determining the transform of a given joint.
Outbound body - a body whose orientation is determined by the given joint.
Root body - a body with a fixed position set by the global transform.
Root joint - the joint on the root body (thus, fixed position) with some DOF (generally 1 or 2 rotations).
Linearization - finding a linear approximation to a nonlinear function.
Root-finding - finding the solution to f(x) = 0.

2 Forward Kinematics

Here we'll derive the forward kinematics equation for the arm of three joints in the following picture. We'll describe the endpoint in terms of the joint angles and lengths.

[Figure: (a) 3-jointed arm with joint angles θ1, θ2, θ3, body lengths l1, l2, l3, and joint endpoints p1, p2, p3; (b) right-angle triangle with hypotenuse s, opposite side o, adjacent side a, and angle Ω.]

I'll start by reminding you of the relationships between angles and sides found in the right-angle triangle of Figure 1(b):

s sin(Ω) = o    (1)
s cos(Ω) = a    (2)

Write down the equations for p1x and p1y (the x and y coordinates of the endpoint of the first joint) in terms of θ1 and l1.

p1x =
p1y =

Write down the equations for p2x and p2y in terms of p1, θ2 and l2. Think about the angles very carefully!

p2x =
p2y =

Write down the equations for p3x and p3y in terms of p2, θ3 and l3. Then expand them out to find p3x and p3y in terms of the three joint angles (θ1, θ2, θ3) and body lengths (l1, l2, l3).

p3x =
p3y =

We can now describe the position of the endpoint of our chain in terms of the joint angles. Notice that we defined all our rotations as clockwise from the y-axis. This is arbitrary, but you need to be consistent.
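Once the blanks above are filled in, the forward kinematics can be checked numerically. The following is a minimal sketch (the function name and NumPy formulation are illustrative, not from the worksheet), using the worksheet's convention of angles measured clockwise from the y-axis, so each segment contributes (l sin φ, l cos φ) with the angles accumulating along the chain:

```python
import numpy as np

def forward_kinematics(thetas, lengths):
    """Endpoint positions p1, p2, p3 of a planar 3-joint arm.

    Angles are measured clockwise from the y-axis and accumulate
    down the chain, so each segment adds (l*sin(phi), l*cos(phi)).
    """
    points = []
    x = y = 0.0
    phi = 0.0
    for theta, l in zip(thetas, lengths):
        phi += theta            # angles accumulate along the chain
        x += l * np.sin(phi)    # clockwise from y-axis: x uses sin
        y += l * np.cos(phi)    # y uses cos
        points.append((x, y))
    return points

# With all angles zero the arm points straight up the y-axis.
x, y = forward_kinematics([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])[-1]
print(x, y)  # 0.0 3.0
```

A quick sanity check like this (all angles zero puts the endpoint at (0, l1 + l2 + l3) on the y-axis) catches sign and convention mistakes before any calculus happens.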

3 Inverse Kinematics

We want to find the set of joint angles that produce a specific end position. Assume you are given a configuration for your skeleton, and you want to move it to a new position. Thus, we want to compute the change in joint angles needed to produce the change in endpoint position.

From lecture (and the trigonometry on the previous page) you know that the analytic solution for p3x and p3y in terms of the three joint angles (θ1, θ2, θ3) is hard and messy. We will circumvent this by finding a linear approximation to the change in position brought about by the change in joint angle.

3.1 Linearization and Jacobians

Consider the single joint in the following figure:

[Figure 1: Single 1 DOF joint, showing the actual path of p1 as θ1 varies and the tangent to that path at the current position.]

We plot the path of p1 as it varies over θ1, and we plot the tangent of the path at the current point of p1. Notice how the tangent is a close approximation to the actual path in the region around p1. Linearization is the process of approximating a nonlinear function by its tangent around a point.

In Section 2 we described the actual path. We now need to find the tangent path: a straight line varying with the parameter θ. Since a tangent is just the derivative of a function at some point, calculate the derivatives of p1x and p1y (from Section 2) with respect to θ1 (remember your trigonometric derivatives...):

∂p1x/∂θ1 =
∂p1y/∂θ1 =

Since the tangent is a line, and an equation for a straight line through some point a is f(x) = m(x − ax) + ay, we can now write the function that determines the position of p1. Assume the joint is currently at an angle of θ1 at point (px, py) and we want to find pl1(θ), the linear approximation of p1(θ) around θ1:

pl1x(θ) =
pl1y(θ) =
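To see how good the tangent-line approximation is, we can compare pl1(θ) with the true p1(θ) for a few step sizes. A small sketch (assuming the clockwise-from-y-axis convention, so p1(θ) = (l1 sin θ, l1 cos θ); the symbol names are illustrative). The error should shrink roughly quadratically as the step shrinks, which is why the tangent is trustworthy near the current angle:

```python
import numpy as np

l1 = 1.0

def p1(t):
    """Actual path of the endpoint: angle t clockwise from the y-axis."""
    return np.array([l1 * np.sin(t), l1 * np.cos(t)])

def p1_linear(t, t0):
    """First-order Taylor (tangent-line) approximation of p1 around t0."""
    dp = np.array([l1 * np.cos(t0), -l1 * np.sin(t0)])  # dp1/dtheta at t0
    return p1(t0) + dp * (t - t0)

t0 = 0.5
for dt in (0.01, 0.1, 0.5):
    err = np.linalg.norm(p1(t0 + dt) - p1_linear(t0 + dt, t0))
    print(f"step {dt}: linearization error {err:.2e}")
```

The printed errors grow roughly with the square of the step, the signature of a first-order approximation.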

Consider the two functions you just found. They describe the position of the joint as a linear function of the joint angle. In other words, we've linearized the forward kinematics equation. What we did was write out the Taylor expansion of the functions p1x and p1y around p1's current position.

3.1.1 The Jacobian matrix

So far we've been working with the endpoint's position p1 as two coordinates, p1x and p1y. We can collapse these two coordinates into p1 and define p1(θ) as a vector function - a function returning a vector. We just found a partial derivative for p1x and p1y - the change in each coordinate with respect to each joint angle. We will now introduce the Jacobian:

Definition 1. The Jacobian matrix is the matrix of all the first-order partial derivatives of a vector function.

This means that the Jacobian matrix describes how each coordinate changes with respect to each joint angle in our system. You can think of it as a big function with two arguments - the joint angle and the coordinate - and it describes the first-order linear relationship between these two arguments. Mathematically, we write this as:

J_ij = ∂p_i/∂θ_j, where p_i is the ith coordinate of the endpoint p and θ_j is the jth joint angle.    (3)

The Jacobian matrix is extremely useful, since it describes the first-order linear behavior of a system. In spring-mass simulations it is used for integration purposes. In IK it is used to successively approximate the joint angles needed for a given endpoint. Let's apply this to our original three-joint arm:

[Figure 2: 3-jointed arm.]

Then the Jacobian will be the following 2 by 3 matrix (2 coordinates, 3 joint angles). You should notice that you computed some of these terms in Section 2 and Section 3.1, and you know how to compute them all.
J(θ) = [ ∂p3x/∂θ1   ∂p3x/∂θ2   ∂p3x/∂θ3 ]
       [ ∂p3y/∂θ1   ∂p3y/∂θ2   ∂p3y/∂θ3 ]

     = [  l1 cos(θ1) + l2 cos(θ1+θ2) + l3 cos(θ1+θ2+θ3)    l2 cos(θ1+θ2) + l3 cos(θ1+θ2+θ3)    l3 cos(θ1+θ2+θ3) ]
       [ −l1 sin(θ1) − l2 sin(θ1+θ2) − l3 sin(θ1+θ2+θ3)   −l2 sin(θ1+θ2) − l3 sin(θ1+θ2+θ3)   −l3 sin(θ1+θ2+θ3) ]

We can now rewrite our original Taylor expansion of a coordinate around a point determined by the joint angles θ_current (the last two equations in Section 3.1) in terms of the vector function p(θ) and the Jacobian:

p(θ) ≈ p_linear(θ) = p(θ_current) + J(θ_current)(θ − θ_current)    (4)

This is a linear vector function that approximates our forward kinematics equation. And since it is linear, we can employ all our linear algebra techniques to find its inverse. Congratulations, you linearized the kinematics problem.
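The analytic Jacobian can be sanity-checked against finite differences of the forward kinematics. A sketch assuming unit body lengths (the helper names are illustrative, not from the worksheet); each column j of the Jacobian sums the contributions of segments j and beyond, because rotating joint j moves everything outboard of it:

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])  # body lengths l1, l2, l3

def endpoint(th):
    """p3 of the 3-joint arm; angles accumulate, clockwise from the y-axis."""
    phis = np.cumsum(th)  # absolute angle of each segment
    return np.array([L @ np.sin(phis), L @ np.cos(phis)])

def jacobian(th):
    """Analytic 2x3 Jacobian J[i, j] = d p3_i / d theta_j."""
    phis = np.cumsum(th)
    # theta_j rotates segments j..2, so column j sums their contributions
    Jx = [sum(L[k] * np.cos(phis[k]) for k in range(j, 3)) for j in range(3)]
    Jy = [sum(-L[k] * np.sin(phis[k]) for k in range(j, 3)) for j in range(3)]
    return np.array([Jx, Jy])

th = np.array([0.3, -0.2, 0.5])
# sanity check against a central finite difference
h = 1e-6
J_fd = np.column_stack([(endpoint(th + h * e) - endpoint(th - h * e)) / (2 * h)
                        for e in np.eye(3)])
print(np.max(np.abs(jacobian(th) - J_fd)))  # tiny: the two agree
```

Comparing an analytic Jacobian to finite differences like this is a standard debugging step; a sign or index mistake shows up immediately as a large entry in the difference.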

3.2 Solving for Joint Angles - Pseudoinverses and Root-Finding

The original problem we set out to solve is described by Figure 3:

[Figure 3: 3-jointed arm with a goal position, showing the displacement Δp from the current endpoint p to the goal.]

What must the change in joint angles be to achieve the change in endpoint position Δp? Look again at equation 4. Let's define Δp and Δθ in terms of both the quantities in Figure 3 and the quantities of equation 4:

Δθ = θ_goal − θ_current    (5)
Δp = Goal − p_current    (6)
Δp ≈ p_linear(θ_goal) − p_current    (7)

Notice that I collapsed all the joint angles into one θ variable. We can do this by letting θ = (θ1, θ2, θ3). Thus, θ is a vector. Notice that in lecture Professor O'Brien followed a slightly different approach by defining θ as a linear combination of θ1 and θ2.

Now rewrite equation 4 in terms of equations 5 and 7, and manipulate it so that you find an equation for Δθ:

We just found an expression for the change in joint angles needed to move the endpoint from its current position to the goal position, in terms of quantities we can compute. But this still doesn't completely work! J(θ_current) in our example is not even a square matrix, so there is no way to invert it. This is the same problem we originally had - there is a whole set of solutions that satisfy our goal position, thus the matrix is not invertible. Luckily we have ways of dealing with that for matrices.

3.3 How to solve this set of equations

We're now entering the realm of higher-level linear algebra, so I will unfortunately not go through the derivations of how things work, but rather only propose a way to find the solution for Δθ.
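The worksheet leaves the manipulation as an exercise; for reference, one way it goes, using only equations 4, 5 and 7 (skip ahead if you want to derive it yourself):

```latex
% Evaluate the linear model (4) at theta_goal and use (5):
p_{\text{linear}}(\theta_{\text{goal}})
  = p(\theta_{\text{current}}) + J(\theta_{\text{current}})\,\Delta\theta
% Subtract p_current and apply (7):
\Delta p \approx J(\theta_{\text{current}})\,\Delta\theta
\quad\Longrightarrow\quad
\Delta\theta \approx J(\theta_{\text{current}})^{-1}\,\Delta p
```

The last step is exactly where the trouble discussed next appears: the inverse of a 2 by 3 matrix does not exist, which is what motivates the pseudoinverse.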

You should have found the following equation on the previous page:

Δθ = J(θ_current)⁻¹ Δp

But, as we noted, J(θ_current) is almost never invertible, so we have to compute a pseudoinverse. The pseudoinverse of a matrix has most, but not necessarily all, of the properties of an inverse, and it is not necessarily unique. Since we are already taking a linear approximation of our p(θ) function, we can justify taking another approximation, since we have to account for these assumptions in our final solution in either case. (We will do this by successive iterations of this algorithm.)

3.3.1 Calculating a Pseudoinverse using Singular Value Decomposition

The Singular Value Decomposition is a technique that allows us to decompose any matrix into three matrices: two square orthogonal matrices and a possibly non-square diagonal matrix:

T = U S Vᵀ    (8)

There exist algorithms to compute the SVD of a matrix. Once you've found the SVD, the pseudoinverse of T is defined as:

T⁺ = V S⁺ Uᵀ    (9)

where S⁺ is S with each nonzero diagonal entry replaced by its reciprocal. Why does this work? Replacing the diagonal entries with their reciprocals is the same as inverting the matrix, but in the case that there are zeros on the diagonal, we can't take the reciprocals, so we calculate an approximate inverse by taking the reciprocals of the nonzero entries only. Since U and V are orthogonal, their transposes are their inverses. Thus the pseudoinverse of T consists of the inverses of U and V and an approximate inverse of S.

Thus we can compute a pseudoinverse of the Jacobian, and we can solve for the change in joint angles needed to achieve the change in position we want. Except this change is still only a linear approximation. So, we'll turn it into a root-finding problem, which we'll explain in the last section of this worksheet.

3.3.2 Root-finding

Root-finding is the numerical process by which we solve for x in the equation f(x) = 0.
With some simple algebra, we can rearrange our terms so that our equation is in this form. Rewrite the equation you derived on page 5 in the form f(x) = 0:

You can now apply any of the many root-finding techniques to it, but you will realize that inverse kinematics is a really nice problem in this case, and a simple iterative approach can work. I will justify this statement as follows: if we compute an approximate change in joint angles needed to achieve this change in position, we can simply move our system of joints and bodies to this new location and repeat the procedure we just went through. This is known as Newton's method, a well-studied algorithm for finding roots of something of the form f(x) = 0. We implement it with successive iterations of finding the change in joint angles using the pseudoinverse of the Jacobian. There is one more caveat - Newton's method does not always converge.
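Putting Sections 3.3.1 and 3.3.2 together, the whole solver can be sketched in a few dozen lines. This is an illustrative sketch, not the worksheet's reference implementation: it assumes unit body lengths and the clockwise-from-y-axis convention, computes the pseudoinverse via the SVD exactly as in equation 9, and already includes the step-halving safeguard that the Convergence section below motivates:

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])  # body lengths l1, l2, l3

def endpoint(th):
    """p3 of the 3-joint arm; angles accumulate, clockwise from the y-axis."""
    phis = np.cumsum(th)
    return np.array([L @ np.sin(phis), L @ np.cos(phis)])

def jacobian(th):
    """Analytic 2x3 Jacobian from Section 3.1.1."""
    phis = np.cumsum(th)
    # theta_j moves every segment from j outward: reversed cumulative sums
    Jx = np.cumsum((L * np.cos(phis))[::-1])[::-1]
    Jy = -np.cumsum((L * np.sin(phis))[::-1])[::-1]
    return np.vstack([Jx, Jy])

def pinv_svd(T, tol=1e-12):
    """Pseudoinverse via equation 9: T = U S V^T  =>  T+ = V S+ U^T."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    # reciprocate only the nonzero singular values (equation 9's S+)
    s_plus = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vt.T @ np.diag(s_plus) @ U.T

def solve_ik(goal, th, iters=100, tol=1e-10):
    """Newton-style IK: dtheta = J+ dp, move the arm, re-linearize, repeat.
    Steps are halved while they fail to shrink the distance to the goal."""
    for _ in range(iters):
        dp = goal - endpoint(th)
        dist = np.linalg.norm(dp)
        if dist < tol:
            break
        dth = pinv_svd(jacobian(th)) @ dp
        while (np.linalg.norm(goal - endpoint(th + dth)) >= dist
               and np.linalg.norm(dth) > tol):
            dth = dth / 2  # halve the step until it actually helps
        th = th + dth
    return th

goal = np.array([1.2, 2.5])                  # a reachable sample target
th = solve_ik(goal, np.array([0.1, 0.2, 0.3]))
print(np.linalg.norm(endpoint(th) - goal))   # distance to goal after solving
```

Note that `pinv_svd` agrees with NumPy's built-in `numpy.linalg.pinv`; it is spelled out here only to show equation 9 at work.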

3.3.3 Convergence

Newton's method does not always converge. Thus, calculating the approximate change in angle we need and simply applying it iteratively does not necessarily bring us closer to the correct solution. We can amend this situation by implementing a binary search on the change in angle:

1. Calculate the new position the current change in θ would cause.
2. Calculate the distance from the goal.
3. If the distance decreased, take the step.
4. If the distance did not decrease, set the change in θ to be half the current change, and try again.

3.3.4 Minimization

We can also cast everything we've done as a minimization problem. We're attempting to minimize the error between the current position and the goal position, and we're achieving that by using a linear approximation of the function and applying root-finding techniques to it. This is a common method for solving these types of problems, and it appears in numerical simulations in physics, math and engineering. We implicitly did exactly this, so I mention it for completeness.

4 Summary and Overview

We've gone through a long derivation of a general numerical approach to solving the inverse kinematics problem. To summarize our approach, the following diagram demonstrates a simple IK solver:

[Figure 4: A simple IK solver - calculate the Jacobian, calculate new joint angles, move the system to the new position, check the distance to the goal, and repeat until the goal is reached.]

Finally, the Jacobian is a very powerful tool. To extend this system to allow for multiple goals with multiple arms as one IK system, you simply expand your Jacobian to include all the joints. If you want to

constrain some joints not to move at all, you can simply reduce your Jacobian so that it does not include them. More advanced techniques for implementing constraints make use of Lagrange multipliers. It is also worth noting that our solution is a local solver: we make an approximation of the function that holds in the area around the current joint angles.

4.1 Real-World IK Solvers

Real-world IK solvers are not necessarily built in the manner this tutorial explains, but this is one of the accepted approaches. Others include particle-based IK and Cyclic Coordinate Descent. There is one important facet we did not cover, though: real-world IK solvers include a strong framework for setting constraints. You often want to use an IK solver to pose your model, but you want specific joints to have specific orientations - the modeler wants control over which set of joint angles is picked by the solver.

For more information: http://billbaxter.com/courses/290/html/