SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER'S FORMULA

1. Uniqueness for solutions of differential equations

We consider the system of differential equations given by
$$\frac{d\vec{x}}{dt} = \vec{v}(\vec{x}), \qquad (1)$$
with a given initial condition $\vec{x}(0) = \vec{x}_0$. Here $\vec{x} \in \mathbb{R}^n$ and $\vec{v}$ is a function that maps $\mathbb{R}^n$ into $\mathbb{R}^n$. We shall assume that for any two vectors $\vec{x}_1, \vec{x}_2$ we have
$$|\vec{v}(\vec{x}_1) - \vec{v}(\vec{x}_2)| \le L\,|\vec{x}_1 - \vec{x}_2|,$$
where $L$ is some constant, usually called the Lipschitz constant. An example is $\vec{v}(\vec{x}) = A\vec{x}$ where $A$ is a constant real $n \times n$ matrix. We compute
$$|A\vec{x}_1 - A\vec{x}_2|^2 = |A(\vec{x}_1 - \vec{x}_2)|^2 = (\vec{x}_1 - \vec{x}_2)\cdot A^T A(\vec{x}_1 - \vec{x}_2) \le \lambda\,|\vec{x}_1 - \vec{x}_2|^2,$$
where $\lambda$ is the largest eigenvalue of $A^T A$. The following is relatively easy to prove.

Theorem 1.1. The differential equation (1) has at most one solution that satisfies the given initial condition.

Proof. Suppose there are two solutions $\vec{x}_1(t)$ and $\vec{x}_2(t)$, both satisfying $\vec{x}_1(0) = \vec{x}_2(0) = \vec{x}_0$. Integrating, we see that both solutions satisfy the equation
$$\vec{x}_i(t) = \vec{x}_0 + \int_0^t \vec{v}(\vec{x}_i(\tau))\,d\tau, \qquad i = 1, 2.$$
Hence, noting that the initial condition drops out, we get
$$\vec{x}_1(t) - \vec{x}_2(t) = \int_0^t \vec{v}(\vec{x}_1(\tau))\,d\tau - \int_0^t \vec{v}(\vec{x}_2(\tau))\,d\tau = \int_0^t \big[\vec{v}(\vec{x}_1(\tau)) - \vec{v}(\vec{x}_2(\tau))\big]\,d\tau.$$
Using the Minkowski inequality, which is essentially the triangle inequality, we get
$$|\vec{x}_1(t) - \vec{x}_2(t)| \le \int_0^t |\vec{v}(\vec{x}_1(\tau)) - \vec{v}(\vec{x}_2(\tau))|\,d\tau,$$
and using the Lipschitz condition
$$|\vec{x}_1(t) - \vec{x}_2(t)| \le L \int_0^t |\vec{x}_1(\tau) - \vec{x}_2(\tau)|\,d\tau,$$
and this holds for all $t$ as long as the solutions exist. If $t < T$ we have that
$$|\vec{x}_1(t) - \vec{x}_2(t)| \le L \int_0^T |\vec{x}_1(\tau) - \vec{x}_2(\tau)|\,d\tau.$$
This inequality implies that for all $t \le T$
$$|\vec{x}_1(t) - \vec{x}_2(t)| \le L\,T\,M(T),$$
where we set $M(T) = \max_{[0,T]} |\vec{x}_1(t) - \vec{x}_2(t)|$. Hence we also have that $M(T) \le L\,T\,M(T)$, and if we choose $T$ such that $LT < 1$ it follows that $M(T) = 0$. Hence the two solutions coincide on the time interval $[0, T]$. Choosing $\vec{x}(T)$ as the new initial condition, the solutions must coincide on the interval $[T, 2T]$ also, and so on. We can argue the same way that for negative times the solutions have to coincide.
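The Lipschitz bound for the linear example can be checked numerically. Below is a minimal sketch in Python/NumPy (the helper name lipschitz_constant and the random test data are illustrative choices, not part of the notes): it computes $L = \sqrt{\lambda_{\max}(A^T A)}$ and verifies $|A\vec{x}_1 - A\vec{x}_2| \le L\,|\vec{x}_1 - \vec{x}_2|$ on random vectors.

```python
import numpy as np

def lipschitz_constant(A):
    # Largest eigenvalue of A^T A; its square root bounds |Ax1 - Ax2| / |x1 - x2|.
    lam_max = np.linalg.eigvalsh(A.T @ A).max()
    return np.sqrt(lam_max)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
L = lipschitz_constant(A)

for _ in range(1000):
    x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
    assert np.linalg.norm(A @ x1 - A @ x2) <= L * np.linalg.norm(x1 - x2) + 1e-12
```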

2. Some remarks about the matrix $e^{At}$

Recall that we define the exponential of a matrix $e^{At}$ by
$$e^{At} = \sum_{n=0}^{\infty} \frac{A^n t^n}{n!}.$$
Here are some facts.

Theorem 2.1. We have
$$e^{At} e^{As} = e^{A(t+s)}$$
for all $s, t \in \mathbb{R}$.

Proof. Pick any initial condition $\vec{x}_0$. The function $\vec{x}(t) = e^{A(t+s)}\vec{x}_0$ is a solution of the equation $\dot{\vec{x}} = A\vec{x}$. This follows from $\frac{d}{dt} e^{A(t+s)} = A e^{A(t+s)}$. Further, the function $\vec{y}(t) = e^{At} e^{As}\vec{x}_0$ is also a solution of the equation $\dot{\vec{x}} = A\vec{x}$. Moreover, for $t = 0$ we have that $\vec{x}(0) = e^{As}\vec{x}_0 = \vec{y}(0)$. By uniqueness $\vec{x}(t) = \vec{y}(t)$, and thus $e^{At} e^{As}\vec{x}_0 = e^{A(t+s)}\vec{x}_0$ for all $\vec{x}_0$. Since $\vec{x}_0$ is arbitrary this proves the theorem.

An interesting consequence of this theorem is that $e^{At}$ is invertible for all $t$:
$$e^{At} e^{A(-t)} = e^{A(t-t)} = I.$$
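As a sanity check, the semigroup property of Theorem 2.1 can be verified numerically by truncating the defining series. This is a minimal sketch assuming NumPy; the truncation order N = 30 and the helper name expm_series are illustrative choices, not part of the notes.

```python
import numpy as np
from math import factorial

def expm_series(A, t, N=30):
    # Truncated Taylor series  e^{At} ~ sum_{n=0}^{N} (A t)^n / n!
    result = np.zeros_like(A, dtype=float)
    for n in range(N + 1):
        result += np.linalg.matrix_power(A * t, n) / factorial(n)
    return result

A = np.array([[0.0, 1.0], [-2.0, -0.3]])
t, s = 0.7, 1.3

lhs = expm_series(A, t) @ expm_series(A, s)
rhs = expm_series(A, t + s)
print(np.allclose(lhs, rhs))  # expected: True
```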

3. One parameter families of matrices

We say that a family of $n \times n$ matrices $P(t)$ is a one parameter family if
$$P(0) = I$$
and for all $t, s \in \mathbb{R}$,
$$P(t)P(s) = P(t+s).$$
We shall only consider one parameter families that are differentiable. A particularly useful idea is to consider one parameter families of rotations $R(\varphi)$. These are matrices that satisfy $R(\varphi)^T R(\varphi) = I$. First we compute the derivative
$$\frac{d}{d\varphi} R(\varphi) = \lim_{\varepsilon \to 0} \frac{R(\varphi + \varepsilon) - R(\varphi)}{\varepsilon} = \lim_{\varepsilon \to 0} \frac{R(\varepsilon) - I}{\varepsilon}\, R(\varphi) = \Omega R(\varphi),$$
where we denote
$$\Omega = \lim_{\varepsilon \to 0} \frac{R(\varepsilon) - I}{\varepsilon} = \frac{d}{d\varphi} R(0).$$
The matrix $\Omega$ is not arbitrary. Indeed, differentiating
$$\frac{d}{d\varphi}\, R^T(\varphi) R(\varphi) = \frac{d}{d\varphi}\, I = 0,$$
and by the product rule
$$\frac{d}{d\varphi}\, R^T(\varphi) R(\varphi)\Big|_{\varphi = 0} = \Omega^T + \Omega = 0,$$
we learn that $\Omega$ must be a skew symmetric matrix, $\Omega^T = -\Omega$.

So far this worked in arbitrary dimensions. We specialize to three dimensions and write the general skew symmetric matrix as
$$\Omega = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}.$$
Note the interesting fact that $\Omega \vec{x} = \vec{\omega} \times \vec{x}$. We also note that $\Omega \vec{\omega} = 0$. Recall that we have the equation $R'(\varphi) = \Omega R(\varphi)$, and this allows us to compute $R(\varphi)$ explicitly. We shall assume that the vector $\vec{\omega}$ is normalized. We have to compute
$$e^{\Omega\varphi} = \sum_{n=0}^{\infty} \frac{\Omega^n \varphi^n}{n!}.$$
Here are some computations:
$$\Omega^2 = \begin{pmatrix} -\omega_2^2 - \omega_3^2 & \omega_1\omega_2 & \omega_1\omega_3 \\ \omega_1\omega_2 & -\omega_1^2 - \omega_3^2 & \omega_2\omega_3 \\ \omega_1\omega_3 & \omega_2\omega_3 & -\omega_1^2 - \omega_2^2 \end{pmatrix},$$
which can be written as
$$\Omega^2 = -I + \vec{\omega}\,\vec{\omega}^T.$$
Here we used that $\vec{\omega}$ is a unit vector. Thus we can start a little table:
$$\Omega^2 = -I + \vec{\omega}\,\vec{\omega}^T, \qquad \Omega^3 = -\Omega, \qquad \Omega^4 = -\Omega^2, \ \ldots$$
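These algebraic facts about $\Omega$ are easy to confirm numerically. The sketch below (Python/NumPy; the helper name skew and the sample axis are illustrative choices, not part of the notes) builds $\Omega$ from a unit vector $\vec{\omega}$ and checks $\Omega\vec{x} = \vec{\omega}\times\vec{x}$, $\Omega\vec{\omega} = 0$ and $\Omega^2 = -I + \vec{\omega}\,\vec{\omega}^T$.

```python
import numpy as np

def skew(w):
    # The skew-symmetric matrix with  skew(w) @ x == np.cross(w, x).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([1.0, 2.0, 2.0])
w /= np.linalg.norm(w)          # normalize: the notes assume |w| = 1
Omega = skew(w)

x = np.array([0.3, -1.0, 0.5])
print(np.allclose(Omega @ x, np.cross(w, x)))                    # True
print(np.allclose(Omega @ w, 0))                                 # True
print(np.allclose(Omega @ Omega, -np.eye(3) + np.outer(w, w)))   # True
```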

Thus it makes sense to split
$$e^{\Omega\varphi} = \sum_{m=0}^{\infty} \frac{\Omega^{2m}\varphi^{2m}}{(2m)!} + \sum_{m=0}^{\infty} \frac{\Omega^{2m+1}\varphi^{2m+1}}{(2m+1)!}$$
into even and odd powers. We have that
$$\Omega^{2m+1} = (-1)^m \Omega,$$
and hence the second sum reduces to
$$\sum_{m=0}^{\infty} \frac{\Omega^{2m+1}\varphi^{2m+1}}{(2m+1)!} = \Omega \sum_{m=0}^{\infty} \frac{(-1)^m \varphi^{2m+1}}{(2m+1)!} = \Omega \sin\varphi.$$
For the even sum we have to be careful, noting that for $m = 1, 2, \ldots$
$$\Omega^{2m} = (-1)^m (I - \vec{\omega}\,\vec{\omega}^T).$$
For $m = 0$ we have the identity
$$I = I - \vec{\omega}\,\vec{\omega}^T + \vec{\omega}\,\vec{\omega}^T = \vec{\omega}\,\vec{\omega}^T + (I - \vec{\omega}\,\vec{\omega}^T),$$
which we write in and get that
$$\sum_{m=0}^{\infty} \frac{\Omega^{2m}\varphi^{2m}}{(2m)!} = \vec{\omega}\,\vec{\omega}^T + (I - \vec{\omega}\,\vec{\omega}^T) \sum_{m=0}^{\infty} \frac{(-1)^m \varphi^{2m}}{(2m)!},$$
which equals $\vec{\omega}\,\vec{\omega}^T + (I - \vec{\omega}\,\vec{\omega}^T)\cos\varphi$. To summarize, we have shown that
$$e^{\Omega\varphi} = \cos\varphi\, I + \vec{\omega}\,\vec{\omega}^T (1 - \cos\varphi) + \Omega \sin\varphi.$$
Let us note a few things. The vector $\vec{\omega}$ is an eigenvector of this matrix with eigenvalue $1$; this is the axis of rotation. Take $\vec{\omega} = (0,0,1)^T$, i.e., the $z$ axis. Then we get the matrix
$$\begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
which is precisely a rotation in the positive direction by an angle $\varphi$. To summarize:

Theorem 3.1. The rotation about the $\vec{\omega}$ axis by an angle $\varphi$ is given by
$$R(\varphi) = \cos\varphi\, I + (1 - \cos\varphi)\, \vec{\omega}\,\vec{\omega}^T + \Omega \sin\varphi,$$
in particular
$$R(\varphi)\vec{x} = \cos\varphi\, \vec{x} + (1 - \cos\varphi)(\vec{\omega}\cdot\vec{x})\,\vec{\omega} + \sin\varphi\, (\vec{\omega} \times \vec{x}).$$
This is Euler's formula. Because $\Omega^2 + I = \vec{\omega}\,\vec{\omega}^T$, Euler's formula can also be written in the form
$$R(\varphi) = \cos\varphi\, I + (1 - \cos\varphi)(\Omega^2 + I) + \Omega\sin\varphi = I + (1 - \cos\varphi)\Omega^2 + \sin\varphi\,\Omega.$$
Note that the angle can be any value between $-\pi$ and $\pi$. If $\varphi < 0$ we may replace $\varphi$ by $-\varphi$, which keeps the sign of the cosine function fixed but changes the sign of the sine function. Thus if, additionally, we reverse the direction of $\vec{\omega}$, we get back the same rotation. Needless to say, the rotation by an angle $\varphi = 0$ or $\varphi = 2\pi$ is the identity. Also note that in terms of $R(\varphi)$ we have that
$$\frac{1}{2}\big[R(\varphi) + R(\varphi)^T\big] = \cos\varphi\, I + (1 - \cos\varphi)\,\vec{\omega}\,\vec{\omega}^T$$
and
$$\frac{1}{2}\big[R(\varphi) - R(\varphi)^T\big] = \Omega \sin\varphi.$$
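Euler's formula in Theorem 3.1 can be checked against the matrix exponential it was derived from. A minimal sketch assuming NumPy (the helper names skew and rodrigues, the truncation order of the series, and the test data are illustrative choices, not part of the notes):

```python
import numpy as np
from math import factorial

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w, phi):
    # Theorem 3.1:  R = cos(phi) I + (1 - cos(phi)) w w^T + sin(phi) Omega
    Omega = skew(w)
    return (np.cos(phi) * np.eye(3)
            + (1.0 - np.cos(phi)) * np.outer(w, w)
            + np.sin(phi) * Omega)

w = np.array([1.0, -1.0, 0.5])
w /= np.linalg.norm(w)
phi = 0.9

R = rodrigues(w, phi)
# Compare with e^{Omega phi} computed from a truncated Taylor series.
E = sum(np.linalg.matrix_power(skew(w) * phi, n) / factorial(n) for n in range(25))
print(np.allclose(R, E))            # True
print(np.allclose(R @ w, w))        # the axis is fixed
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```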

4. A purely algebraic derivation of Euler's formula

Our previous result concerns solutions of the differential equation $R'(\varphi) = \Omega R(\varphi)$. Suppose now that you are given an arbitrary rotation $M$. Can we find $\varphi$ and $\Omega$ so that
$$M = I + (1 - \cos\varphi)\Omega^2 + \sin\varphi\,\Omega\,?$$
To be more specific we have the following theorem.

Theorem 4.1. Let $M$ be a $3 \times 3$ rotation. Define
$$\cos\varphi = \frac{\mathrm{Tr}\,M - 1}{2}$$
and
$$\Omega = \frac{1}{2\sin\varphi}\big[M - M^T\big],$$
provided that $\varphi \ne 0, \pi, 2\pi$. Then
$$M = I + (1 - \cos\varphi)\Omega^2 + \sin\varphi\,\Omega.$$
For $\varphi = 0, 2\pi$ we have that $M = I$, and for $\varphi = \pi$, $M = I + 2\Omega^2$, and hence Euler's formula holds in these cases as well.

Recall that a $3 \times 3$ matrix $M$ is a rotation if it satisfies $M^T M = I$ and $\det M = +1$. We would like to show that there exist a unit vector $\vec{\omega}$ and an angle $\varphi$, $0 \le \varphi \le \pi$, such that
$$M = \cos\varphi\, I + (1 - \cos\varphi)\,\vec{\omega}\,\vec{\omega}^T + \Omega\sin\varphi.$$
As usual
$$\Omega = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}.$$
We first start with a simple lemma.

Lemma 4.2. Let $M$ be a rotation in three space, i.e., $M^T M = I$ and $\det M = +1$. Then the matrix $M$ must have the eigenvalue $1$. Moreover, the other two eigenvalues must be of the form $e^{\pm i\varphi}$ for some $0 \le \varphi \le \pi$.

Proof. To see this consider
$$\det(M - I) = \det M^T \det(M - I) = \det\big(M^T(M - I)\big) = \det(I - M^T) = \det(I - M)^T = \det(I - M) = -\det(M - I).$$
Hence $\det(M - I) = 0$ and $1$ is an eigenvalue. If we denote the other two eigenvalues by $\lambda_1$ and $\lambda_2$, we must have that $\lambda_1 + \lambda_2 + 1 = \mathrm{Tr}\,M$ and $\lambda_1 \lambda_2 \cdot 1 = 1$ (Why?). Hence
$$\lambda_1 + \lambda_2 = \mathrm{Tr}\,M - 1, \qquad \lambda_1\lambda_2 = 1.$$
The best way to solve these equations is to note that $-3 \le \mathrm{Tr}\,M \le 3$ (Why?). Hence we may define
$$\cos\varphi = \frac{\mathrm{Tr}\,M - 1}{2},$$
and we have to solve the equations
$$\lambda_1 + \lambda_2 = 2\cos\varphi, \qquad \lambda_1\lambda_2 = 1.$$
We easily find that $\lambda_1 = e^{i\varphi}$ and $\lambda_2 = e^{-i\varphi}$. Thus we have the eigenvalues $e^{i\varphi}, e^{-i\varphi}, 1$.
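Lemma 4.2 is easy to probe numerically: generate a random rotation, read off $\cos\varphi$ from the trace, and compare with the eigenvalues. A minimal sketch assuming NumPy (the QR-based construction of the random rotation is an illustrative choice, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random rotation: orthogonalize a random matrix and fix the determinant to +1.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]
M = Q

phi = np.arccos((np.trace(M) - 1.0) / 2.0)
eigs = np.linalg.eigvals(M)

print(np.isclose(np.abs(eigs), 1.0).all())     # all eigenvalues lie on the unit circle
print(np.isclose(eigs.prod().real, 1.0))       # their product is det M = +1
# The spectrum is {1, e^{i phi}, e^{-i phi}}:
print(sorted(np.round(np.angle(eigs), 6)), round(float(phi), 6))
```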

Let us assume that $\varphi \ne 0, \pi, 2\pi$; these cases we deal with later. Recall that
$$\cos\varphi = \frac{\mathrm{Tr}\,M - 1}{2},$$
and define
$$\Omega = \frac{1}{2\sin\varphi}\big[M - M^T\big].$$
Note that this suggests itself from Euler's formula (Why?). We have to check that
$$M = I + (1 - \cos\varphi)\Omega^2 + \sin\varphi\,\Omega =: R.$$
The Cayley-Hamilton theorem tells us that
$$(M - I)(M - e^{i\varphi}I)(M - e^{-i\varphi}I) = 0,$$
and expanding the product yields
$$M^3 - (1 + 2\cos\varphi)M^2 + (1 + 2\cos\varphi)M - I = 0.$$
Now
$$I + (1 - \cos\varphi)\Omega^2 + \sin\varphi\,\Omega = I + \frac{1 - \cos\varphi}{4\sin^2\varphi}\big[M - M^T\big]^2 + \frac{\sin\varphi}{2\sin\varphi}\big[M - M^T\big] = I + \frac{1}{4(1 + \cos\varphi)}\big[M - M^T\big]^2 + \frac{1}{2}\big[M - M^T\big].$$
We further have that
$$\big[M - M^T\big]^2 = M^2 + (M^T)^2 - 2I,$$
and by the Cayley-Hamilton theorem
$$M^2 = (1 + 2\cos\varphi)M - (1 + 2\cos\varphi)I + M^T, \qquad (M^T)^2 = (1 + 2\cos\varphi)M^T - (1 + 2\cos\varphi)I + M,$$
so that
$$M^2 + (M^T)^2 - 2I = (2 + 2\cos\varphi)\big[M + M^T\big] - 4(1 + \cos\varphi)I.$$
Thus,
$$R = \frac{1}{2}\big[M + M^T\big] + \frac{1}{2}\big[M - M^T\big] = M.$$
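The construction in Theorem 4.1 is exactly how an axis-angle representation can be recovered from a rotation matrix in practice. A minimal sketch assuming NumPy (the helper name axis_angle_from_rotation and the test rotation are illustrative choices, not part of the notes):

```python
import numpy as np

def axis_angle_from_rotation(M):
    # Theorem 4.1: cos(phi) = (Tr M - 1)/2 and Omega = (M - M^T) / (2 sin(phi)),
    # valid away from phi = 0 and phi = pi.
    phi = np.arccos((np.trace(M) - 1.0) / 2.0)
    Omega = (M - M.T) / (2.0 * np.sin(phi))
    w = np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])  # read off the axis
    return w, phi, Omega

# Build a test rotation about a known axis via Euler's formula, then recover axis and angle.
w_true = np.array([2.0, 1.0, -1.0]); w_true /= np.linalg.norm(w_true)
phi_true = 1.2
Omega_true = np.array([[0.0, -w_true[2], w_true[1]],
                       [w_true[2], 0.0, -w_true[0]],
                       [-w_true[1], w_true[0], 0.0]])
M = np.eye(3) + (1 - np.cos(phi_true)) * Omega_true @ Omega_true + np.sin(phi_true) * Omega_true

w, phi, Omega = axis_angle_from_rotation(M)
print(np.isclose(phi, phi_true), np.allclose(w, w_true))
print(np.allclose(np.eye(3) + (1 - np.cos(phi)) * Omega @ Omega + np.sin(phi) * Omega, M))
```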

The remaining cases are easily dealt with. Assume that $\varphi = 0$ or $2\pi$. Then $\mathrm{Tr}\,M = 3$. Now the matrix $M$ is of the form $[\vec{u}_1, \vec{u}_2, \vec{u}_3]$, all of them being unit vectors. The trace, therefore, is $u_{11} + u_{22} + u_{33} = 3$, and since each of these numbers is between $-1$ and $1$, they all must be equal to $1$. This means that the rotation matrix must be the identity matrix.

The case $\varphi = \pi$ implies that $-1$ must be a two-fold eigenvalue. From this we get three facts: $M^2 = I$, hence $M = M^T$, and $M + I$ has a two dimensional null space. Set
$$P = \frac{1}{2}(M + I)$$
and note that
$$P^2 = P, \qquad P^T = P.$$
Hence $P$ projects the three dimensional space onto a one dimensional space, and therefore it must be of the form $P = \vec{\omega}\,\vec{\omega}^T$ for some unit vector $\vec{\omega}$. Thus,
$$M = -I + 2\,\vec{\omega}\,\vec{\omega}^T = I + 2\Omega^2,$$
which is what we wanted to show.
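The $\varphi = \pi$ branch can also be checked numerically: for a rotation by $\pi$ the axis can be read off from the projection $P = \frac{1}{2}(M + I)$. A minimal sketch assuming NumPy (the test axis is an illustrative choice, not part of the notes):

```python
import numpy as np

# A rotation by phi = pi about a chosen unit axis:  M = 2 w w^T - I.
w_true = np.array([1.0, 2.0, -2.0]); w_true /= np.linalg.norm(w_true)
M = 2.0 * np.outer(w_true, w_true) - np.eye(3)

print(np.allclose(M.T @ M, np.eye(3)), np.isclose(np.linalg.det(M), 1.0))  # a rotation
print(np.isclose(np.trace(M), -1.0))   # trace = 1 + 2 cos(pi) = -1

# P = (M + I)/2 is a rank-one projection w w^T; recover the axis from its largest column.
P = 0.5 * (M + np.eye(3))
print(np.allclose(P @ P, P), np.allclose(P, P.T))
col = P[:, np.argmax(np.diag(P))]      # a nonzero column of w w^T is proportional to w
w = col / np.linalg.norm(col)
print(np.allclose(np.outer(w, w), P))  # w (up to sign) reproduces P
```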