DIAGONALIZATION OF THE STRESS TENSOR

INTRODUCTION

By the use of Cauchy's theorem we are able to reduce the number of stress components in the stress tensor to only nine values. An additional simplification of the stress can be achieved through diagonalization of the stress tensor, which reduces the number of components to only three. Many square matrices can be diagonalized. In this simplified (diagonalized) version of the stress tensor, no shear stress acts along the principal planes: the stress on each principal plane is purely normal, directed along a principal axis. The principal axes are orthogonal to the principal planes. That is, we start with a general matrix:

$$\sigma_{ij} = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{pmatrix}$$

and end with a simpler matrix:

$$\sigma_{ij} = \begin{pmatrix} \sigma_{11} & 0 & 0 \\ 0 & \sigma_{22} & 0 \\ 0 & 0 & \sigma_{33} \end{pmatrix}$$

The last matrix has been diagonalized. The resultant matrix is easier to handle; for example, the square of the diagonalized matrix is simply the square of each of its components.

Recall Cauchy's theorem, which states:

$$T_i = \sigma_{ji} n_j$$

where $T_i$ is the traction/stress vector at a point on a plane with normal vector $n_j$, and $\sigma_{ji}$, the stress tensor, is symmetric.
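To make Cauchy's theorem concrete, here is a minimal numerical sketch; the stress values and the plane normal are invented purely for illustration:

```python
import numpy as np

# Invented symmetric stress tensor, purely for illustration.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 30.0,  5.0],
                  [ 0.0,  5.0, 20.0]])

# Unit normal of the plane on which we want the traction.
n = np.array([1.0, 0.0, 0.0])

# Cauchy's theorem: T_i = sigma_ji n_j. Because the stress tensor is
# symmetric, sigma.T equals sigma and this is a plain matrix-vector product.
T = sigma.T @ n
print(T)  # traction vector acting on the plane with normal n
```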

Definition

A square matrix $M$ can be diagonalized into another matrix $D$ by deriving a matrix $C$ so that

$$D = C^{-1} M C$$

In the case that $M$ is the stress tensor, $D$ becomes a description of the same stress field from the perspective of a new, rotated coordinate system. From the point of view of this new coordinate system, $M$ is the stress described in the old coordinate system. So, in different words, diagonalization gives the components of stress in a new, rotated coordinate system. That is, we start with a general matrix:

$$\sigma_{ij} = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{pmatrix}$$

and end with a simpler matrix:

$$\sigma_{ij} = \begin{pmatrix} \sigma_{11} & 0 & 0 \\ 0 & \sigma_{22} & 0 \\ 0 & 0 & \sigma_{33} \end{pmatrix}$$

The components are not the same before and after diagonalization: the variable names are the same, but the values are different.

When a matrix is diagonalizable, the three basis vectors of the new Cartesian coordinate system are parallel (in the general case) to three non-basis traction vectors in the old coordinate system. These three special vectors from the old coordinate system have new components in the new coordinate system, and these new components are the rows of the new (diagonalized) stress tensor. Diagonalization requires us to find these vectors.
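In practice an eigensolver finds these vectors for us. As a preview of what $D = C^{-1}MC$ looks like numerically, the sketch below diagonalizes the same invented stress tensor with numpy; for a symmetric matrix, the matrix $C$ returned by numpy.linalg.eigh is orthogonal, so $C^{-1} = C^{T}$:

```python
import numpy as np

# The same invented stress tensor as above.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 30.0,  5.0],
                  [ 0.0,  5.0, 20.0]])

# eigh is numpy's eigensolver for symmetric matrices; the columns of C are
# unit eigenvectors and C is orthogonal, so C^-1 = C^T.
mu, C = np.linalg.eigh(sigma)

# D = C^-1 sigma C: the stress tensor in the rotated (principal) coordinates.
D = C.T @ sigma @ C
print(np.round(D, 10))  # diagonal; the principal stresses sit on the diagonal
print(mu)               # the same principal stresses, as a vector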

Eigenvalues and Eigenvectors

In other words, this means that

$$\hat{x}_1' \parallel U, \quad \hat{x}_2' \parallel V, \quad \hat{x}_3' \parallel W$$

where the basis vectors $\hat{x}_i'$ are in the new coordinate system (primes) and are parallel to vectors $U$, $V$, and $W$ that were defined in the old coordinate system. The new basis vectors and their corresponding former selves are called eigenvectors. We can express this another way:

$$MU = \mu_1 U, \quad MV = \mu_2 V, \quad MW = \mu_3 W$$

where $\mu_1$, $\mu_2$, and $\mu_3$ are constants, known as eigenvalues.

Let's take a simple example and consider only vectors in a 2-coordinate system that are acted upon by some matrix that we wish to diagonalize (our target):

$$\begin{pmatrix} 5 & -2 \\ -2 & 2 \end{pmatrix} \begin{pmatrix} U_1 \\ U_2 \end{pmatrix} = \mu_1 \begin{pmatrix} U_1 \\ U_2 \end{pmatrix}$$

Writing out the two equations:

$$5U_1 - 2U_2 = \mu_1 U_1$$

$$-2U_1 + 2U_2 = \mu_1 U_2$$

Collecting terms:

$$(5 - \mu_1) U_1 - 2U_2 = 0$$

$$-2U_1 + (2 - \mu_1) U_2 = 0$$

By Cramer's Rule, this homogeneous system has non-trivial solutions for $U_1$ and $U_2$ only when the determinant of the coefficients vanishes:

$$\begin{vmatrix} 5 - \mu_1 & -2 \\ -2 & 2 - \mu_1 \end{vmatrix} = 0$$

This is also known as the characteristic equation of the matrix. You should work out that $\mu_1$ can have two values: 6 or 1. When $\mu_1 = 1$ we have

$$(5 - 1)U_1 - 2U_2 = 0 \quad\text{and}\quad -2U_1 + (2 - 1)U_2 = 0.$$

Both equations give $U_2 = 2U_1$, so the corresponding eigenvector is any vector proportional to $(1, 2)$. These are the components of an eigenvector of the matrix.
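You can check this example with a few lines of numpy (eigenvector signs may come out flipped, which does not matter, since any scalar multiple of an eigenvector is still an eigenvector):

```python
import numpy as np

M = np.array([[ 5.0, -2.0],
              [-2.0,  2.0]])

mu, U = np.linalg.eigh(M)  # eigenvalues returned in ascending order
print(mu)                  # [1. 6.]
print(U[:, 0])             # eigenvector for mu = 1, proportional to (1, 2)
print(U[:, 1])             # eigenvector for mu = 6, proportional to (2, -1)
```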

DIAGONALIZATION OF THE STRAIN TENSOR, AN EXAMPLE

We will look at the diagonalization of strain instead of the case of stress, as this will lead us into the next section.

Version of 25 June 2012. Corrections and suggestions to Vince_Cronin@baylor.edu

How to find the eigenvalues and eigenvectors of a symmetric 2x2 matrix

Introduction

We will leave the theoretical development of eigensystems for you to read in textbooks on linear algebra or tensor mathematics, or from reliable sources on the web such as those listed in the references section at the end of this document. Here, we accept that for any given stress or strain tensor, a coordinate system can be identified in which all of the shear stresses or shear strains have zero value, and the only non-zero values are along the diagonal of the tensor matrix. The eigenvectors are parallel to those special coordinate axes, and the eigenvalues are the values along the diagonal. Another way of characterizing them is that the eigenvectors are along the principal directions of the stress or strain ellipsoids, and the eigenvalues are the magnitudes of the principal stresses or strains.

Let's call the square matrix we are analyzing matrix M:

$$M = \begin{pmatrix} d & e \\ f & g \end{pmatrix}$$

We want to find all possible values of a variable we will call λ that satisfy the following characteristic equation:

$$\det(M - \lambda I) = 0,$$

or, using another common symbol for the determinant,

$$|M - \lambda I| = 0.$$

Matrix I in the preceding equations is the identity matrix

$$\left(I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \text{ if } M \text{ is a 2x2 matrix}\right)$$

and variable λ is an eigenvalue.
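As a cross-check of the algebra that follows, a short sympy sketch (assuming sympy is available) expands this determinant symbolically:

```python
import sympy as sp

d, e, f, g, lam = sp.symbols('d e f g lambda')

M = sp.Matrix([[d, e],
               [f, g]])

# det(M - lambda*I) = 0 is the characteristic equation.
char_poly = sp.expand((M - lam * sp.eye(2)).det())
print(char_poly)  # lambda**2 - (d + g)*lambda + (d*g - e*f), in sympy's own term ordering
```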

Then

$$M - \lambda I = \begin{pmatrix} d - \lambda & e \\ f & g - \lambda \end{pmatrix}$$

The characteristic equation is

$$\begin{vmatrix} d - \lambda & e \\ f & g - \lambda \end{vmatrix} = 0$$

This determinant can be unpacked as the equation

$$((d - \lambda)(g - \lambda)) - (f e) = 0$$

or, simplifying,

$$\lambda^2 - ((d + g)\lambda) + ((d g) - (f e)) = 0.$$

The result is a quadratic equation. You might remember how to solve a quadratic equation from deep in your childhood. In case you don't, the general solution of the quadratic equation

$$(a x^2) + (b x) + c = 0$$

is the quadratic formula

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$

which yields the two answers for x that satisfy the quadratic equation. In our case, the equivalent terms in the quadratic formula are

$$a = 1, \quad b = -(d + g), \quad c = (d g) - (f e),$$

noting that $[-(d + g)]^2 = (d + g)^2$. Consequently, the eigenvalues are given by the quadratic formula

$$\lambda = \frac{(d + g) \pm \sqrt{[-(d + g)]^2 - 4((d g) - (f e))}}{2}$$

or, somewhat simplified,

$$\lambda = \frac{(d + g) \pm \sqrt{(d + g)^2 - 4(d g - f e)}}{2}.$$
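Translated directly into code, the eigenvalue recipe above looks like this (a minimal sketch; the function name is ours):

```python
import math

def eigenvalues_2x2(d, e, f, g):
    """Eigenvalues of [[d, e], [f, g]] via the quadratic formula,
    with a = 1, b = -(d + g), c = d*g - f*e as derived above."""
    b = -(d + g)
    c = d * g - f * e
    root = math.sqrt(b * b - 4 * c)  # real whenever e == f (the symmetric case)
    lam1 = (-b + root) / 2           # the larger eigenvalue
    lam2 = (-b - root) / 2           # the smaller eigenvalue
    return lam1, lam2

print(eigenvalues_2x2(5, -2, -2, 2))  # (6.0, 1.0), matching the earlier example
```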

We now have two values of λ that satisfy our quadratic equation, and these are the two eigenvalues of our 2x2 matrix. We will refer to the larger eigenvalue as λ₁, and the smaller eigenvalue as λ₂. Now we need to find the eigenvectors that correspond to λ₁ and λ₂, respectively. Returning to our example using matrix M, we have the following equation to solve to find the eigenvector associated with λ₁:

$$\begin{pmatrix} d - \lambda_1 & e \\ f & g - \lambda_1 \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

which can be restated as

$$((d - \lambda_1)\, x_1) + (e\, y_1) = 0$$

and

$$(f\, x_1) + ((g - \lambda_1)\, y_1) = 0.$$

The best we can do with these two equations and their two unknown values (x₁ and y₁) is to determine how one of the unknowns relates to the other. In other words, we can determine the slope of the vector. If we arbitrarily choose x₁ = 1, we can rearrange either or both of the previous equations to determine the value of y₁ when x₁ = 1:

$$y_1 = \frac{\lambda_1 - d}{e}$$

or

$$y_1 = \frac{f}{\lambda_1 - g}.$$

An eigenvector that corresponds with eigenvalue λ₁ has the following coordinates:

$$\left\{1, \ \frac{\lambda_1 - d}{e}\right\}.$$

The length or norm of that vector is

$$\sqrt{1^2 + \left(\frac{\lambda_1 - d}{e}\right)^2}.$$

Recalling that a unit vector is a vector whose length is 1, we can find the unit vector associated with any vector of arbitrary length by dividing each component of the initial vector by that vector's length or norm. The unit eigenvector that corresponds to eigenvalue λ₁ is

$$\left\{\frac{1}{N_1}, \ \frac{(\lambda_1 - d)/e}{N_1}\right\}, \quad\text{where } N_1 = \sqrt{1^2 + \left(\frac{\lambda_1 - d}{e}\right)^2}.$$

We repeat the process to find the coordinates of the unit eigenvector that corresponds to eigenvalue λ₂.
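The choose-a-slope-then-normalize procedure can be written as a small helper (again a sketch; the function name is ours, and it assumes e ≠ 0, since with e = 0 the eigenvector is simply a coordinate axis):

```python
import math

def unit_eigenvector(d, e, lam):
    """Unit eigenvector of [[d, e], [f, g]] for eigenvalue lam.
    Chooses x = 1, solves (d - lam)*x + e*y = 0 for y, then normalizes."""
    x = 1.0
    y = (lam - d) / e          # the slope of the eigenvector
    norm = math.hypot(x, y)    # the vector's length, sqrt(x**2 + y**2)
    return (x / norm, y / norm)
```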

Worked example. Given the matrix M below, calculate the eigenvalues and the corresponding unit eigenvectors.

$$M = \begin{pmatrix} 6 & 3 \\ 3 & 4 \end{pmatrix}$$

Solution. The eigenvalues are

$$\lambda = \frac{(6 + 4) \pm \sqrt{(6 + 4)^2 - 4((6)(4) - (3)(3))}}{2} = \frac{10 \pm \sqrt{40}}{2} \approx 8.16228 \ \text{and} \ 1.83772.$$

We refer to the larger eigenvalue (8.16228) as λ₁, and the smaller (1.83772) as λ₂. An eigenvector associated with λ₁ is

$$\left\{1, \ \frac{8.16228 - 6}{3}\right\} = \{1, \ 0.720759\}.$$

The unit eigenvector associated with λ₁ is found by dividing the individual components of the previous vector by the vector's length:

$$\frac{\{1, \ 0.720759\}}{\sqrt{1^2 + 0.720759^2}} = \{0.811242, \ 0.58471\}.$$

An eigenvector associated with λ₂ is

$$\left\{1, \ \frac{1.83772 - 6}{3}\right\} = \{1, \ -1.38743\},$$

and the corresponding unit eigenvector associated with λ₂ is

$$\{0.58471, \ -0.811242\}.$$
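The worked example can be verified in a couple of lines with numpy (eigenvector signs may again differ, since any scalar multiple of an eigenvector is an eigenvector):

```python
import numpy as np

M = np.array([[6.0, 3.0],
              [3.0, 4.0]])

lam, vecs = np.linalg.eigh(M)  # ascending order: [1.83772..., 8.16227...]
print(lam[1], lam[0])          # 8.16227766..., 1.83772233...
print(vecs[:, 1])              # unit eigenvector for lambda_1: ~(0.811, 0.585)
print(vecs[:, 0])              # unit eigenvector for lambda_2: ~(0.585, -0.811), up to sign
```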

In the specific case in which we have a 2-D Lagrangian strain tensor

$$\begin{pmatrix} \varepsilon_{11} & \varepsilon_{12} \\ \varepsilon_{12} & \varepsilon_{22} \end{pmatrix},$$

the eigenvalues are given by

$$\lambda = \frac{(\varepsilon_{11} + \varepsilon_{22}) \pm \sqrt{(\varepsilon_{11} + \varepsilon_{22})^2 - 4(\varepsilon_{11}\varepsilon_{22} - \varepsilon_{12}^2)}}{2}.$$

The larger eigenvalue is λ₁, and the smaller eigenvalue is λ₂. An eigenvector corresponding to λ₁ is

$$\left\{1, \ \frac{\lambda_1 - \varepsilon_{11}}{\varepsilon_{12}}\right\},$$

and an eigenvector associated with λ₂ is

$$\left\{1, \ \frac{\lambda_2 - \varepsilon_{11}}{\varepsilon_{12}}\right\}.$$

The unit eigenvectors can then be determined by dividing each of the components of these vectors by their length or norm. The unit eigenvector associated with eigenvalue λ₁ is

$$\frac{1}{\sqrt{1 + \left(\frac{\lambda_1 - \varepsilon_{11}}{\varepsilon_{12}}\right)^2}} \left\{1, \ \frac{\lambda_1 - \varepsilon_{11}}{\varepsilon_{12}}\right\}.$$

The unit eigenvector associated with eigenvalue λ₂ is

$$\frac{1}{\sqrt{1 + \left(\frac{\lambda_2 - \varepsilon_{11}}{\varepsilon_{12}}\right)^2}} \left\{1, \ \frac{\lambda_2 - \varepsilon_{11}}{\varepsilon_{12}}\right\}.$$

References

Ferguson, J., 1994, Introduction to linear algebra in geology: London, Chapman & Hall, 203 p., ISBN 0-412-49350-0.

Online resources

Eigenvalue calculator, accessed 25 June 2012 via http://www.wolframalpha.com/entities/calculators/eigenvalue_calculator/gv/9h/bi/

Weisstein, E.W., Eigenvalue: MathWorld, A Wolfram web resource, accessed 25 June 2012 via http://mathworld.wolfram.com/eigenvalue.html

Weisstein, E.W., Eigenvector: MathWorld, A Wolfram web resource, accessed 25 June 2012 via http://mathworld.wolfram.com/eigenvector.html

Weisstein, E.W., Quadratic equation: MathWorld, A Wolfram web resource, accessed 25 June 2012 via http://mathworld.wolfram.com/quadraticequation.html
