Block-Diagonal Decomposition of Matrices based on *-Algebra for Structural Optimization with Symmetry Property
1 Block-Diagonal Decomposition of Matrices based on *-Algebra for Structural Optimization with Symmetry Property
Yoshihiro Kanno, Kazuo Murota (University of Tokyo, Japan); Masakazu Kojima, Sadayoshi Kojima (Tokyo Institute of Technology, Japan)
June 16, 2008
2 simultaneous block-diagonal decomposition
given: A_1, A_2, ..., A_m: symmetric n×n matrices
find: P ∈ R^{n×n} orthogonal such that P^T A_1 P, P^T A_2 P, ..., P^T A_m P share a common block-diagonal structure (simultaneous block diagonalization)
CJK-SM 08 2/14
3-4 simultaneous block-diagonal decomposition: application
K(x): stiffness matrix, x: design variables
given: K_1, K_2, ..., K_m with K(x) = x_1 K_1 + x_2 K_2 + x_3 K_3
find: P ∈ R^{n×n} orthogonal
simultaneous block diagonalization: P^T K(x) P = x_1 P^T K_1 P + x_2 P^T K_2 P + x_3 P^T K_3 P, so one decomposition of the K_p block-diagonalizes K(x) for every design x
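The point of the application is linearity: one decomposition of the member matrices K_p serves K(x) for every design x. A minimal numpy sketch with a synthetic orthogonal P and synthetic blocks (the sizes and construction are illustrative, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
sym = lambda M: (M + M.T) / 2

def blkdiag(B, C):
    """Embed B and C as the diagonal blocks of a larger zero matrix."""
    n1, n2 = B.shape[0], C.shape[0]
    M = np.zeros((n1 + n2, n1 + n2))
    M[:n1, :n1], M[n1:, n1:] = B, C
    return M

# Member stiffness matrices sharing one hidden block structure under P.
P, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Ks = [P @ blkdiag(sym(rng.standard_normal((3, 3))),
                  sym(rng.standard_normal((2, 2)))) @ P.T for _ in range(3)]

# K(x) = x_1 K_1 + x_2 K_2 + x_3 K_3 inherits the block structure for any x.
x = rng.standard_normal(3)
Kx = sum(xp * Kp for xp, Kp in zip(x, Ks))
T = P.T @ Kx @ P
print(np.allclose(T[:3, 3:], 0) and np.allclose(T[3:, :3], 0))  # True
```

Because the transformation is fixed once, the whole optimization over x can run in the block-diagonalized coordinates.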
5 application: SDP
min Σ_{p=1}^m b_p y_p  s.t.  C − Σ_{p=1}^m A_p y_p ⪰ O
variables: y_1, ..., y_m
coefficients: b_1, ..., b_m ∈ R; A_1, ..., A_m, C ∈ S^n (n×n symmetric matrices)
X ⪰ O: X is positive semidefinite
nonlinear but convex
6 application: SDP
truss optimization with eigenvalue constraints, min vol(x) s.t. Ω_1(x) ≥ Ω̄, can be solved by SDP [Ohsaki et al. 99] as
min c^T x  s.t.  Σ_{p=1}^m (K_p − Ω̄ M_p) y_p ⪰ O
also: optimization with buckling load factor constraints; robust structural optimization
7 application: SDP
after simultaneous block diagonalization P^T A_1 P, P^T A_2 P, ..., P^T C P, the single constraint C − Σ_{p=1}^m A_p y_p ⪰ O splits into independent small constraints
C^(j) − Σ_{p=1}^m A_p^(j) y_p ⪰ O,  j = 1, ..., l
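One reason the splitting pays off: positive semidefiniteness of a block-diagonal matrix is decided block by block, so the decomposed SDP works with several small cones instead of one large one. A minimal numpy sketch (block sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
sym = lambda M: (M + M.T) / 2

# A block-diagonal symmetric matrix: its minimum eigenvalue (and hence
# whether it is positive semidefinite) is the minimum over its blocks.
blocks = [sym(rng.standard_normal((k, k))) for k in (3, 2, 2)]
S = np.zeros((7, 7))
i = 0
for B in blocks:
    k = B.shape[0]
    S[i:i + k, i:i + k] = B
    i += k

full = np.linalg.eigvalsh(S).min()
per_block = min(np.linalg.eigvalsh(B).min() for B in blocks)
print(np.isclose(full, per_block))  # True
```

Eigenvalue (and interior-point) work scales superlinearly in matrix size, so three small checks are cheaper than one 7×7 check, and the gap widens rapidly for realistic dimensions.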
8 application: SDP
exploiting symmetry via group theory: [Gatermann & Parrilo 04], [Bai & de Klerk 07], [de Klerk & Sotirov 07]
via matrix *-algebra: [de Klerk, Pasechnik & Schrijver 07]
aim: a numerical algorithm that can be executed without any knowledge of the symmetry property in advance
9 matrix *-algebra
a subset T of R^{n×n} (the set of n×n matrices) is a matrix *-algebra if I_n ∈ T and, for all A, B ∈ T and α ∈ R:
A + B ∈ T, AB ∈ T, αA ∈ T, A^T ∈ T
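The closure axioms can be tested numerically for a concrete spanning set; a minimal sketch (the helper names are mine, not from the slides):

```python
import numpy as np

def in_span(M, basis, tol=1e-10):
    """Least-squares test of whether M lies in the linear span of `basis`."""
    B = np.column_stack([b.ravel() for b in basis])
    coef, *_ = np.linalg.lstsq(B, M.ravel(), rcond=None)
    return np.linalg.norm(B @ coef - M.ravel()) < tol

def is_star_algebra(basis):
    """Check the matrix *-algebra axioms on a spanning set: the identity is
    in the span, and the span is closed under products and transposes
    (closure under sums and scalar multiples holds for any linear span)."""
    n = basis[0].shape[0]
    if not in_span(np.eye(n), basis):
        return False
    for A in basis:
        if not in_span(A.T, basis):
            return False
        for B in basis:
            if not in_span(A @ B, basis):
                return False
    return True

I2 = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # J @ J = -I, J.T = -J
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])  # E12.T is not in span{I, E12}
print(is_star_algebra([I2, J]))    # True:  span{I, J} is a *-algebra
print(is_star_algebra([I2, E12]))  # False: not closed under transpose
```

The passing example, span{I, J}, is the 2×2 representation of the complex numbers, one of the irreducible types that appears in the structure theorem below.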
10 structure theorem
1. there exists an orthogonal Q such that Q^T T Q = S_1 ⊕ S_2 ⊕ ... ⊕ S_l, where each S_j is isomorphic to a simple algebra T_j
2. if T_j is simple, there exists an orthogonal P̂ such that P̂^T T_j P̂ consists of the matrices diag(B_j, B_j, ..., B_j), B_j ∈ T̂_j, with T̂_j irreducible
11-12 structure theorem: transformations
to simple components: there exists an orthogonal Q such that, for any A ∈ T, Q^T A Q = diag(S_1, S_2), and distinct simple components have no common eigenvalues
to irreducible components: there exists an orthogonal P̂ such that, for any A ∈ T, P^T A P = diag(B_1, B_1, B_1, B_2, B_2): the irreducible block B_1 appears with multiplicity 3 and B_2 with multiplicity 2, the copies within each group being identical
13-14 algorithm (1)
input: A_1, A_2, ..., A_m
1. generate random r_1, ..., r_m and put X = Σ_{i=1}^m r_i A_i
2. eigenvalue decomposition of X: Q^T X Q = diag(α_1 I_{m_1}, α_2 I_{m_2}, ..., α_k I_{m_k}), Q = [Q_1, Q_2, ..., Q_k]
3. compute Q^T A_1 Q, ..., Q^T A_m Q
4. rearrange the eigenvectors Q as Q̂ = [Q[K_1], Q[K_2], ..., Q[K_l]] so that Q̂^T A_1 Q̂, Q̂^T A_2 Q̂, ... are block-diagonal, which is the decomposition into simple components
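The four steps above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the inputs are synthesized with a hidden common block structure, and the "rearrange" of step 4 is realized here (my choice) as connected components of a coupling graph on the eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
sym = lambda M: (M + M.T) / 2

# Synthetic input: A_i = P0 * blkdiag(B_i, C_i) * P0^T for a hidden P0.
n1, n2 = 3, 2
P0, _ = np.linalg.qr(rng.standard_normal((n1 + n2, n1 + n2)))
def make_input():
    blk = np.zeros((n1 + n2, n1 + n2))
    blk[:n1, :n1] = sym(rng.standard_normal((n1, n1)))
    blk[n1:, n1:] = sym(rng.standard_normal((n2, n2)))
    return P0 @ blk @ P0.T
As = [make_input() for _ in range(3)]

# Steps 1-2: random combination, then eigenvalue decomposition.
r = rng.standard_normal(len(As))
X = sum(ri * Ai for ri, Ai in zip(r, As))
_, Q = np.linalg.eigh(X)

# Steps 3-4: eigenvectors i and j belong to the same block iff some
# transformed matrix couples them; group by connected components.
n = X.shape[0]
coupled = np.zeros((n, n), dtype=bool)
for A in As:
    coupled |= np.abs(Q.T @ A @ Q) > 1e-8

labels = -np.ones(n, dtype=int)
nblocks = 0
for s in range(n):
    if labels[s] < 0:
        stack = [s]
        while stack:  # flood-fill one connected component
            i = stack.pop()
            labels[i] = nblocks
            stack += [j for j in np.flatnonzero(coupled[i]) if labels[j] < 0]
        nblocks += 1

Qhat = Q[:, np.argsort(labels, kind="stable")]
print(nblocks)                             # generically 2 for this input
print(np.round(Qhat.T @ As[0] @ Qhat, 6))  # block-diagonal
```

The random combination in step 1 is what makes the method work generically: with probability one, X has no accidental eigenvalue coincidences across components, so its eigenspaces respect the hidden block structure.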
15-16 algorithm (2)
input: Â_1, Â_2, ..., Â_m (simple)
1. by the structure theorem, a simple component decomposes as P^T Â P = diag(B, B, B) (multiplicity 3); e.g. for B ∈ R^{2×2}, P^T Â P can be rearranged into the block form with (i,j) block b_ij I, where I = [1 0; 0 1]  (*)
we first find the transformation (*)
2. to obtain (*), use the relations b_ij P_i P_j^T = Â_ij; put P_1 = I, then
b_31 P_3 P_1^T = Â_31  ⇒  P_3 = Â_31 / b_31
b_23 P_2 P_3^T = Â_23  ⇒  P_2 = Â_23 P_3 / b_23
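The back-substitution in step 2 can be sketched as follows. Two simplifications, both mine: the coefficients b_ij are assumed known (in practice they can be recovered from the Â_ij themselves, since each Â_ij / b_ij is orthogonal), and the recovery order uses only the first block column instead of the chain on the slide.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_orth(k):
    Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
    return Q

# A simple component with multiplicity 3: block (i,j) is b_ij * P_i P_j^T.
k = 2
G = rng.standard_normal((3, 3))
b = (G + G.T) / 2                    # symmetric coefficient matrix
Ps = [rand_orth(k) for _ in range(3)]
A = np.block([[b[i, j] * Ps[i] @ Ps[j].T for j in range(3)]
              for i in range(3)])

# Recovery, assuming b known and b_i1 != 0: fix the first factor to the
# identity and read the rest off the first block column, R_i = A_i1 / b_i1.
Rs = [A[i * k:(i + 1) * k, 0:k] / b[i, 0] for i in range(3)]
V = np.zeros((3 * k, 3 * k))
for i, R in enumerate(Rs):
    V[i * k:(i + 1) * k, i * k:(i + 1) * k] = R

out = V.T @ A @ V                    # (i,j) block is now b_ij * I
print(np.allclose(out, np.kron(b, np.eye(k))))  # True
```

The check works because R_i = P_i P_1^T exactly, so R_i^T Â_ij R_j = b_ij P_1 P_1^T = b_ij I, i.e. the target form b ⊗ I.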
17-18 algorithm (1) & (2)
(1) decomposition into simple components: Q̂^T A_1 Q̂, Q̂^T A_2 Q̂, ...
(2) decomposition into irreducible components: P^T A_1 P, P^T A_2 P, ...
advantages: the algorithm uses only linear-algebraic computations (e.g. eigenvalue decomposition) and no knowledge of the algebraic structure (symmetry property); its execution is free from group representation theory and matrix *-algebra
19-21 ex.) spherical dome
spherical lattice dome with D_6-symmetry
variable linking: 3 kinds of members, K(x) = x_1 K_1 + x_2 K_2 + x_3 K_3, 21 DOF
[figure: each K_i is a full symmetric matrix; the transformed P^T K_i P is block-diagonal]
22-24 ex.) cubic truss (tetrahedral symmetry)
T_d-symmetry (tetrahedral)
variable linking: 4 kinds of members, K(x) = x_1 K_1 + ... + x_4 K_4, 24 DOF
[figure: each K_i is a full symmetric matrix; the transformed P^T K_i P is block-diagonal]
25-27 ex.) cubic truss (sparsity)
remove 4 members; variable linking: 3 kinds of members, K(x) = x_1 K_1 + ... + x_3 K_3, 24 DOF (cf. the original cubic truss)
block-diagonal decomposition = geometric symmetry + sparsity of matrices
[figure: the sparsity gives a finer decomposition of P^T K_i P than for the original truss]
28 conclusions
simultaneous block-diagonalization algorithm
input: A_1, A_2, ..., A_m: symmetric matrices
output: block-diagonal decomposition (common block structure)
validity: matrix *-algebra
application: member stiffness matrices of structures with symmetric configuration
the algorithm uses only linear-algebraic computation, does not use any knowledge of the symmetry property, and is free from group representation theory / matrix *-algebra
More information