On an Orthogonal Method of Finding Approximate Solutions of Ill-Conditioned Algebraic Systems and Parallel Computation
Proceedings of the World Congress on Engineering 2013 Vol I, WCE 2013, July 3-5, 2013, London, UK

On an Orthogonal Method of Finding Approximate Solutions of Ill-Conditioned Algebraic Systems and Parallel Computation

M. Otelbaev, B. I. Tuleuov, D. Zhussupova

Abstract — A new method of finding approximate solutions of linear algebraic systems with ill-conditioned or singular matrices, using Schmidt orthogonalization, is presented. The method can be used effectively to arrange parallel computations for matrices of large size.

Index Terms — ill-conditioned matrices, eigenvalues, approximate solutions, parallel computation, Schmidt orthogonalization

This work continues [1], where we considered the equation

    Ax = f,    (1)

where A is a square matrix of order n and f is an n-dimensional vector. In [1] we solved the problem of parallelizing the solution process for (1) and constructed a sufficiently effective parallel algorithm for a matrix A with bounded inverse. In this paper we suggest a method for finding approximate solutions of problem (1), together with a parallel computation scheme, for the case when the matrix A is noninvertible or ill-conditioned.

Increasing the efficiency of solving large systems of linear equations depends on the development of high-performance computing technology. Multiprocessor systems and supercomputers are now highly developed, and distributing the calculations into parallel branches speeds up the solution of the overall problem. Parallel computation for linear algebraic problems has been considered, for example, in the monographs [2]-[4], and program implementation issues in [5].

The difference between the proposed method and the known ones is that the existence of zero eigenvalues of the matrix A does not affect the efficiency of the iterative process in any way; only small but nonzero eigenvalues of A*A matter. Moreover, the estimates for the solution obtained in Theorem 3 do not depend on the small nonzero eigenvalues of the matrix A*A. We also note that the iterative formula (3), as far
as we know, has not been applied in computing practice earlier.

We denote by A* the adjoint matrix of A. The nonnegative square roots of the eigenvalues of the nonnegative matrix A*A we denote by s_j(A) = s_j (j = 1, 2, ...) and number them in non-increasing order, taking their multiplicities into account.

Manuscript received March 3, 2013.
M. Otelbaev is with the Department of Mechanics and Mathematics, L. N. Gumilyov Eurasian National University, Astana, Kazakhstan (otelbaevm@mail.ru).
B. I. Tuleuov is with the Department of Mechanics and Mathematics, L. N. Gumilyov Eurasian National University, Astana, Kazakhstan (berik_t@yahoo.com).
D. Zhussupova is with the Department of Mechanics and Mathematics, L. N. Gumilyov Eurasian National University, Astana, Kazakhstan (zhusdinara@gmail.com).

The orthonormal eigenvectors of the operator A*A corresponding to s_j we write as e_j (j = 1, 2, ...; A*A e_j = s_j² e_j). The numbers s_1(A) ≥ s_2(A) ≥ ... ≥ s_n(A) are called the singular values of the matrix A.

Note that the notion of an "ill-conditioned matrix" is relative: it often depends on hardware capabilities, and its boundary moves away as the power of computers increases. We say that the matrix A is ill-conditioned if ‖A‖·‖A⁻¹‖ is large (when A is nonsingular).

In what follows we write the Euclidean norm of a vector and the modulus of a number as |·|, the operator norm of a matrix as ‖·‖, and the scalar product as ⟨·,·⟩.

Let A and f be from (1). For ε ≥ 0 consider the functional

    J_ε(x) = |Ax − f|² + ε|x|².

We will find x̄, which is a solution of the problem

    inf_x J_ε(x) = J_ε(x̄).    (2)

On the left-hand side of (2) the infimum is taken over all vectors x in Rⁿ. Since the unit ball in Rⁿ is compact, a solution of (2) exists.

Remark 1: If the matrix A is invertible, then for ε = 0 problem (2) has the unique solution x̄ = A⁻¹f; if A is noninvertible, Ax̄ gives the best approximation of f by elements Ax. If ε ≠ 0 and A is invertible, x̄ is an approximate solution of the equation Ax = f. If ε = 0 and the matrix A is noninvertible, the problem may have several solutions; in this case we search for the solution with minimal norm.

Lemma 1: If ε ≥ 0 and x̄ is a solution of (2), then

    A*(Ax̄ − f) + εx̄ = 0.
Proof. Let x̄ be the solution of (2) and suppose ω = A*(Ax̄ − f) + εx̄ ≠ 0. Consider J_ε(x̄ + δω). We have

    J_ε(x̄ + δω) = J_ε(x̄) + 2δ⟨A*(Ax̄ − f) + εx̄, ω⟩ + δ²(|Aω|² + ε|ω|²)
                = J_ε(x̄) + 2δ|ω|² + δ²(|Aω|² + ε|ω|²).

Let the number δ satisfy the conditions

    δ < 0,    |δ| < 2|ω|² / (|Aω|² + ε|ω|²).

Such a choice is possible by the assumption ω = A*(Ax̄ − f) + εx̄ ≠ 0. Then we get J_ε(x̄ + δω) < J_ε(x̄), which is a contradiction. Lemma 1 is proved.

Lemma 2: Let ε ≥ 0 and x̄ be the solution of (2). Then for all x ∈ H we have

    εx + A*(Ax − f) = (εE + A*A)(x − x̄).

Proof. It easily follows from Lemma 1.

ISBN: ; ISSN: (Print); ISSN: (Online). WCE 2013
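The optimality condition of Lemma 1 can be illustrated numerically: for ε > 0 the minimizer of J_ε(x) = |Ax − f|² + ε|x|² is x̄ = (A*A + εE)⁻¹A*f (a closed form stated later in Theorem 2), and the residual A*(Ax̄ − f) + εx̄ vanishes. A minimal sketch, assuming NumPy; the rank-deficient matrix and data below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Illustrative data (not from the paper): a rank-deficient 3x2 system.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
f = np.array([1.0, 2.0, 3.0])
eps = 1e-2

# Minimizer of J_eps(x) = |Ax - f|^2 + eps |x|^2:
# x_bar = (A*A + eps E)^{-1} A* f.
n = A.shape[1]
x_bar = np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ f)

# Lemma 1: A*(A x_bar - f) + eps x_bar = 0.
residual = A.T @ (A @ x_bar - f) + eps * x_bar
print(np.linalg.norm(residual))  # close to zero, up to rounding
```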
Now we define the sequence x_j (j = 1, 2, ...) by the formula

    x_j = δ Σ_{k=0}^{j−1} [E − δ(A*A + εE)]^k A*f,    (3)

where δ satisfies the condition

    0 < δ < 2(‖A*A‖ + ε)⁻¹.    (4)

Theorem 1: Let ε ≥ 0, let δ be given by (4), let x̄ be the solution of (2), and let x_j be constructed by (3). Then

    x_j − x̄ = −[E − δ(A*A + εE)]^j x̄    (5)

and x_j converges to x̄ as j → +∞ at a geometric rate; i.e., there exists ρ ∈ (0, 1) such that

    |x_j − x̄| ≤ C ρ^j,    (6)

where C is a constant which depends on δ and ε.

Proof. By Lemma 1 we have A*f = εx̄ + A*Ax̄. Substituting A*f into (3) we get

    x_j = δ Σ_{k=0}^{j−1} [E − δ(A*A + εE)]^k [εx̄ + A*Ax̄]
        = Σ_{k=0}^{j−1} [E − δ(A*A + εE)]^k [E − (E − δ(εE + A*A))] x̄
        = −[E − δ(A*A + εE)]^j x̄ + x̄.

This implies (5). Further, since the matrix E − δ(A*A + εE) is self-adjoint, its norm is equal to the maximum of the moduli of its eigenvalues, which are 1 − δ(s_j² + ε) (j = 1, 2, ..., n). If for each j = 1, 2, ..., n the eigenvalues satisfy

    −1 < 1 − δ(s_j² + ε) < 1,

we get

    ‖E − δ(A*A + εE)‖ < 1.    (7)

These inequalities hold if δ(max_j s_j² + ε) < 2 and δ > 0. But max_j s_j² = ‖A*A‖, so condition (4) implies (7), and by (7) we get (6). The proof of Theorem 1 is complete.

Note that results similar to Theorem 1 for linear ill-posed problems have been obtained in [6] (see [6], p. 38).

The space generated by the eigenvectors of the matrix A*A corresponding to zero eigenvalues we denote by R₀⁽ⁿ⁾; i.e., if x ∈ R₀⁽ⁿ⁾, then x = Σ_{k=j₀+1}^{n} x_k e_k, where A*A e_k = 0 for k = j₀+1, ..., n. Thus R₀⁽ⁿ⁾ is the kernel of the matrix A*A. If the matrix A is invertible, the space R₀⁽ⁿ⁾ is trivial.

Lemma 3: If x ∈ R₀⁽ⁿ⁾, then ⟨A*f, x⟩ = 0; i.e., A*f belongs to Rⁿ ⊖ R₀⁽ⁿ⁾, the orthogonal complement of R₀⁽ⁿ⁾.

Proof. For ε = 0, by using Lemmas 1 and 2 we obtain A*f = A*Ax̄. Let x ∈ R₀⁽ⁿ⁾; then ⟨A*f, x⟩ = ⟨A*Ax̄, x⟩ = ⟨x̄, A*Ax⟩ = 0. This completes the proof of the lemma.

Lemma 4: If ε > 0 and x̄ is the solution of (2), then for every x ∈ R₀⁽ⁿ⁾ we have ⟨x̄, x⟩ = 0; i.e., x̄ belongs to Rⁿ ⊖ R₀⁽ⁿ⁾.

Proof. It easily follows from Lemmas 1 and 3.

Note that for ε = 0 the solution of (2) is determined only up to a term which is a solution of the equation Ax = 0, but the sequence x_j (j = 1, 2, ...) converges to the solution of (2) belonging to R
ⁿ ⊖ R₀⁽ⁿ⁾. In what follows, for ε = 0 we take as x̄(0) the limit of the sequence x_j from (3). Obviously the solution of (2) depends on ε, so sometimes we write x̄ = x̄(ε). We have

Lemma 5: If x̄(0) is the solution of (2) for ε = 0, then for every ε > 0 and δ ≥ 0

    x̄(ε) = (A*A + εE)⁻¹ A*A x̄(0) = (A*A + εE)⁻¹ A*f,
    x̄(0) = (E + ε(A*A)⁻¹) x̄(ε),
    x̄(ε) = (A*A + εE)⁻¹ (A*A + δE) x̄(δ),
    x̄(ε) − x̄(δ) = (δ − ε)(A*A + εE)⁻¹ x̄(δ),

where the inverse (A*A)⁻¹ is understood on the subspace Rⁿ ⊖ R₀⁽ⁿ⁾.

Proof. It easily follows from Lemma 1.

From the lemmas proved above and Theorem 1 we state

Theorem 2:
a) The solution x̄(ε) of (2) depends continuously on ε > 0, and x̄(ε) = (A*A + εE)⁻¹ A*f.
b) As j → +∞, the limit of the sequence x_j(ε) from (3) depends continuously on ε ≥ 0.
c) Let s₁ ≥ s₂ ≥ ... ≥ s_{j₀} > 0, s_{j₀+1} = s_{j₀+2} = ... = 0, where s_k² are the eigenvalues of the matrix A*A, and let e₁, e₂, ..., e_n be the corresponding orthonormal system of eigenvectors. If x̄(ε) (ε > 0) is the solution of (2) and x_j(ε) is given by (3), then x̄(ε), x_j(ε) ∈ Rⁿ ⊖ R₀⁽ⁿ⁾ (j = 1, 2, ...) and

    x_j^k(ε) − x̄^k(ε) = −(1 − δ(s_k² + ε))^j x̄^k(ε),  k ≤ j₀;
    x_j^k(ε) = x̄^k(ε),  j₀ + 1 ≤ k.    (8)

Here x_j^k(ε) = ⟨x_j(ε), e_k⟩ and x̄^k(ε) = ⟨x̄(ε), e_k⟩.
d) The number ρ ∈ (0, 1) from Theorem 1 is defined by

    ρ = max{|1 − δ(s_{j₀}² + ε)|, |1 − δ(‖A*A‖ + ε)|} < 1.

Note that if the matrix A*A has no zero eigenvalues, j₀ is taken to be n. Item c) of Theorem 2 implies that for ε = 0 the vector x̄(ε) has minimal norm among all solutions of the problem. Furthermore, for each ε ≥ 0 both x̄(ε) and x_j(ε) (j = 1, 2, ...) belong to the subspace Rⁿ ⊖ R₀⁽ⁿ⁾, where R₀⁽ⁿ⁾ is the kernel of the matrix A*A.

Below we suggest a method of parallel computation for solving problem (2) based on Theorems 1 and 2. Let n be a large enough integer, and suppose we have an (N+1)-processor system. Let k₀, k₁, ..., k_N be integers such that k_{m−1} < k_m (m = 1, 2, ..., N), k₀ = 0, k_N = n. We define the matrices

    A_m = (a_{ik}),  k_{m−1}+1 ≤ i ≤ k_m,  1 ≤ k ≤ n,
and the matrices

    Ã_m = (ã_{ik}),  k_{m−1}+1 ≤ i ≤ k_m,  1 ≤ k ≤ n,  m = 1, 2, ..., N,

in such a way that the rows numbered from k_{m−1}+1 to k_m coincide with those of the matrices A and A*, respectively. Here ã_{ik} and a_{ik} are elements of A* and A, so that ã_{ik} = ā_{ki}.

Also we use the vectors

    ω_j = [E − δ(A*A + εE)] ω_{j−1},  ω₀ = δA*f,  j = 1, 2, ....

Then formula (3) can be written in the following way:

    x_{j+1} = x_j + ω_j,  x₁ = ω₀,  j = 1, 2, ....

Before the computation, the matrices A_m and Ã_m are passed to the processors C_m (m = 1, 2, ..., N), and the vector ω₀ = δA*f to the root processor C_{N+1}. The algorithm works as follows.

1) Processor C_{N+1} forms the j-th approximation x_j and passes the vector ω_{j−1} to the processors C_m (m = 1, 2, ..., N). Each processor C_m calculates A_m ω_{j−1}, spending (k_m − k_{m−1})n multiplications and (k_m − k_{m−1})(n − 1) additions, and sends the resulting vector to C_{N+1}.

2) C_{N+1} forms the vector Aω_{j−1} from the blocks A_m ω_{j−1} (m = 1, 2, ..., N), spending (n − 1)N additions, and transmits the vector Aω_{j−1} to the processors.

3) Each processor C_m calculates Ã_m(Aω_{j−1}), spending (k_m − k_{m−1})n multiplications and (k_m − k_{m−1})(n − 1) additions, and transmits Ã_m(Aω_{j−1}) to the processor C_{N+1}.

4) Combining the received vectors, C_{N+1} gets A*(Aω_{j−1}) from the blocks Ã_m(Aω_{j−1}). The root processor then calculates

    ω_j = (1 − δε)ω_{j−1} − δ A*(Aω_{j−1})

and forms the approximate solution x_{j+1} = x_j + ω_j (j = 1, 2, ...), spending 2n multiplications and (n − 1)N + 2n additions.

Per cycle, the root processor C_{N+1} thus carries out 2n multiplications and (n − 1)N + 2n additions, each processor C_m (m = 1, 2, ..., N) spends 2(k_m − k_{m−1})n multiplications and 2(k_m − k_{m−1})(n − 1) additions, and all the processors together spend 2n² multiplications and 2n(n − 1) additions. After s iterations all the computers spend 2sn² + n² + n multiplications and 2sn(n − 1) + n(n − 1) additions (the terms without s account for the initial vector ω₀ = δA*f). Division is absent.

It follows from Theorem 1 that for the efficiency of the iterative formula (3) the existence of small but nonzero eigenvalues of the matrix A*A matters, while the existence of zero eigenvalues of the matrix A plays no role at all!
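The cycle above can be sketched serially, with each "processor" C_m handled by a loop over row blocks of A and of A*. The diagonal test matrix (singular, with eigenvalues of A*A equal to 4, 1, 0.25, 0), the right-hand side f, the block cuts and δ below are illustrative assumptions:

```python
import numpy as np

# Serial sketch of the paper's parallel cycle: A is split into N row blocks
# A_m (and A* into matching blocks Ã_m); each block plays the role of one
# processor C_m; the root combines the blocks and updates omega and x.
A = np.diag([2.0, 1.0, 0.5, 0.0])         # singular: one zero eigenvalue
f = np.array([2.0, 3.0, 5.0, 7.0])
eps = 0.0
delta = 1.0 / np.linalg.norm(A.T @ A, 2)  # satisfies 0 < delta < 2/(||A*A|| + eps)

cuts = [0, 2, 4]                          # k_0 = 0 < k_1 < k_2 = n, N = 2 blocks
A_blocks  = [A[a:b]   for a, b in zip(cuts, cuts[1:])]
At_blocks = [A.T[a:b] for a, b in zip(cuts, cuts[1:])]

w = delta * A.T @ f                       # omega_0 = delta A* f
x = w.copy()                              # x_1 = omega_0
for _ in range(400):
    Aw   = np.concatenate([Am @ w for Am in A_blocks])      # steps 1-2
    AtAw = np.concatenate([Atm @ Aw for Atm in At_blocks])  # steps 3-4
    w = (1.0 - delta * eps) * w - delta * AtAw              # omega_j update
    x += w                                                  # x_{j+1} = x_j + omega_j

# x approaches the minimal-norm least-squares solution, here (1, 3, 10, 0),
# even though A has a zero eigenvalue.
print(x)
```

The geometric rate ρ of Theorem 2d) is governed here by the smallest nonzero eigenvalue of A*A (0.25), not by the zero eigenvalue, which is why the zero mode causes no harm.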
Therefore we come to the question: is it possible to reduce the noise due to the small nonzero eigenvalues of the matrix A*A? It turns out that this is possible; we demonstrate it by a simple example. Let ε = 0.01 and let A and f, (9), be such that the matrix A*A has a small nonzero eigenvalue. The iterative process, by formula (8), may then last long. However, let the matrix A be replaced with a nearby approximation Ã (and f with f̃), (10), for which the eigenvalues of the matrix Ã*Ã are λ₁ = 0 and λ₂ = 10, so that its norm is 10. Then δ may be taken from the interval (0, 0.2). Let us take δ = 0.1 < 0.2. Then by Theorem 2 we obtain ρ = 0, and therefore the problem with the matrix Ã and the vector f̃ from (10) is solved in one step.

The equation Ãx = f̃ has a family of solutions; the one-step iteration yields the vector of minimal norm among them, and the vector found will serve as an approximate solution of problem (2) with the matrix A and the vector f from (9). Recall that we try to reduce the norm |Ax − f| while not increasing the norm |x| too much. Indeed, comparing the value of |Ax − f|² + ε|x|² for ε = 0.01, the vector obtained from the approximate matrix Ã gives a smaller value than the exact solution x̄ of the system Ax = f, and hence is closer to the solution of problem (2) than x̄.

This simple idea, traced here on a simple example, we will develop in the next work for matrices arising in the numerical solution of ill-posed direct and inverse problems of mathematical physics. In general this effect is not always achievable. But we have

Theorem 3: Let ε ≥ 0 and let x_j (j = 1, 2, ...) be the sequence of vectors from (3). Then
a) if γ > 0 and j is chosen to satisfy

    (1 − δ(γ + ε))^{2j} ≤ γ,

the following inequality holds:

    |Ax_j(ε) − f| ≤ 2|f|√γ + √γ|x̄(ε)| + |Ax̄(ε) − f|;

b) if j is chosen so that (1 − δ(γ + ε))^{2j} ≤ γ², then

    |A*A(x_j(ε) − x̄(ε))|² ≤ γ²[|A*Ax̄(ε)|² + |x̄(ε)|²];

c) if j is chosen so that 5(1 − δ(γ + ε))^{2j}‖A‖² ≤ γ, then

    |A*A(x_j(ε) − x̄(ε))|² ≤ 8γ|f|²;

d) for ε = 0, if j is chosen so that 5(1 − δγ)^{2j}‖A‖² ≤ γ, then

    |A*(Ax_j(0) − f)|² ≤ 8γ|f|².    (11)

Proof. Let ε ≥ 0 and

    inf_x (|Ax − f|² + ε|x|²) = |Ax̄(ε) − f|² + ε|x̄(ε)|².    (12)

For any vector u we have |Au|² = ⟨Au, Au⟩ = ⟨A*Au, u⟩ = |(A*A)^{1/2}u|². Therefore, since (A*A)^{1/2}e_k = s_k e_k, using (8) from Theorem 2 we have

    |A(x_j(ε) − x̄(ε))|² = |(A*A)^{1/2}(x_j(ε) − x̄(ε))|²
      = Σ_k s_k² (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|².

Hence for all γ > 0 we get

    |A(x_j(ε) − x̄(ε))|²
      = Σ_{s_k² > γ} s_k² (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|² + Σ_{s_k² ≤ γ} s_k² (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|²
      ≤ (1 − δ(γ + ε))^{2j} Σ_{k=1}^{n} s_k² |x̄^k(ε)|² + γ Σ_{s_k² ≤ γ} |x̄^k(ε)|²
      ≤ (1 − δ(γ + ε))^{2j} |(A*A)^{1/2} x̄(ε)|² + γ |x̄(ε)|²
      = (1 − δ(γ + ε))^{2j} |Ax̄(ε)|² + γ |x̄(ε)|².    (13)

But

    |Ax̄(ε)| ≤ |Ax̄(ε) − f| + |f| ≤ [inf_x (|Ax − f|² + ε|x|²)]^{1/2} + |f| ≤ |f| + |f| = 2|f|,

so that

    |Ax̄(ε)|² ≤ 4|f|².    (14)

Using this estimate and (13), we arrive at the estimate

    |A(x_j(ε) − x̄(ε))|² ≤ 4(1 − δ(γ + ε))^{2j} |f|² + γ |x̄(ε)|².

Then

    |Ax_j(ε) − f| ≤ |A(x_j(ε) − x̄(ε))| + |Ax̄(ε) − f|.

If the condition (1 − δ(γ + ε))^{2j} ≤ γ holds, then from the last inequality we obtain

    |Ax_j(ε) − f| ≤ 2|f|√γ + √γ|x̄(ε)| + |Ax̄(ε) − f|,

which implies item a) of the theorem.

Further, using (8) we have

    |A*A(x_j(ε) − x̄(ε))|² = Σ_k s_k⁴ (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|²
      = Σ_{s_k² > γ} s_k⁴ (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|² + Σ_{s_k² ≤ γ} s_k⁴ (1 − δ(s_k² + ε))^{2j} |x̄^k(ε)|².    (15)

It follows from (15) that

    |A*A(x_j(ε) − x̄(ε))|² ≤ (1 − δ(γ + ε))^{2j} |A*Ax̄(ε)|² + γ² |x̄(ε)|².

If j is taken from (1 − δ(γ + ε))^{2j} ≤ γ², we obtain

    |A*A(x_j(ε) − x̄(ε))|² ≤ γ² [|A*Ax̄(ε)|² + |x̄(ε)|²],

which implies the assertion of item b) of the theorem.

By (15) we also have

    |A*A(x_j(ε) − x̄(ε))|² ≤ (1 − δ(γ + ε))^{2j} |A*Ax̄(ε)|² + γ Σ_{s_k² ≤ γ} s_k² |x̄^k(ε)|²
      ≤ (1 − δ(γ + ε))^{2j} |A*(Ax̄(ε) − f) + A*f|² + γ |Ax̄(ε)|².

But in view of (14) we get the following inequalities:

    |A*(Ax̄(ε) − f) + A*f|² ≤ 2‖A‖²(|Ax̄(ε) − f|² + |f|²) ≤ 2‖A‖²((|Ax̄(ε)| + |f|)² + |f|²) ≤ 20‖A‖²|f|²,
    |Ax̄(ε)|² ≤ 4|f|².
Therefore

    |A*A(x_j(ε) − x̄(ε))|² ≤ 20(1 − δ(γ + ε))^{2j} ‖A‖² |f|² + 4γ |f|².

Choosing j from

    20(1 − δ(γ + ε))^{2j} ‖A‖² ≤ 4γ,

i.e., from 5(1 − δ(γ + ε))^{2j} ‖A‖² ≤ γ, we get

    |A*A(x_j(ε) − x̄(ε))|² ≤ 8γ |f|²,

which completes the assertion of item c) of the theorem. For ε = 0, using Lemma 1 we have A*Ax̄ = A*f; therefore item c) of the theorem implies item d). This completes the proof of the theorem.

In practice it is usually important to reduce the difference |Ax − f| (it is less important to find the exact solution of the equation Ax = f!). Thus the theorem allows us to solve problem (2) effectively. Note that the usage of the formula for x_j does not require ε > 0; the most suitable case is ε = 0.

Now, based on Theorem 3, we can suggest the following numerical algorithm for problem (2), which forms a sufficiently effective process of solving problem (2) with an ill-conditioned or noninvertible matrix. The algorithm is distinguished from the one above only by these points:
- a number γ > 0 is chosen (it stands for the accuracy);
- the number ε is chosen to be zero;
- the condition of item d) of Theorem 3 is verified after every iteration cycle;
- the computation is finished when condition (11) holds.

Now we briefly describe the Schmidt orthogonalization process. Let e₁, ..., e_n be a basis in H and let A be an ill-conditioned matrix. Then Ae_j = (a_{1j}, a_{2j}, ..., a_{nj}), A = {a_{ij}}. We orthogonalize {Ae_j}_{j=1}^{n} by the Schmidt process. Fix a number ε > 0. If |Ae₁| > ε, we put ψ₁ = |Ae₁|⁻¹ Ae₁; otherwise, if |Ae₁| ≤ ε, we put ψ₁ = 0. Let the vectors ψ₁, ..., ψ_j be constructed. We define

    ψ̃_{j+1} = Ae_{j+1} − Σ_{k=1}^{j} α_k ψ_k,

where

    α_k = 0 if ψ_k = 0,  α_k = ⟨Ae_{j+1}, ψ_k⟩ if ψ_k ≠ 0.

Now we define ψ_{j+1}. If ψ̃_{j+1} ≠ 0 and |ψ̃_{j+1}| = |Ae_{j+1} − Σ_{k=1}^{j} α_k ψ_k| > ε, we put ψ_{j+1} = ψ̃_{j+1} / |ψ̃_{j+1}|. In the case ψ̃_{j+1} = 0 or |Ae_{j+1} − Σ_{k=1}^{j} α_k ψ_k| ≤ ε we put ψ_{j+1} = 0.

The calculation of A⁻¹ψ_k presents no difficulty. If ψ_k = 0, we get A⁻¹ψ_k = 0; otherwise ψ_k ≠ 0, and in view of the formula

    ψ_{k+1} = (Ae_{k+1} − Σ_{l=1}^{k} α_l ψ_l) / |ψ̃_{k+1}|

we obtain the representation

    ψ_{k+1} = AΘ_{k+1},

where Θ_{k+1} is defined recurrently: Θ_{k+1} = (e_{k+1} − Σ_{l=1}^{k} α_l Θ_l) / |ψ̃_{k+1}|.

Finally we get ψ₁, ..., ψ_n, which satisfy the conditions

    ⟨ψ_i, ψ_j⟩ = 0 for i ≠ j,  |ψ_j| = 1 or ψ_j = 0.

Now put

    f₁ = Σ_j ⟨f, ψ_j⟩ ψ_j = Σ_{ψ_j ≠ 0} ⟨f, ψ_j⟩ AΘ_j.

We now take as the approximate solution of the problem inf_{x∈H} |Ax − f| the vector

    x̃ = Σ_{ψ_j ≠ 0} ⟨f, ψ_j⟩ Θ_j.

It can be shown that the inequality

    |Ax̃ − f| ≤ inf_{x∈H} |Ax − f| + C(ε)

holds with C(ε) → 0 as ε → 0.

Some of the results of this work have been announced in [7] (see also [8]).

REFERENCES
[1] Zh. A. Baldybek, M. Otelbaev, "Problem of Parallelization for a Linear Algebraic System," Matematicheskii Zhurnal, No. (39), pp. 53-58 (in Russian).
[2] V. V. Voevodin and Vl. V. Voevodin, Parallel Computations. BKhV-Peterburg, St. Petersburg, 2002, 608 pp. (in Russian).
[3] J. M. Ortega, Introduction to Parallel and Vector Solution of Linear Systems. Plenum Press, New York, 1988.
[4] D. P. Bertsekas, J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 1997.
[5] Brucel P. I., Parallel Programming: An Introduction. Prentice Hall International (UK) Limited, 1993, 270 pp.
[6] I. Gohberg, S. Goldberg, M. A. Kaashoek, Basic Classes of Linear Operators. Birkhäuser Verlag, 2003, p. 38.
[7] M. Otelbaev, B. Tuleuov, D. Zhusupova, "On a Method of Finding Approximate Solutions of Ill-conditioned Algebraic Systems and Parallel Computation," Eurasian Mathematical Journal, pp. 49-5.
[8] M. Otelbaev, D. Zhusupova, B. Tuleuov, "Parallelization of a linear algebraic system with an invertible matrix," Vestnik Bashkirsk. Univ., vol. 6, No. 4, pp. 9-33 (in Russian).
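The ε-thresholded Schmidt process just described, together with the preimages Θ_j (so that ψ_j = AΘ_j) and the approximate solution x̃ = Σ_{ψ_j ≠ 0} ⟨f, ψ_j⟩ Θ_j, can be sketched as follows; the test matrix, right-hand side and threshold are illustrative assumptions, not from the paper:

```python
import numpy as np

def schmidt_approx_solution(A, f, eps):
    # eps-thresholded Schmidt orthogonalization of the vectors A e_1, ..., A e_n,
    # maintaining preimages Theta_j with psi_j = A Theta_j.
    n = A.shape[0]
    psis, thetas = [], []
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        Ae = A @ e                     # A e_{j+1} (the j-th column of A)
        psi, theta = Ae.copy(), e
        for pk, tk in zip(psis, thetas):
            a = Ae @ pk                # alpha_k = <A e_{j+1}, psi_k>; 0 when psi_k = 0
            psi = psi - a * pk
            theta = theta - a * tk
        norm = np.linalg.norm(psi)
        if norm > eps:                 # keep the direction, normalized
            psis.append(psi / norm)
            thetas.append(theta / norm)
        else:                          # |psi~| <= eps: set psi_{j+1} = 0
            psis.append(np.zeros(n))
            thetas.append(np.zeros(n))
    # x~ = sum over nonzero psi_j of <f, psi_j> Theta_j; then A x~ is the
    # projection of f onto the span of the kept psi_j.
    return sum((f @ p) * t for p, t in zip(psis, thetas))

# Illustrative check: for a well-conditioned A and small eps this reproduces
# the exact solution of Ax = f.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
x = schmidt_approx_solution(A, f, 1e-8)
print(A @ x)   # close to f
```

For a singular or ill-conditioned A, directions with |ψ̃| ≤ ε are discarded, which is exactly how the small eigenvalues of A*A are filtered out of the approximate solution.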
, pp. 23 36, 2006 Vestnik S.-Peterburgskogo Universiteta. Matematika UDC 519.63 SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA N. K. Krivulin The problem on the solutions of homogeneous
More informationON THE SINGULAR DECOMPOSITION OF MATRICES
An. Şt. Univ. Ovidius Constanţa Vol. 8, 00, 55 6 ON THE SINGULAR DECOMPOSITION OF MATRICES Alina PETRESCU-NIŢǍ Abstract This paper is an original presentation of the algorithm of the singular decomposition
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationEXPLICIT UPPER BOUNDS FOR THE SPECTRAL DISTANCE OF TWO TRACE CLASS OPERATORS
EXPLICIT UPPER BOUNDS FOR THE SPECTRAL DISTANCE OF TWO TRACE CLASS OPERATORS OSCAR F. BANDTLOW AND AYŞE GÜVEN Abstract. Given two trace class operators A and B on a separable Hilbert space we provide an
More informationNotes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.
Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where
More informationCheat Sheet for MATH461
Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A
More informationMAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction
MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More informationq n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis.
Exam Review Material covered by the exam [ Orthogonal matrices Q = q 1... ] q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis
More informationMATH 425-Spring 2010 HOMEWORK ASSIGNMENTS
MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT
More informationThen x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r
Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you
More informationSpring, 2012 CIS 515. Fundamentals of Linear Algebra and Optimization Jean Gallier
Spring 0 CIS 55 Fundamentals of Linear Algebra and Optimization Jean Gallier Homework 5 & 6 + Project 3 & 4 Note: Problems B and B6 are for extra credit April 7 0; Due May 7 0 Problem B (0 pts) Let A be
More informationFamily Feud Review. Linear Algebra. October 22, 2013
Review Linear Algebra October 22, 2013 Question 1 Let A and B be matrices. If AB is a 4 7 matrix, then determine the dimensions of A and B if A has 19 columns. Answer 1 Answer A is a 4 19 matrix, while
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationLinear Algebra Practice Problems
Linear Algebra Practice Problems Page of 7 Linear Algebra Practice Problems These problems cover Chapters 4, 5, 6, and 7 of Elementary Linear Algebra, 6th ed, by Ron Larson and David Falvo (ISBN-3 = 978--68-78376-2,
More informationorthogonal relations between vectors and subspaces Then we study some applications in vector spaces and linear systems, including Orthonormal Basis,
5 Orthogonality Goals: We use scalar products to find the length of a vector, the angle between 2 vectors, projections, orthogonal relations between vectors and subspaces Then we study some applications
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationWavelets and Linear Algebra
Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationIn English, this means that if we travel on a straight line between any two points in C, then we never leave C.
Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from
More information2. Every linear system with the same number of equations as unknowns has a unique solution.
1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationHigher rank numerical ranges of rectangular matrix polynomials
Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationChapter 6: Orthogonality
Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationRemark on Hopf Bifurcation Theorem
Remark on Hopf Bifurcation Theorem Krasnosel skii A.M., Rachinskii D.I. Institute for Information Transmission Problems Russian Academy of Sciences 19 Bolshoi Karetny lane, 101447 Moscow, Russia E-mails:
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More information6 Inner Product Spaces
Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 20 1 / 20 Overview
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationMatrix Theory, Math6304 Lecture Notes from September 27, 2012 taken by Tasadduk Chowdhury
Matrix Theory, Math634 Lecture Notes from September 27, 212 taken by Tasadduk Chowdhury Last Time (9/25/12): QR factorization: any matrix A M n has a QR factorization: A = QR, whereq is unitary and R is
More information1 Principal component analysis and dimensional reduction
Linear Algebra Working Group :: Day 3 Note: All vector spaces will be finite-dimensional vector spaces over the field R. 1 Principal component analysis and dimensional reduction Definition 1.1. Given an
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationProperties of Linear Transformations from R n to R m
Properties of Linear Transformations from R n to R m MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Topic Overview Relationship between the properties of a matrix transformation
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationOHSx XM511 Linear Algebra: Solutions to Online True/False Exercises
This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationInterval solutions for interval algebraic equations
Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya
More information