Kaczmarz algorithm in Hilbert space
STUDIA MATHEMATICA 169 (2) (2005)

Kaczmarz algorithm in Hilbert space

by Rainis Haller (Tartu) and Ryszard Szwarc (Wrocław)

Abstract. The aim of the Kaczmarz algorithm is to reconstruct an element in a Hilbert space from data given by inner products of this element with a given sequence of vectors. The main result characterizes the sequences of vectors leading to reconstruction of any element in the space. This generalizes some results of Kwapień and Mycielski.

1. Introduction. Let $\{e_n\}$ be a sequence of unit vectors in a Hilbert space $H$. For a given $x \in H$ we have the numbers $\{\langle x, e_n\rangle\}$, and we want to reconstruct $x$ from these numbers. For this the sequence $\{e_n\}$ should be linearly dense. Define

$$x_0 = \langle x, e_0\rangle e_0, \qquad x_n = x_{n-1} + \langle x - x_{n-1}, e_n\rangle e_n.$$

We are interested in when $x_n \to x$ for any $x \in H$. The sequences $\{e_n\}$ for which this holds will be called effective, and the formula above is called the Kaczmarz algorithm. In 1937 Kaczmarz (see [1]) considered this problem in the finite-dimensional case. He proved that if $\dim H < \infty$ and the sequence $\{e_n\}$ is linearly dense and periodic, then it is effective.

Let $P_n$ be the orthogonal projection onto $e_n$. Then we have

(1) $x_n = x_{n-1} + P_n(x - x_{n-1})$, hence $x - x_n = (I - P_n)(x - x_{n-1}) = (I - P_n)(I - P_{n-1})\cdots(I - P_1)(I - P_0)\,x$.

Therefore the sequence $\{e_n\}$ is effective if and only if the operators $(I - P_n)\cdots(I - P_1)(I - P_0)$ tend to zero strongly. Since the norms of these operators are bounded, it suffices to get pointwise convergence on a linearly dense subset of vectors, e.g. on the members of the sequence $\{e_n\}$ itself.

2000 Mathematics Subject Classification: Primary 41A65.
Key words and phrases: Kaczmarz algorithm, Hilbert space.
Supported by the Research Training Network "Harmonic Analysis and Related Problems", Contract HPRN-CT. The authors were also supported by the Estonian Science Foundation, Grant no. 4400, and KBN (Poland), Grant 2 P03A, respectively.
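The recurrence above is easy to try out numerically. The following is a minimal sketch (not from the paper; all names and the choice of vectors are ours) of the periodic, finite-dimensional setting of Kaczmarz's 1937 theorem: three pairwise non-orthogonal unit vectors spanning $\mathbb{R}^2$, cycled periodically.

```python
import numpy as np

# Unit vectors at angles 0, 60 and 120 degrees: linearly dense in R^2
# and pairwise non-orthogonal, so the periodic Kaczmarz theorem applies.
angles = np.deg2rad([0.0, 60.0, 120.0])
e = [np.array([np.cos(t), np.sin(t)]) for t in angles]

x = np.array([1.0, -2.0])              # the element to reconstruct
data = [np.dot(x, en) for en in e]     # the only information the algorithm uses

# x_0 = <x, e_0> e_0;  x_n = x_{n-1} + <x - x_{n-1}, e_n> e_n,
# cycling through the vectors periodically.
xn = data[0] * e[0]
for sweep in range(100):
    for d, en in zip(data, e):
        xn = xn + (d - np.dot(xn, en)) * en   # project the current error onto e_n

print(np.round(xn, 10))   # converges to x = [1, -2]
```

Per full sweep the error is multiplied by the operator $A = (I-P_2)(I-P_1)(I-P_0)$ with $\|A\| < 1$, so convergence here is geometric.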
The Kaczmarz theorem can now be proved as follows. Let $\dim H < \infty$ and let $\{e_n\}$ be $N$-periodic. For $A = (I - P_{N-1})\cdots(I - P_1)(I - P_0)$ it suffices to show that $A^n$ tends to zero. We claim that $\|A\| < 1$. If not, there is a vector $x$ such that $\|Ax\| = \|x\| = 1$. Then $\|(I - P_0)x\| \ge \|Ax\| = \|x\|$, hence $(I - P_0)x = x$. Similarly $(I - P_1)x = x, \ldots, (I - P_{N-1})x = x$, which implies that $x \perp e_0, e_1, \ldots, e_{N-1}$. Since the vectors $\{e_n\}_{n=0}^{N-1}$ are linearly dense we get $x = 0$, a contradiction.

We now turn to the infinite-dimensional case. We recall some basic properties of the algorithm which can be found in [2]. By construction the vector $x_n$ is a linear combination of $e_0, e_1, \ldots, e_n$. It can be shown that

(2) $x_n = \sum_{i=0}^{n} \langle x, g_i\rangle e_i$,

where the vectors $g_n$ are given by the recurrence relation

(3) $g_0 = e_0, \qquad g_n = e_n - \sum_{i=0}^{n-1} \langle e_n, e_i\rangle g_i,$

or equivalently

(4) $\sum_{i=0}^{n} m_{ni}\, g_i = e_n, \qquad m_{ni} = \langle e_n, e_i\rangle, \quad m_{nn} = 1.$

By (2) we have $x - x_{n-1} = (x - x_n) + \langle x, g_n\rangle e_n$. Since by (1) the vectors $x - x_n$ and $e_n$ are orthogonal, we get

$$\|x\|^2 = \|x - x_0\|^2 + |\langle x, g_0\rangle|^2, \qquad \|x - x_{n-1}\|^2 = \|x - x_n\|^2 + |\langle x, g_n\rangle|^2, \quad n \ge 1.$$

Summing up these equalities gives

$$\|x\|^2 - \lim_{n\to\infty} \|x - x_n\|^2 = \sum_{n=0}^{\infty} |\langle x, g_n\rangle|^2.$$

Therefore the sequence $\{e_n\}$ is effective if and only if

(5) $\|x\|^2 = \sum_{n=0}^{\infty} |\langle x, g_n\rangle|^2$ for any $x \in H$.

This equation means that $\{g_n\}$ is a tight frame with constant 1, as was already mentioned in [2]. We have noticed before that it suffices to check formula (5) on the vectors $\{e_n\}$, provided they form a linearly dense subset of $H$.
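The vectors $g_n$ and the telescoping energy identity can be checked directly on a finite truncation. This is an illustrative sketch (not from the paper; the random vectors and variable names are ours): it builds $g_0,\dots,g_4$ from recurrence (3), runs one pass of the algorithm, and tests formula (2) and the exact identity $\|x\|^2 - \|x - x_n\|^2 = \sum_{i\le n} \langle x, g_i\rangle^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
e = [v / np.linalg.norm(v) for v in rng.standard_normal((5, 4))]  # unit vectors in R^4
x = rng.standard_normal(4)

# Recurrence (3): g_0 = e_0,  g_n = e_n - sum_{i<n} <e_n, e_i> g_i.
g = []
for n, en in enumerate(e):
    g.append(en - sum(np.dot(en, e[i]) * g[i] for i in range(n)))

# One pass of the algorithm: x_0 = <x, e_0> e_0, x_n = x_{n-1} + <x - x_{n-1}, e_n> e_n.
xn = np.dot(x, e[0]) * e[0]
for en in e[1:]:
    xn = xn + np.dot(x - xn, en) * en

# Formula (2): x_n = sum_i <x, g_i> e_i, and the energy identity
# ||x||^2 - ||x - x_n||^2 = sum_i <x, g_i>^2 coming from the orthogonality in (1).
x_from_g = sum(np.dot(x, gi) * ei for gi, ei in zip(g, e))
energy = sum(np.dot(x, gi) ** 2 for gi in g)
```

Both identities hold exactly (up to floating-point error) for every finite $n$; effectivity is the statement that the energy sum exhausts $\|x\|^2$ in the limit.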
2. Characterization of effective sequences. Formula (3) implies that $g_n$ is a linear combination of the vectors $e_0, e_1, \ldots, e_n$, i.e.

(6) $g_n = e_n + \sum_{i=0}^{n-1} c_{ni}\, e_i$

for some coefficients $c_{ni}$. By (4) these coefficients can be obtained by taking the algebraic inverse of the lower triangular matrix $I + M$, where

$$M = \begin{pmatrix} 0 & & & & \\ m_{10} & 0 & & & \\ m_{20} & m_{21} & 0 & & \\ m_{30} & m_{31} & m_{32} & 0 & \\ \vdots & & & & \ddots \end{pmatrix}, \qquad m_{ij} = \langle e_i, e_j\rangle.$$

Namely $(I + M)^{-1} = I + U$, where

$$U = \begin{pmatrix} 0 & & & & \\ c_{10} & 0 & & & \\ c_{20} & c_{21} & 0 & & \\ c_{30} & c_{31} & c_{32} & 0 & \\ \vdots & & & & \ddots \end{pmatrix}.$$

The matrix $U$ and the coefficients $c_{ni}$ will play a crucial role in all that follows. Since the matrices $I + M$ and $I + U$ are inverse to each other, we get

(7) $UM = MU = -M - U$,
(8) $U^*M^* = M^*U^* = -M^* - U^*$.

Our first simple result shows that the matrix $U$ is a contraction on the Hilbert space $\ell^2(\mathbb{N}_0)$.

Proposition 1. Let $U$ and $M$ be strictly lower triangular matrices such that $MU = UM = -U - M$. Then $U$ is a contraction if and only if the matrix $M + M^* + I$ is positive definite. In that case there are a Hilbert space $H$ and vectors $\{e_n\}$ in $H$ such that $M + M^* + I$ is the Gram matrix of these vectors.

Proof. Let $M_n$ and $U_n$ denote the truncated matrices obtained by keeping the first $n+1$ rows of $M$ and $U$ and replacing the remaining rows by zeros:

$$M_n = \begin{pmatrix} 0 & & & \\ m_{10} & 0 & & \\ \vdots & & \ddots & \\ m_{n0} & \cdots & m_{n,n-1} & 0 \\ & & & & 0 \end{pmatrix}, \qquad U_n = \begin{pmatrix} 0 & & & \\ c_{10} & 0 & & \\ \vdots & & \ddots & \\ c_{n0} & \cdots & c_{n,n-1} & 0 \\ & & & & 0 \end{pmatrix}.$$
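The relation between the Gram data and the coefficient matrix is concrete enough to verify numerically. Below is a small sketch (ours, not the paper's) that builds the strictly lower triangular $M$ from a random set of unit vectors, obtains $U$ as $(I+M)^{-1} - I$, and checks relation (7) on the finite truncation, where both products are exact.

```python
import numpy as np

rng = np.random.default_rng(2)
e = [v / np.linalg.norm(v) for v in rng.standard_normal((6, 4))]  # unit vectors in R^4
N = len(e)

# Strictly lower triangular part of the Gram matrix: m_ij = <e_i, e_j>, i > j.
M = np.zeros((N, N))
for i in range(N):
    for j in range(i):
        M[i, j] = np.dot(e[i], e[j])

# (I + M)^{-1} = I + U defines the coefficient matrix U = (c_ni).
U = np.linalg.inv(np.eye(N) + M) - np.eye(N)

# U is again strictly lower triangular, and (7) holds: UM = MU = -M - U.
```

The identity (7) is just the statement $(I+U)(I+M) = (I+M)(I+U) = I$ with the products expanded.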
Then $M_n$ and $U_n$ are bounded on $\ell^2(\mathbb{N}_0)$ and by assumption $M_n U_n = U_n M_n = -U_n - M_n$. Assume the matrix $M + M^* + I$ is positive definite. Then the matrix $M_n + M_n^* + I$ corresponds to a positive bounded operator on $\ell^2(\mathbb{N}_0)$. Thus

$$0 \le (U_n^* + I)(M_n + M_n^* + I)(U_n + I) = I - U_n^* U_n.$$

Hence $\|U_n\| \le 1$, where $\|\cdot\|$ denotes the operator norm. Consequently, $\|U\| \le 1$. The converse implication follows from

$$(M_n^* + I)(I - U_n^* U_n)(M_n + I) = M_n + M_n^* + I.$$

Indeed, if $\|U\| \le 1$ then $\|U_n\| \le 1$. Therefore the matrix $M_n + M_n^* + I$ is positive definite, which implies that so also is $M + M^* + I$. It is then well known that there exist a Hilbert space $H$ and vectors $\{e_n\}$ such that $m_{ij} = \langle e_i, e_j\rangle$ for $i > j$. ∎

Now we can state the main result of our paper.

Theorem 1. The sequence $\{e_n\}$ is effective if and only if it is linearly dense and $U$ is a partial isometry, i.e. $U^*U$ is an orthogonal projection.

Proof. Assume $\{e_n\}$ is effective. By (5) and the polarization identity we get

(9) $\langle x, y\rangle = \sum_{n=0}^{\infty} \langle x, g_n\rangle \langle g_n, y\rangle$ for any $x, y \in H$.

In particular

(10) $m_{ij} = \langle e_i, e_j\rangle = \sum_{n=0}^{\infty} \langle e_i, g_n\rangle \langle g_n, e_j\rangle.$

We want to state formula (10) in terms of matrices on $\ell^2(\mathbb{N}_0)$. Let $\delta_i$ denote the sequence in $\ell^2(\mathbb{N}_0)$ whose $i$th entry is 1 and all other entries are 0. We have the following.

Lemma 1. $\langle g_n, e_j\rangle = \langle (UM^* + M^* + I)\delta_j, \delta_n\rangle_{\ell^2(\mathbb{N}_0)}$.
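The two displayed identities in the proof of Proposition 1 hold exactly on every finite truncation, which makes them easy to test. A sketch under our own setup (random linearly independent unit vectors; the identities and matrices are the paper's, the code is not):

```python
import numpy as np

rng = np.random.default_rng(3)
e = [v / np.linalg.norm(v) for v in rng.standard_normal((6, 6))]  # a.s. independent
N = len(e)

M = np.zeros((N, N))
for i in range(N):
    for j in range(i):
        M[i, j] = np.dot(e[i], e[j])
U = np.linalg.inv(np.eye(N) + M) - np.eye(N)
I = np.eye(N)

# The Gram matrix M + M^T + I is positive definite for linearly independent
# vectors, and the proof's identity turns that into I - U^T U >= 0:
lhs = (U.T + I) @ (M + M.T + I) @ (U + I)
rhs = I - U.T @ U
```

The identity follows from $(M+I)(U+I) = I$: transposing gives $(U^T+I)M^T = -U^T$, and substituting this into the expanded left-hand side leaves exactly $I - U^T U$.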
Proof of Lemma 1. Set $c_{nn} = 1$. Then by (6),

$$\langle g_n, e_j\rangle = \begin{cases} \displaystyle\sum_{i=0}^{n} c_{ni}\, \overline{m_{ji}} & \text{for } j > n, \\[2mm] \displaystyle\sum_{i=0}^{j-1} c_{ni}\, \overline{m_{ji}} + \sum_{i=j}^{n} c_{ni}\, m_{ij} & \text{for } j \le n. \end{cases}$$

Since $(I + U)(I + M) = I$ we get $\sum_{i=j}^{n} c_{ni}\, m_{ij} = \delta_{jn}$ for $j \le n$. Therefore

$$\langle g_n, e_j\rangle = \langle M^*\delta_j, (U^* + I)\delta_n\rangle_{\ell^2} + \langle \delta_j, \delta_n\rangle_{\ell^2} = \langle (UM^* + M^* + I)\delta_j, \delta_n\rangle_{\ell^2}. \qquad ∎$$

Let $A = UM^* + M^* + I$. Applying Lemma 1 to (10) and using the Parseval identity gives

(11) $m_{ij} = \sum_{n} \langle A\delta_j, \delta_n\rangle \langle \delta_n, A\delta_i\rangle_{\ell^2} = \langle A\delta_j, A\delta_i\rangle_{\ell^2}.$

Let $A_n = U_n M_n^* + M_n^* + I$. Unlike $A$, the matrices $A_n$ correspond to bounded operators on $\ell^2(\mathbb{N}_0)$. Since $M_n^*\delta_k = M^*\delta_k$ for $n \ge k$ and $U_n \to U$ strongly, we have

(12) $\lim_{n} \langle A_n^* A_n \delta_j, \delta_i\rangle_{\ell^2} = \lim_{n} \langle A_n\delta_j, A_n\delta_i\rangle_{\ell^2} = \langle A\delta_j, A\delta_i\rangle_{\ell^2}.$

On the other hand, the relation $M_n U_n = U_n M_n = -U_n - M_n$ implies

$$A_n^* A_n = M_n + M_n^* + I + M_n U_n^* U_n M_n^* - M_n M_n^*.$$

Hence

(13) $\lim_{n} \langle A_n^* A_n \delta_j, \delta_i\rangle_{\ell^2} = m_{ij} + \langle UM^*\delta_j, UM^*\delta_i\rangle_{\ell^2} - \langle M^*\delta_j, M^*\delta_i\rangle_{\ell^2}.$

Combining (11)–(13) yields

(14) $\langle UM^*\delta_j, UM^*\delta_i\rangle_{\ell^2} = \langle M^*\delta_j, M^*\delta_i\rangle_{\ell^2}.$

Let $F(\mathbb{N}_0) = \mathrm{span}\{\delta_0, \delta_1, \ldots\}$. Formula (14) states that the operator $U$ is isometric on $H_0 = \overline{M^*(F(\mathbb{N}_0))}$. It suffices to show that $U$ vanishes on $H_0^\perp$. To this end observe that the matrices $U^*$ and $M^*$ leave the subspace $F(\mathbb{N}_0)$ invariant. The formula $M^*(U^* + I) = -U^*$ implies that $U^*(F(\mathbb{N}_0)) \subseteq H_0$. Taking orthogonal complements of both sides results in $H_0^\perp \subseteq \ker U$, which completes the proof that $U$ is a partial isometry.
Conversely, let $U$ be a partial isometry. Then $U$ is isometric on $H_0 = \overline{U^*(F(\mathbb{N}_0))}$, the orthogonal complement of $\ker U$. The formula $U^*(M^* + I) = -M^*$ implies that $U$ is isometric on $\overline{M^*(F(\mathbb{N}_0))}$, which is equivalent to (14). Now tracking the proof of the first part backwards yields formula (10). In particular, for $i = j$ we obtain

$$\|e_i\|^2 = \sum_{n=0}^{\infty} |\langle e_i, g_n\rangle|^2$$

for any $i \ge 0$. This implies effectivity (see the comments at the end of the Introduction). ∎

Remark 1. Theorem 1 can be used to construct examples of effective sequences of vectors. It suffices to come up with a strictly lower triangular partial isometry $U$. Then one computes the algebraic inverse $I + M$ of $I + U$. As a result a positive definite matrix $M + M^* + I$ is obtained. The corresponding vectors form a sequence effective in the closed linear span of these vectors.

Remark 2. Proposition 1 and Theorem 1 can be interpreted as follows: there are as many effective sequences among sequences of unit vectors as there are partial isometries among strictly lower triangular contractions on $\ell^2(\mathbb{N}_0)$. This can be compared with a result of Kwapień and Mycielski, who showed that if we choose the sequence of unit vectors at random, then almost surely we end up with an effective sequence. More precisely, fix a Borel probability measure $\mu$ on the unit sphere of $H$ whose support is linearly dense. Then drawing consecutive vectors independently according to that measure almost surely yields an effective sequence of vectors.

3. Strongly effective sequences. For a partial isometry $U$ the carrier space, i.e. $\overline{\mathrm{Im}\,U^*}$, may vary from the trivial space to the whole of $\ell^2(\mathbb{N}_0)$. For instance, when the system $\{e_n\}$ is orthonormal we get $M = U = 0$. The other extreme case is when $\overline{\mathrm{Im}\,U^*} = \ell^2(\mathbb{N}_0)$, which is equivalent to $U^*U = I$. While proving Theorem 1 we showed that for an effective sequence $\{e_n\}$ the carrier space of $U$ coincides with $\overline{M^*(F(\mathbb{N}_0))}$. Hence $U^*U = I$ if and only if $\overline{M^*(F(\mathbb{N}_0))} = \ell^2(\mathbb{N}_0)$. This occurs exactly when the rows of the matrix $M$ form a linearly dense subset of $\ell^2(\mathbb{N}_0)$. For example, this is the case when $m_{n+1,n} \ne 0$ in the matrix

$$M = \begin{pmatrix} 0 & & & \\ m_{10} & 0 & & \\ & m_{21} & 0 & \\ & & m_{32} & 0 \\ & & & \ddots & \ddots \end{pmatrix}.$$
It turns out that sequences $\{e_n\}$ with $U^*U = I$ have many interesting properties. Let

(15) $v_{in} = \sum_{j=i}^{n} c_{ji}\, e_j, \qquad n > i,$

where $c_{ii} = 1$. The next lemma is related to a formula obtained in the proof of Theorem 2 of [2].

Lemma 2. For $0 \le i, j < n$ the following equality holds:

$$\langle v_{in}, v_{jn}\rangle = \langle (I - U_n^* U_n)\delta_j, \delta_i\rangle_{\ell^2}.$$

Proof. Without loss of generality we may assume that $j \le i$. Expanding

$$\langle v_{in}, v_{jn}\rangle = \sum_{k=i}^{n} \sum_{l=j}^{n} c_{ki}\,\overline{c_{lj}}\,\langle e_k, e_l\rangle$$

and using the fact that $I + M$ and $I + U$ are inverse to each other to eliminate the Gram entries $\langle e_k, e_l\rangle$, a direct computation gives

$$\langle v_{in}, v_{jn}\rangle = \delta_{ij} - \sum_{k=i+1}^{n} c_{kj}\,\overline{c_{ki}} = \langle (I - U_n^* U_n)\delta_j, \delta_i\rangle_{\ell^2}. \qquad ∎$$

The next corollary should be compared with Remark 2 of [2].

Corollary 1. Assume $U^*U = I$. Then for any $j \ge 0$,

(16) $e_j = -\sum_{i=j+1}^{\infty} c_{ij}\, e_i.$

Proof. By Lemma 2, $\|v_{jn}\|^2 = \langle (I - U_n^*U_n)\delta_j, \delta_j\rangle \to \langle (I - U^*U)\delta_j, \delta_j\rangle = 0$ as $n \to \infty$. Hence

$$\sum_{i=j}^{\infty} c_{ij}\, e_i = 0.$$

Since $c_{jj} = 1$ we get the conclusion. ∎

Definition 1. The sequence $\{e_n\}$ will be called strongly effective if $\{e_n\}_{n=k}^{\infty}$ is effective for each $k \ge 0$.

In particular, dropping finitely many vectors from $\{e_n\}$ does not spoil linear density. Hence this is a highly nonorthogonal case.

Theorem 2. Assume the sequence $\{e_n\}$ is linearly dense in a Hilbert space $H$. Then $\{e_n\}$ is strongly effective if and only if $U^*U = I$.
Proof. Assume $U^*U = I$. By Corollary 1, for any $k$ the sequence $\{e_n\}_{n=k}^{\infty}$ is linearly dense. Let $M^{(k)}$ and $U^{(k)}$ denote the truncated matrices obtained by removing the first $k$ rows and the first $k$ columns from $M$ and $U$, respectively. These matrices correspond to the sequence $\{e_n\}_{n=k}^{\infty}$. Also, $U^*U = I$ implies $(U^{(k)})^* U^{(k)} = I$ for any $k$. Hence each $U^{(k)}$ is a partial isometry, and Theorem 1 gives the conclusion.

Conversely, suppose that $\{e_n\}$ is strongly effective. Let $Q_k$ denote the orthogonal projection of $\ell^2(\mathbb{N}_0)$ onto the orthogonal complement of $\mathrm{span}\{\delta_0, \delta_1, \ldots, \delta_{k-1}\}$, and let $U_{(k)} = UQ_k$. Then $U_{(k)} = 0_k \oplus U^{(k)}$, where $0_k$ denotes the $k \times k$ zero matrix. Hence the $U_{(k)}$ are partial isometries just as $U$ is. But this is possible only if $U^*U$ and $Q_k$ commute. On the other hand, if $U^*U$ commutes with $Q_k$ for every $k$, then $U^*U$ must be diagonal. Assume that $U^*U \ne I$. Then $U\delta_j = 0$ for some $j$. Since $(I + M)\delta_j = (I + M)(I + U)\delta_j = \delta_j$, this implies $M\delta_j = 0$, and consequently $e_j$ is orthogonal to all the vectors $e_i$, $i > j$. Hence $\{e_n\}$ cannot be strongly effective. ∎

Lemma 3. Assume $U^*U = I$. Then

(17) $U^*M = -M - I, \qquad U^*(M + M^* + I) = -(M + M^* + I).$

Proof. Taking the inner product with $e_k$ in (16) we obtain

$$\langle e_j, e_k\rangle = -\sum_{i=j+1}^{\infty} c_{ij}\,\langle e_i, e_k\rangle \quad \text{for any } k \ge 0.$$

For $k \le j$ every index $i$ in the sum exceeds $k$, so the identity reads $m_{jk} + \delta_{jk} = -(U^*M)_{jk}$. For $k > j$, splitting the sum at $i = k$ and using $(I + M)(I + U) = I$ gives $(U^*M)_{jk} = 0 = -(M + I)_{jk}$. Combining the two cases yields the first equality. The second equality now follows from the first together with (8):

$$U^*(M + M^* + I) = (-M - I) + (-M^* - U^*) + U^* = -(M + M^* + I). \qquad ∎$$

Theorem 3. Assume the matrix $U$ associated with the sequence $\{e_n\}$ satisfies $U^*U = I$. Then

$$\sum_{i=0}^{\infty} |\langle e_i, e_j\rangle|^2 = \infty$$
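The compatibility between deleting the first $k$ vectors and deleting the first $k$ rows and columns of $M$ and $U$ rests on triangularity: the trailing block of the inverse of a triangular matrix is the inverse of its trailing block. A quick numerical sketch (our construction, not the paper's):

```python
import numpy as np

def coeff_matrix(vectors):
    """Return M (strictly lower Gram part) and U = (I + M)^{-1} - I."""
    n = len(vectors)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            M[i, j] = np.dot(vectors[i], vectors[j])
    return M, np.linalg.inv(np.eye(n) + M) - np.eye(n)

rng = np.random.default_rng(4)
e = [v / np.linalg.norm(v) for v in rng.standard_normal((6, 5))]
k = 2

M, U = coeff_matrix(e)
Mk, Uk = coeff_matrix(e[k:])   # matrices of the shifted sequence {e_n}_{n>=k}

# Deleting the first k rows and columns of M and U reproduces exactly
# the matrices of the shifted sequence.
```

This is why $U^{(k)}$ in the proof above really is the coefficient matrix of $\{e_n\}_{n\ge k}$.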
for any $j$. In particular the conclusion holds if the sequence $\{e_n\}$ is effective and the rows of $M$ are linearly dense in $\ell^2(\mathbb{N}_0)$.

Proof. Let $G = M + M^* + I$. Observe that $G$ is nothing other than the Gram matrix of the vectors $\{e_n\}$. Assume that $\sum_{i} |\langle e_i, e_j\rangle|^2 < \infty$ for some $j$. Then $v = G\delta_j \in \ell^2(\mathbb{N}_0)$. By (17) we get $U^*v = -v$, hence $(U^*)^n v = (-1)^n v$. But $\|U\| \le 1$ and $U^*$ is strictly upper triangular, so $(U^*)^n$ tends to zero strongly, which implies $v = 0$. This gives a contradiction because $v(j) = 1$. ∎

4. Stationary case. Assume that $\langle e_{i+1}, e_{j+1}\rangle = \langle e_i, e_j\rangle$. Then the matrix $M$ is constant on diagonals:

$$M = \begin{pmatrix} 0 & & & & \\ a_1 & 0 & & & \\ a_2 & a_1 & 0 & & \\ a_3 & a_2 & a_1 & 0 & \\ \vdots & & & & \ddots \end{pmatrix}.$$

By the Herglotz theorem there is a measure $\mu$ on the unit circle $\mathbb{T}$ such that

$$\langle e_{i+n}, e_i\rangle = a_n = \int_{\mathbb{T}} z^n \, d\mu(z).$$

Kwapień and Mycielski showed that the sequence $\{e_n\}$ is effective if and only if either $\mu$ is the Lebesgue measure (the orthogonal case) or it is singular with respect to the Lebesgue measure. We now reprove this result by applying our Theorem 1. The matrix $U$ is also constant on diagonals, i.e. it is a Toeplitz operator,

$$U = \begin{pmatrix} 0 & & & & \\ u_1 & 0 & & & \\ u_2 & u_1 & 0 & & \\ u_3 & u_2 & u_1 & 0 & \\ \vdots & & & & \ddots \end{pmatrix}.$$

It is then unitarily equivalent to the operator of multiplication on $H^2(\mathbb{T})$ by the function

$$u(z) = \sum_{n=1}^{\infty} u_n z^n.$$
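A concrete singular case makes the Toeplitz picture tangible. For the point mass $\mu = \delta_{z_0}$ with $|z_0| = 1$ (a singular measure, so the sequence should be effective) one has $a_n = z_0^n$, hence $a(z) = z_0 z/(1 - z_0 z)$ and $u(z) = 1/(1 + a(z)) - 1 = -z_0 z$, whose boundary modulus is identically 1. The following check (ours, not from the paper) inverts the truncated Toeplitz matrix $I + M$ and confirms that $U$ is exactly $-z_0$ times the lower shift, so $U^*U = I$ up to the unavoidable last column of a finite truncation.

```python
import numpy as np

N = 8
z0 = np.exp(0.7j)                  # point mass at z0 on the unit circle (|z0| = 1)
M = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(i):
        M[i, j] = z0 ** (i - j)    # stationary case: a_n = z0^n on the n-th subdiagonal

U = np.linalg.inv(np.eye(N) + M) - np.eye(N)

# The symbol collapses: u(z) = -z0 z, so U is -z0 times the lower shift.
S = np.diag(np.ones(N - 1), -1)    # lower shift matrix
```

Here $|u(e^{i\theta})| \equiv 1$, matching the singularity criterion of the theorem.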
Moreover $\|U\| = \|u\|_{H^\infty(\mathbb{T})} \le 1$. Now multiplication by $u(z)$ is a partial isometry if and only if the boundary values of $|u(z)|$ are 0 or 1 almost everywhere. By the F. and M. Riesz theorem, either $u(e^{i\theta}) \equiv 0$ or $|u(e^{i\theta})| \equiv 1$ almost everywhere. The first case corresponds to the orthogonal case, because then $M = 0$. The second case is equivalent to the singularity of the spectral measure $\mu$. Indeed, for $a(z) = \sum_{n=1}^{\infty} a_n z^n$ we have $(a(z) + 1)(u(z) + 1) = 1$. Therefore

$$1 + a(z) + \overline{a(z)} = \frac{1 - |u(z)|^2}{|1 + u(z)|^2}.$$

By the Fatou theorem,

$$\lim_{r \to 1} \big[1 + a(re^{i\theta}) + \overline{a(re^{i\theta})}\big] = \frac{d\mu}{d\theta}(e^{i\theta}) \quad \text{a.e.}$$

Therefore $\mu$ is singular if and only if $|u(e^{i\theta})| = 1$ almost everywhere.

Acknowledgements. The first named author wishes to acknowledge that this work was done during his fellowship stay at the University of Wrocław. He would like to warmly thank Prof. E. Damek and Prof. R. Szwarc for their hospitality.

References

[1] S. Kaczmarz, Approximate solution of systems of linear equations, Bull. Acad. Polon. Sci. Lett. A 35 (1937) (in German); English transl.: Internat. J. Control 57 (1993).
[2] S. Kwapień and J. Mycielski, On the Kaczmarz algorithm of approximation in infinite-dimensional spaces, Studia Math. 148 (2001).

Institute of Pure Mathematics
Faculty of Mathematics and Computer Science
University of Tartu
Ülikooli, Tartu, Estonia
E-mail: Rainis.Haller@ut.ee

Institute of Mathematics
University of Wrocław
Pl. Grunwaldzki 2/
Wrocław, Poland
E-mail: szwarc@math.uni.wroc.pl

Received July 21, 2004; revised version February 24, 2005. (5460)
More informationNotes on Integrable Functions and the Riesz Representation Theorem Math 8445, Winter 06, Professor J. Segert. f(x) = f + (x) + f (x).
References: Notes on Integrable Functions and the Riesz Representation Theorem Math 8445, Winter 06, Professor J. Segert Evans, Partial Differential Equations, Appendix 3 Reed and Simon, Functional Analysis,
More informationCHAPTER 3. Hilbert spaces
CHAPTER 3 Hilbert spaces There are really three types of Hilbert spaces (over C). The finite dimensional ones, essentially just C n, for different integer values of n, with which you are pretty familiar,
More informationFall TMA4145 Linear Methods. Exercise set Given the matrix 1 2
Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More informationMath 413/513 Chapter 6 (from Friedberg, Insel, & Spence)
Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector
More informationMATHEMATICS 217 NOTES
MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable
More informationFunctional Analysis Exercise Class
Functional Analysis Exercise Class Week: January 18 Deadline to hand in the homework: your exercise class on week January 5 9. Exercises with solutions (1) a) Show that for every unitary operators U, V,
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationSpectral Measures, the Spectral Theorem, and Ergodic Theory
Spectral Measures, the Spectral Theorem, and Ergodic Theory Sam Ziegler The spectral theorem for unitary operators The presentation given here largely follows [4]. will refer to the unit circle throughout.
More informationCONSTRUCTIVE PROOF OF THE CARPENTER S THEOREM
CONSTRUCTIVE PROOF OF THE CARPENTER S THEOREM MARCIN BOWNIK AND JOHN JASPER Abstract. We give a constructive proof of Carpenter s Theorem due to Kadison [14, 15]. Unlike the original proof our approach
More informationReview of linear algebra
Review of linear algebra 1 Vectors and matrices We will just touch very briefly on certain aspects of linear algebra, most of which should be familiar. Recall that we deal with vectors, i.e. elements of
More informationReal Analysis Notes. Thomas Goller
Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................
More informationj=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.
Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationA MOMENT PROBLEM FOR POSITIVE MEASURES ON THE UNIT DISC AHARON ATZMON
PACIFIC JOURNAL OF MATHEMATICS Vol. 59, No. 2, 1975 A MOMENT PROBLEM FOR POSITIVE MEASURES ON THE UNIT DISC AHARON ATZMON A necessary and sufficient condition on an infinite matrix (tfm,n)m,π=o is given
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More informationSpanning and Independence Properties of Finite Frames
Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames
More informationPROBLEMS. (b) (Polarization Identity) Show that in any inner product space
1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization
More informationDecomposition of Riesz frames and wavelets into a finite union of linearly independent sets
Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets Ole Christensen, Alexander M. Lindner Abstract We characterize Riesz frames and prove that every Riesz frame
More informationI teach myself... Hilbert spaces
I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition
More informationErgodic Theory and Topological Groups
Ergodic Theory and Topological Groups Christopher White November 15, 2012 Throughout this talk (G, B, µ) will denote a measure space. We call the space a probability space if µ(g) = 1. We will also assume
More informationChapter 6: Orthogonality
Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products
More informationE2 212: Matrix Theory (Fall 2010) Solutions to Test - 1
E2 212: Matrix Theory (Fall 2010) s to Test - 1 1. Let X = [x 1, x 2,..., x n ] R m n be a tall matrix. Let S R(X), and let P be an orthogonal projector onto S. (a) If X is full rank, show that P can be
More informationarxiv:math.oa/ v1 22 Nov 2000
arxiv:math.oa/0011184 v1 22 Nov 2000 A module frame concept for Hilbert C*-modules Michael Frank and David R. Larson Abstract. The goal of the present paper is a short introduction to a general module
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationMATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.
MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v
More information