Irregular Sampling, Frames. T. Strohmer. Department of Mathematics, University of Vienna. Strudlhofg. 4, A-1090 Vienna, AUSTRIA


Irregular Sampling, Frames and Pseudoinverse

T. Strohmer
Department of Mathematics, University of Vienna
Strudlhofg. 4, A-1090 Vienna, AUSTRIA

Contents

0.1 Introduction
0.2 Notation

1 FRAMES IN HILBERT SPACES
1.1 Definitions and general results
1.2 Frames and bases
1.3 Weyl-Heisenberg frames
1.4 Affine frames

2 PROJECTIONS AND PSEUDOINVERSE
2.1 Least Squares Approximations and Projections
2.2 Projection matrices
2.3 Pseudoinverse and the Singular Value Decomposition
2.4 Fast Computation of the Pseudoinverse
2.5 Singular Values and Frames

3 RECONSTRUCTION OF BAND-LIMITED SIGNALS FROM IRREGULAR SAMPLES
3.1 Introduction and notation
3.2 Iterative reconstruction algorithms
3.2.1 The Marvasti method
3.2.2 The Sauer-Allebach algorithm
3.2.3 The Adaptive Weights method
3.3 The frame approach
3.4 Matrix iteration methods
3.5 Pseudoinverse matrix methods
3.6 POCS method
3.7 Oakley method
3.8 Comparison of methods

0.1 Introduction

It is the purpose of this report to point out the connections between the theory of frames in Hilbert spaces, the theory of pseudoinverse operators and the irregular sampling problem for band-limited functions. The report is subdivided into three chapters. The first one deals with frames, which have gained significance in connection with wavelet theory. We give a short review of several results and introduce useful tools such as the frame operator. Various examples complement this section. Chapter two is devoted to linear algebra. Projection matrices, mean square solutions and the singular value decomposition are used to define the pseudoinverse of a matrix. The pseudoinverse (or Moore-Penrose inverse) is a generalization of the ordinary inverse of a matrix. The third chapter discusses the one-dimensional discrete reconstruction problem for band-limited functions from irregularly spaced sampling points. We describe some new facts about iterative algorithms and highlight connections between the frame approach, iterative algorithms and pseudoinverse methods. We point out a connection between the POCS method and pseudoinverse matrices. The Direct Fourier method, fast matrix iteration algorithms as well as the Oakley method are presented. Some tricks to speed up computations for matrix methods and to enhance convergence rates for iterative methods are given. The last section of this chapter deals with numerical results. We give indications about duration of computation and precision of the reconstruction for different methods and illustrate the behavior of the methods with many graphics.

0.2 Notation

We write ℝ for the set of real numbers and ℝ^m for the m-dimensional real space, ℤ for the set of integers and ℕ for the natural numbers. All series and sequences with undefined limits are taken over ℤ. L²(ℝ) is the Hilbert space of all complex-valued, square-integrable functions f on ℝ with norm

  ‖f‖₂ := ( ∫_ℝ |f(x)|² dx )^{1/2}.

The inner product of f, g ∈ L²(ℝ) is

  ⟨f, g⟩ := ∫_ℝ f(x) \overline{g(x)} dx.

The Fourier transform f̂ of an integrable f is given by

  Ff(t) = f̂(t) := ∫_ℝ f(x) e^{-2πixt} dx.

We write F⁻¹ for the inverse Fourier transformation, defined by

  F⁻¹f̂(t) = f(t) := ∫_ℝ f̂(y) e^{2πiyt} dy.

The discrete Fourier transform of a vector x = (x₀, x₁, …, x_{n-1}) is given by

  (Fx)_k := ∑_{j=0}^{n-1} x_j e^{-2πijk/n}.

Given a function f we define:

  Translation:  T_a f(x) = f(x - a),  for a ∈ ℝ;
  Modulation:  E_a f(x) = e^{2πiax} f(x),  for a ∈ ℝ;
  Dilation:  D_a f(x) = |a|^{-1/2} f(x/a),  for a ∈ ℝ \ {0}.

Further f̃(x) = f(-x) and f*(x) = \overline{f(-x)}.
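As a quick numerical check (not part of the original report), the discrete Fourier transform defined above can be compared against a library FFT; NumPy's `fft` uses the same sign convention assumed here.

```python
import numpy as np

def dft(x):
    """Discrete Fourier transform as defined in the notation:
    (Fx)_k = sum_j x_j * exp(-2*pi*i*j*k/n)."""
    n = len(x)
    j = np.arange(n)
    k = j.reshape(-1, 1)          # column vector of output indices
    return np.exp(-2j * np.pi * j * k / n) @ x

x = np.array([1.0, 2.0, 0.5, -1.0])
assert np.allclose(dft(x), np.fft.fft(x))
```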

Chapter 1

FRAMES IN HILBERT SPACES

1.1 Definitions and general results

Let H be a separable Hilbert space with norm ‖·‖ and inner product ⟨·,·⟩.

DEFINITION 1.1.1: A basis {e_n}_{n∈I} is called an orthonormal basis if e_n and e_m are orthogonal for all n ≠ m ∈ I and ‖e_n‖ = 1 for all n ∈ I.

It is a well-known fact that for every separable Hilbert space one can construct an orthonormal basis [26]. If {e_n} is an orthonormal basis for H then Parseval's equation holds:

  ∑_n |⟨x, e_n⟩|² = ‖x‖²  ∀ x ∈ H.

One of the major properties of such sequences {e_n} is that they provide a "decomposition" of H:

  x = ∑_n ⟨x, e_n⟩ e_n = ∑_n c_n e_n  ∀ x ∈ H, with unique scalars c_n.

Actually this equation is equivalent to Parseval's formula [25]. Orthonormal bases are often difficult to find or inconvenient to work with. Sometimes we want the e_n to be generated easily and in a numerically stable way. The Gram-Schmidt orthogonalization routine, for example, is numerically fairly unstable: the result depends essentially on the order of the original nonorthogonal sequence of vectors.

Frames are an alternative to and generalization of orthonormal bases. By giving up the requirements of orthogonality and uniqueness of decomposition we get more freedom in the choice of the {f_n}, but we still retain good control on the behavior of the c_n and the ability to decompose the space. Frames were first introduced by Duffin and Schaeffer in connection with nonharmonic Fourier series [4]. For an exposition of this and related material in the context of nonharmonic Fourier series see also the monograph of R. Young [5].

DEFINITION 1.1.2: A frame in a Hilbert space H is a system of functions {f_n}_{n∈I} such that for two constants 0 < A ≤ B we have

  A‖f‖² ≤ ∑_{n∈I} |⟨f, f_n⟩|² ≤ B‖f‖²  ∀ f ∈ H.

The optimal constants A, B are called the frame bounds. A frame is tight if A = B. We say a frame is exact if it is no frame any longer whenever a single element is taken away from the sequence.

Since ∑_n |⟨f, f_n⟩|² is a series of positive numbers, it converges absolutely, hence unconditionally. That means every rearrangement of the sum also converges, and converges to the same value. Therefore every permutation of a frame is also a frame, and all sums involving frames converge unconditionally.

The following examples show on the one hand that every orthonormal basis is a frame, but not all frames are orthonormal bases, and on the other hand it is demonstrated that tightness and exactness are not related.

EXAMPLE 1.1.3: Let {f_n}_{n=1}^∞ be an orthonormal basis for a Hilbert space H.

1. {f_n} is a tight exact frame for H with bounds A = B = 1.
2. {f₁, f₁, f₂, f₂, f₃, f₃, …} is a tight inexact frame with bounds A = B = 2. Obviously it is not an orthonormal basis for H, but it does contain an exact frame.
3. {2f₁, f₂, f₃, …} is a nontight exact frame with bounds A = 1, B = 4.
4. {f₁, f₂/2, f₃/3, …} is a complete orthogonal sequence in H, but not a frame.
5. I. Daubechies gave the example shown in Fig. 1.1 of a "good" frame in ℝ². One can see that the linear dependence between the three vectors (which all have the same length) is as small as possible.
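A small numerical illustration (NumPy code added here, not from the original text): three unit vectors in ℝ² at mutual angles of 120°, as in the Daubechies example, form a tight frame with A = B = 3/2, which can be verified directly from the definition.

```python
import numpy as np

# Three unit vectors at angles 90, 210, 330 degrees (the example frame in R^2).
angles = np.deg2rad([90, 210, 330])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are the f_n

# The frame operator S f = sum_n <f, f_n> f_n has matrix F^T F.
# For this tight frame, S = (3/2) I.
S = F.T @ F
assert np.allclose(S, 1.5 * np.eye(2))

# Frame condition: sum_n |<f, f_n>|^2 = (3/2) ||f||^2 for any f.
rng = np.random.default_rng(0)
f = rng.standard_normal(2)
assert np.isclose(np.sum((F @ f) ** 2), 1.5 * f @ f)
```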

[Figure 1.1: A "good" frame in ℝ²: three vectors of equal length at mutual angles of 120°.]

REMARK 1.1.4: Obviously frames are complete, since if f ∈ H and ⟨f, f_n⟩ = 0 for all n, then A‖f‖² ≤ ∑_{n∈I} |⟨f, f_n⟩|² = 0, so ‖f‖ = 0, thus f = 0.

A simple conclusion from Remark 1.1.4 is that any Hilbert space which possesses a countable frame must be separable, for the set of all finite linear combinations of the f_n with rational coefficients (i.e., rational real and imaginary parts) is a countable dense subset of H. Of course, every separable Hilbert space does possess at least one frame, namely an orthonormal basis.

Frames are not only complete, but they provide a description of the whole Hilbert space H, as shown in the following two theorems. Given operators S, T : H → H we write S ≤ T if ⟨Sf, f⟩ ≤ ⟨Tf, f⟩ ∀ f ∈ H, and we denote by I the identity map on H, i.e., If = f ∀ f ∈ H.

THEOREM 1.1.5: [4], [24] Given a sequence {f_n} in a Hilbert space H, the following two statements are equivalent:

(1) {f_n} is a frame with bounds A, B.
(2) Sf = ∑_n ⟨f, f_n⟩ f_n is a bounded linear operator with AI ≤ S ≤ BI, called the frame operator for {f_n}.

Moreover, the series ∑_n ⟨f, f_n⟩ f_n converges unconditionally.

PROOF: (1) ⇒ (2). S is well-defined and continuous since

  ‖Sf‖² = sup_{‖g‖=1} |⟨Sf, g⟩|²
        = sup_{‖g‖=1} |∑_n ⟨f, f_n⟩ ⟨f_n, g⟩|²
        ≤ sup_{‖g‖=1} (∑_n |⟨f, f_n⟩|²)(∑_n |⟨f_n, g⟩|²)
        ≤ sup_{‖g‖=1} B‖f‖² B‖g‖² = B²‖f‖².

The relations AI ≤ S ≤ BI follow immediately from the definition of frames. It is clear that S is a positive operator.

(2) ⇒ (1). Assume that (2) holds; then we have

  ⟨AIf, f⟩ ≤ ⟨Sf, f⟩ ≤ ⟨BIf, f⟩  ∀ f ∈ H.

But ⟨If, f⟩ = ‖f‖² and

  ⟨Sf, f⟩ = ⟨∑_n ⟨f, f_n⟩ f_n, f⟩ = ∑_n |⟨f, f_n⟩|²,

so the result follows. ∎

A mapping T : H → H is said to be invertible or a topological isomorphism if T is linear, bijective, continuous and T⁻¹ is continuous.

LEMMA 1.1.6: (cf. [25], [26]) Let S : H → H be a linear operator with ‖S‖ < 1; then I - S is invertible and (I - S)⁻¹ = ∑_{j=0}^∞ S^j (this is the well-known Neumann series).

THEOREM 1.1.7: [4]

1. S is invertible and B⁻¹I ≤ S⁻¹ ≤ A⁻¹I.
2. {S⁻¹f_n} is a frame with bounds 1/B, 1/A, called the dual frame of {f_n}.
3. Every f ∈ H can be written as

  f = ∑_n ⟨f, S⁻¹f_n⟩ f_n = ∑_n ⟨f, f_n⟩ S⁻¹f_n.

The first expansion is called the frame expansion and the second the dual frame expansion.
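The frame expansion in Theorem 1.1.7 is easy to verify numerically for a finite frame. The following sketch (NumPy; the particular frame is chosen here only for illustration) computes the dual frame S⁻¹f_n and reconstructs an arbitrary vector from its frame coefficients.

```python
import numpy as np

# A (non-tight) frame for R^2, chosen for illustration: rows of F are the f_n.
F = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
S = F.T @ F                      # frame operator: S f = sum_n <f, f_n> f_n
dual = F @ np.linalg.inv(S)      # rows are the dual frame vectors S^{-1} f_n

rng = np.random.default_rng(1)
f = rng.standard_normal(2)

# Frame expansion: f = sum_n <f, S^{-1} f_n> f_n
coeffs = dual @ f                # coefficients <f, S^{-1} f_n>
recon = coeffs @ F               # sum_n c_n f_n
assert np.allclose(recon, f)
```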

PROOF: (1) From AI ≤ S ≤ BI we get

  (A/B)I ≤ (1/B)S ≤ I  ⇒  0 ≤ I - (1/B)S ≤ (1 - A/B)I
  ⇒  ‖I - (1/B)S‖ ≤ ‖(1 - A/B)I‖ = (B - A)/B < 1.

Lemma 1.1.6 implies that (1/B)S is invertible, so S itself must be invertible. Since S⁻¹ is a positive operator and commutes both with I and S, we can multiply the inequality AI ≤ S ≤ BI by S⁻¹ to obtain B⁻¹I ≤ S⁻¹ ≤ A⁻¹I.

(2) Since S is positive, it is self-adjoint, and so is S⁻¹. Therefore

  ∑_n ⟨f, S⁻¹f_n⟩ S⁻¹f_n = ∑_n ⟨S⁻¹f, f_n⟩ S⁻¹f_n
                        = S⁻¹ (∑_n ⟨S⁻¹f, f_n⟩ f_n)
                        = S⁻¹ S(S⁻¹f) = S⁻¹f.

The result then follows from B⁻¹I ≤ S⁻¹ ≤ A⁻¹I and Theorem 1.1.5, part 2.

(3) We use again the fact that S⁻¹ is self-adjoint:

  f = S(S⁻¹f) = ∑_n ⟨S⁻¹f, f_n⟩ f_n = ∑_n ⟨f, S⁻¹f_n⟩ f_n.

S is self-adjoint too, so the formula proved in part (2) gives

  f = S⁻¹(Sf) = ∑_n ⟨Sf, S⁻¹f_n⟩ S⁻¹f_n = ∑_n ⟨f, SS⁻¹f_n⟩ S⁻¹f_n = ∑_n ⟨f, f_n⟩ S⁻¹f_n. ∎

COROLLARY 1.1.8: [22] If {f_n} is a tight frame (that means A = B), then

1. S = AI,
2. S⁻¹ = A⁻¹I,

3. Every f ∈ H can be expressed as f = A⁻¹ ∑_n ⟨f, f_n⟩ f_n.

PROOF: If A = B then Theorem 1.1.5 implies that AI ≤ S ≤ AI, so S = AI and S⁻¹ = A⁻¹I. Using now Theorem 1.1.7 we see that (3) holds. ∎

As proved in Theorem 1.1.7, S⁻¹ exists, and for γ = 1/B we have ‖I - γS‖ ≤ (B - A)/B < 1, so

  S⁻¹ = γ ∑_{j=0}^∞ (I - γS)^j.

The precise value of γ is not crucial; the series converges to S⁻¹ for all values of γ between 0 and 2/‖S‖ (cf. [4], [2]).

COROLLARY 1.1.9: (cf. [2]) A faster rate of convergence for the series can be obtained if one replaces 1/B by 2/(A+B):

  S⁻¹ = (2/(A+B)) ∑_{j=0}^∞ (I - (2/(A+B)) S)^j.

PROOF: Remember that AI ≤ S ≤ BI. So we can write

  (A+B)I - 2BI ≤ (A+B)I - 2S ≤ (A+B)I - 2AI
  ⇒  (A-B)I ≤ (A+B)I - 2S ≤ (B-A)I
  ⇒  ‖I - (2/(A+B)) S‖ ≤ (B-A)/(B+A) < 1
  ⇒  S⁻¹ = (2/(A+B)) ∑_{j=0}^∞ (I - (2/(A+B)) S)^j.

We have proved that the series still converges if we replace 1/B by 2/(A+B). The convergence is faster if

  ‖I - (2/(A+B)) S‖ ≤ ‖I - (1/B) S‖,

and this inequality holds because (B-A)/(B+A) ≤ (B-A)/B for all 0 < A ≤ B. ∎

Theorem 1.1.7 and the representation f = S⁻¹Sf allow us to describe f as the limit of a sequence of partial sums, where f^(k) is given by

  f^(k) = γ ∑_{j=0}^{k} (I - γS)^j Sf,

at least for all values of γ between 0 and 2/‖S‖ [2]. An iterative description of this approximation algorithm is possible and is useful for practical applications.

THEOREM 1.1.10: (cf. [9], [5]) The approximation of f by the sequence of partial sums

  f^(k) = γ ∑_{j=0}^{k} (I - γS)^j Sf

can be replaced by the following iterative algorithm:

  f^(k) = γSf + (I - γS)f^(k-1) = f^(k-1) + γS(f - f^(k-1)).

The iteration starts with f^(0) := γSf. Alternatively one may define f^(-1) := 0, which implies f^(0) = γSf according to the recursion.

PROOF: We prove the theorem by induction. The formula is correct for k = 1, since

  f^(1) = γ ∑_{j=0}^{1} (I - γS)^j Sf = γSf + (I - γS) γSf = γSf + (I - γS) f^(0).

Assume now that the theorem holds for some given k ∈ ℕ; we have to show that the formula is correct for k + 1:

  f^(k+1) = γ ∑_{j=0}^{k+1} (I - γS)^j Sf
          = γSf + γ(I - γS)Sf + γ(I - γS)²Sf + … + γ(I - γS)^{k+1}Sf
          = γSf + (I - γS)(γSf + γ(I - γS)Sf + … + γ(I - γS)^k Sf)
          = γSf + (I - γS)f^(k). ∎

We had to give up the uniqueness of the decomposition of the Hilbert space H by introducing frames instead of orthonormal bases, but not the good control on the behavior of the c_n, as the next theorem shows.
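The iteration of Theorem 1.1.10 with the relaxation parameter γ = 2/(A+B) from Corollary 1.1.9 can be sketched for a finite frame as follows (NumPy; the frame and test vector are chosen for illustration, and the frame bounds are the extreme eigenvalues of S in this finite-dimensional case).

```python
import numpy as np

# Illustrative frame for R^2: rows of F are the frame vectors.
F = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
S = F.T @ F
eigs = np.linalg.eigvalsh(S)
A, B = eigs[0], eigs[-1]         # frame bounds = extreme eigenvalues of S
gamma = 2.0 / (A + B)            # relaxation parameter from Corollary 1.1.9

f = np.array([0.3, -0.7])
approx = gamma * S @ f           # f^(0) = gamma * S f
for _ in range(50):
    # f^(k) = f^(k-1) + gamma * S (f - f^(k-1))
    approx = approx + gamma * S @ (f - approx)
assert np.allclose(approx, f, atol=1e-8)
```

The error contracts by a factor of at most (B-A)/(B+A) per step, which is exactly why the choice γ = 2/(A+B) converges faster than γ = 1/B.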

THEOREM 1.1.11: (cf. [4]) Given a frame {f_n} and an arbitrary f ∈ H, let a_n = ⟨f, S⁻¹f_n⟩, so f = ∑_n a_n f_n. If it is possible to find other scalars c_n such that f = ∑_n c_n f_n, then we must have

  ∑_n |c_n|² = ∑_n |a_n|² + ∑_n |a_n - c_n|².

PROOF: Note first that if ∑_n |c_n|² = ∞ then the result is trivial since ∑_n |a_n|² < ∞. So assume ∑_n |c_n|² < ∞, and note that

  ⟨f_n, S⁻¹f⟩ = ⟨S⁻¹f_n, f⟩ = \overline{⟨f, S⁻¹f_n⟩} = ā_n.

Therefore

  ⟨f, S⁻¹f⟩ = ⟨∑_n a_n f_n, S⁻¹f⟩ = ∑_n a_n ⟨f_n, S⁻¹f⟩ = ∑_n a_n ā_n = ∑_n |a_n|²,

and

  ⟨f, S⁻¹f⟩ = ⟨∑_n c_n f_n, S⁻¹f⟩ = ∑_n c_n ⟨f_n, S⁻¹f⟩ = ∑_n c_n ā_n,

hence

  ∑_n |a_n|² + ∑_n |a_n - c_n|² = ∑_n |a_n|² + ∑_n (a_n - c_n)\overline{(a_n - c_n)}
    = ∑_n |a_n|² + ∑_n |a_n|² - ∑_n a_n c̄_n - ∑_n ā_n c_n + ∑_n |c_n|² = ∑_n |c_n|². ∎

REMARK 1.1.12: Later we will see that in many cases the trivial representation f_m = 1·f_m, for fixed m, is not the best representation of f_m. It is better (from the point of view of stability and error analysis) to describe f_m using many of the other vectors f_n of the frame.

THEOREM 1.1.13: [4], [5] The removal of a vector from a frame leaves either a frame or an incomplete set. In fact,

  ⟨f_m, S⁻¹f_m⟩ ≠ 1  ⇒  {f_n}_{n≠m} is a frame;
  ⟨f_m, S⁻¹f_m⟩ = 1  ⇒  {f_n}_{n≠m} is incomplete.

PROOF: Fix m and define a_n = ⟨f_m, S⁻¹f_n⟩. We know that f_m = ∑_n a_n f_n, but we also have f_m = ∑_n c_n f_n where c_m = 1 and c_n = 0 for n ≠ m. By Theorem 1.1.11 we therefore have

  1 = ∑_n |c_n|² = ∑_n |a_n|² + ∑_n |a_n - c_n|²
    = |a_m|² + ∑_{n≠m} |a_n|² + |1 - a_m|² + ∑_{n≠m} |a_n|².

Hence

  ∑_{n≠m} |a_n|² = (1 - |a_m|² - |1 - a_m|²)/2 < ∞.

Suppose now that a_m = 1. Then ∑_{n≠m} |a_n|² = 0, so a_n = ⟨S⁻¹f_m, f_n⟩ = 0 for n ≠ m. That is, S⁻¹f_m is orthogonal to f_n for every n ≠ m. But S⁻¹f_m ≠ 0 since ⟨S⁻¹f_m, f_m⟩ = a_m = 1 ≠ 0. Therefore {f_n}_{n≠m} is incomplete in this case.

On the other hand, suppose a_m ≠ 1. Then f_m = (1 - a_m)⁻¹ ∑_{n≠m} a_n f_n, so for f ∈ H we have, by the Cauchy-Schwarz inequality,

  |⟨f, f_m⟩|² = |(1 - a_m)⁻¹ ∑_{n≠m} ā_n ⟨f, f_n⟩|²
             ≤ |1 - a_m|⁻² (∑_{n≠m} |a_n|²)(∑_{n≠m} |⟨f, f_n⟩|²).

Therefore

  ∑_n |⟨f, f_n⟩|² = |⟨f, f_m⟩|² + ∑_{n≠m} |⟨f, f_n⟩|² ≤ C ∑_{n≠m} |⟨f, f_n⟩|²,

where C = 1 + |1 - a_m|⁻² ∑_{n≠m} |a_n|². Then

  (A/C)‖f‖² ≤ (1/C) ∑_n |⟨f, f_n⟩|² ≤ ∑_{n≠m} |⟨f, f_n⟩|² ≤ ∑_n |⟨f, f_n⟩|² ≤ B‖f‖²,

so {f_n}_{n≠m} is a frame with bounds A/C, B. ∎

COROLLARY 1.1.14: Given a frame {f_n}, the following holds for each m:

  ∑_{n≠m} |⟨f_m, S⁻¹f_n⟩|² = (1 - |⟨f_m, S⁻¹f_m⟩|² - |1 - ⟨f_m, S⁻¹f_m⟩|²)/2.

In particular, if ⟨f_m, S⁻¹f_m⟩ = 1, then ⟨f_m, S⁻¹f_n⟩ = 0 ∀ n ≠ m.

COROLLARY 1.1.15: [4] If {f_n} is an exact frame then {f_n} and {S⁻¹f_n} are biorthogonal, that is,

  ⟨f_m, S⁻¹f_n⟩ = δ_mn = 1 if m = n, 0 if m ≠ n.

PROOF: If {f_n} is exact, then {f_n}_{n≠m} is not a frame for any m. Therefore, by Theorem 1.1.13 we must have ⟨f_m, S⁻¹f_m⟩ = 1 for every m, and hence by Corollary 1.1.14 we have ⟨f_m, S⁻¹f_n⟩ = 0 ∀ n ≠ m, as desired. ∎

Corollary 1.1.15 allows us to prove a basic result about frames.

THEOREM 1.1.16: [23]

(1) Frames are norm-bounded above, with sup ‖f_n‖² ≤ B.
(2) Exact frames are norm-bounded below, with A ≤ inf ‖f_n‖².

PROOF: (1) Fix m; then

  ‖f_m‖⁴ = |⟨f_m, f_m⟩|² ≤ ∑_n |⟨f_m, f_n⟩|² ≤ B‖f_m‖²  ⇒  ‖f_m‖² ≤ B.

(2) If {f_n} is an exact frame then {f_n} and {S⁻¹f_n} are biorthogonal by Corollary 1.1.15. Therefore we have for fixed m

  A‖S⁻¹f_m‖² ≤ ∑_n |⟨S⁻¹f_m, f_n⟩|² = |⟨S⁻¹f_m, f_m⟩|² = 1 ≤ ‖S⁻¹f_m‖² ‖f_m‖²
  ⇒  A ≤ ‖f_m‖². ∎

REMARK 1.1.17: In general one cannot say that if f = ∑_n c_n f_n then ∑_n |c_n|² < ∞. As a trivial example, let {f_n} be any frame which includes

infinitely many zero elements and take the coefficients of the zero elements to be 1.

LEMMA 1.1.18: [22] If {f_n} is a frame which is norm-bounded below, then

  ∑_n |c_n|² < ∞  ⟺  ∑_n c_n f_n converges unconditionally.

1.2 Frames and bases

We have shown in Theorem 1.1.7 that frames provide a description of a Hilbert space H, i.e., every f ∈ H can be written f = ∑_n c_n f_n with scalars c_n = ⟨f, S⁻¹f_n⟩. We now consider whether these representations are unique, in other words, whether frames are bases for H.

DEFINITION 1.2.1: A sequence {x_n} is a basis for H if for every x ∈ H there exist unique scalars c_n such that x = ∑_n c_n x_n. The basis is bounded if

  0 < inf_n ‖x_n‖ ≤ sup_n ‖x_n‖ < ∞.

We call a basis unconditional if the series ∑_n c_n x_n converges unconditionally for every x, i.e., every permutation of the series converges.

THEOREM 1.2.2: [22] Inexact frames are not bases.

PROOF: Assume {f_n} is an inexact frame with frame operator S. Then, per definitionem, {f_n}_{n≠m} is a frame for some m. Define a_n = ⟨f_m, S⁻¹f_n⟩, so by Theorem 1.1.7 we have f_m = ∑_n a_n f_n. But we also have f_m = ∑_n c_n f_n, where c_m = 1 and c_n = 0 for n ≠ m. By Theorem 1.1.13 we must have a_m ≠ 1, so these are two different representations of f_m. Therefore {f_n} is not a basis for H. ∎

THEOREM 1.2.3: Let H₁, H₂ be Hilbert spaces, and let {f_n} be a frame for H₁ with bounds A, B and frame operator S. Assume T : H₁ → H₂ is a topological isomorphism; then {Tf_n} is a frame for H₂ with bounds A‖T⁻¹‖⁻², B‖T‖² and frame operator TST*.

PROOF: ∀ g ∈ H₂:

  TST*g = T(∑_n ⟨T*g, f_n⟩ f_n) = ∑_n ⟨g, Tf_n⟩ Tf_n.

So it only remains to show that

  A‖T⁻¹‖⁻² I ≤ TST* ≤ B‖T‖² I.

Therefore let us consider, ∀ g ∈ H₂:

  ⟨TST*g, g⟩ = ⟨ST*g, T*g⟩  ⇒  A‖T*g‖² ≤ ⟨TST*g, g⟩ ≤ B‖T*g‖².

Finally, T is a topological isomorphism, so

  ‖g‖/‖(T*)⁻¹‖ ≤ ‖T*g‖ ≤ ‖T*‖ ‖g‖ = ‖T‖ ‖g‖.

Combining these inequalities, and using ‖(T*)⁻¹‖ = ‖T⁻¹‖, we find that

  A‖T⁻¹‖⁻² ‖g‖² ≤ ⟨TST*g, g⟩ ≤ B‖T‖² ‖g‖². ∎

COROLLARY 1.2.4: {Tf_n} is exact ⟺ {f_n} is exact.

PROOF: The result follows from the fact that T is a topological isomorphism (so T preserves completeness respectively incompleteness of sets) and Theorem 1.1.13. ∎

We have seen that inexact frames are not bases; now let us turn to exact frames. Here we actually get a positive characterization in terms of bases.

THEOREM 1.2.5: [4], [5] A sequence {f_n} in a Hilbert space H is an exact frame for H if and only if it is a bounded unconditional basis for H.

PROOF: ⇒: Assume {f_n} is an exact frame. We have seen in Theorem 1.1.16 that exact frames are bounded in norm, so it only remains to show that {f_n} is an unconditional basis. Every f ∈ H can be expressed as f = ∑_n ⟨f, S⁻¹f_n⟩ f_n and this series converges unconditionally (see Theorem 1.1.7 and Theorem 1.1.5). It remains to show that this representation is unique: if

  f = ∑_n c_n f_n  ⇒  ⟨f, S⁻¹f_m⟩ = ∑_n c_n ⟨f_n, S⁻¹f_m⟩ = c_m,

since {f_n} and {S⁻¹f_n} are biorthogonal (Corollary 1.1.15).

⇐: Assume {f_n} is a bounded unconditional basis for H. Then there is an orthonormal basis {e_n} and a topological isomorphism U : H → H such that Ue_n = f_n ∀ n. Therefore we have ∀ f ∈ H:

  ∑_n |⟨f, f_n⟩|² = ∑_n |⟨f, Ue_n⟩|² = ∑_n |⟨U*f, e_n⟩|² = ‖U*f‖².

But

  ‖f‖/‖(U*)⁻¹‖ ≤ ‖U*f‖ ≤ ‖U*‖ ‖f‖,

so {f_n} forms a frame for H. Obviously the frame is exact, because the removal of any vector of a basis leaves an incomplete set. ∎

REMARK 1.2.6: [5] Obviously an exact frame {f_n} is equivalent to a Riesz basis (i.e., a basis which can be obtained from an orthonormal basis by means of a bounded invertible operator).

COROLLARY 1.2.7: [22] Given a frame {f_n}, the following statements are equivalent:

1. {f_n} is exact;
2. ⟨f_n, S⁻¹f_n⟩ = 1 ∀ n;
3. {f_n} and {S⁻¹f_n} are biorthogonal.

PROOF: Follows from Theorem 1.1.13 and Corollary 1.1.15. ∎

COROLLARY 1.2.8: [22] Given a tight frame {f_n} with bounds A = B, the following statements are equivalent:

1. {f_n} is exact;
2. {f_n} is an orthogonal set;
3. ‖f_n‖² = A ∀ n.

PROOF: Follows from Corollary 1.2.7 and the fact that S = AI. ∎

THEOREM 1.2.9: [] If {f_n} is a tight frame for H with bounds A = B = 1 and ‖f_n‖ = 1 for all n, then {f_n} is an orthonormal basis of H.

1.3 Weyl-Heisenberg frames

DEFINITION 1.3.1: [22] Given g ∈ L²(ℝ) and a, b > 0, we say that (g, a, b) generates a Weyl-Heisenberg frame for L²(ℝ) if {T_{na} E_{mb} g}_{m,n∈ℤ} is a frame for L²(ℝ). The numbers a and b are the frame parameters; more precisely, a is the shift parameter and b is the modulation parameter. In the terminology of wavelet theory, the function g may be called the mother wavelet. Weyl-Heisenberg frames are also designated as Gabor frames.

THEOREM 1.3.2: [3] Assume g ∈ L²(ℝ) and a, b > 0 satisfy:

1. There exist constants A, B such that

  0 < A = ess inf_{x∈ℝ} ∑_n |g(x - na)|²,  ess sup_{x∈ℝ} ∑_n |g(x - na)|² = B < ∞.

2. g has compact support, with supp(g) ⊆ I ⊆ ℝ, where I is some interval of length 1/b.

Then (g, a, b) generates a Weyl-Heisenberg frame for L²(ℝ) with frame bounds A/b, B/b.

COROLLARY 1.3.3: [22] If (g, a, b) generates a W-H frame for L²(ℝ), then (ĝ, b, a) generates a W-H frame for L²(ℝ).

PROOF: Follows immediately from (T_{na} E_{mb} g)^ = E_{-na} T_{mb} ĝ and from the theorem of Plancherel. ∎

1.4 Affine frames

DEFINITION 1.4.1: [22] We define

  H²₊(ℝ) = {f ∈ L²(ℝ) : supp(f̂) ⊆ [0, ∞)},
  H²₋(ℝ) = {f ∈ L²(ℝ) : supp(f̂) ⊆ (-∞, 0]}.

These are Hilbert spaces with the same inner product as the Hilbert space L²(ℝ) and with norms

  ‖f‖_{H²₊} = ( ∫₀^∞ |f̂(x)|² dx )^{1/2}  and  ‖f‖_{H²₋} = ( ∫_{-∞}^0 |f̂(x)|² dx )^{1/2}.

Moreover, H²₋(ℝ) and H²₊(ℝ) are closed subspaces of the Hilbert space L²(ℝ), and each is the orthogonal complement of the other.

DEFINITION 1.4.2: [22] Given g ∈ H²₊(ℝ), a > 1 and b > 0, we say that (g, a, b) generates an affine frame for H²₊(ℝ) if {D_{a^n} T_{mb} g}_{m,n∈ℤ} is a frame for H²₊(ℝ). The number b is called the shift parameter and a is the dilation parameter; together they are designated the frame parameters. Analogous to the Weyl-Heisenberg frames, in connection with wavelet theory the function g is called the mother wavelet.

THEOREM 1.4.3: [3] Assume g ∈ L²(ℝ) satisfies supp(ĝ) ⊆ [l, L], where 0 < l < L < ∞, and let a > 1, b > 0 be such that:

1. There exist constants A, B such that

  0 < A = ess inf ∑_n |ĝ(a^n x)|²,  ess sup ∑_n |ĝ(a^n x)|² = B < ∞;

2. (L - l) ≤ 1/b.

Then {D_{a^n} T_{mb} g} is a frame for H²₊(ℝ) with frame bounds A/b, B/b. An analogous result holds for H²₋(ℝ).

It is easy to see that if we combine a frame for H²₋(ℝ) with one for H²₊(ℝ), then we obtain a frame for L²(ℝ).

THEOREM 1.4.4: [22] Assume g₁, g₂ ∈ L²(ℝ) satisfy supp(ĝ₁) ⊆ [-L, -l] and supp(ĝ₂) ⊆ [l, L], where 0 < l < L < ∞, and let a > 1, b > 0 satisfy:

1. There exist constants A, B such that

  0 < A = min{ ess inf ∑_n |ĝ₁(a^n x)|², ess inf ∑_n |ĝ₂(a^n x)|² },
  max{ ess sup ∑_n |ĝ₁(a^n x)|², ess sup ∑_n |ĝ₂(a^n x)|² } = B < ∞;

2. (L - l) ≤ 1/b.

Then the collection of functions {D_{a^n} T_{mb} g₁, D_{a^n} T_{mb} g₂} is a frame for L²(ℝ) with frame bounds A/b, B/b.

Chapter 2

PROJECTIONS AND PSEUDOINVERSE

2.1 Least Squares Approximations and Projections

Let M be an (m × n) matrix with m > n (for example, m represents the number of observations in an experiment and n the number of unknowns that should be measured in this experiment). Let x be an (n × 1) vector and b an (m × 1) vector (to stick to the example above, b represents the data given by an experiment). It must be expected that the system Mx = b will be inconsistent. Probably there is no choice of x that perfectly fits the data b; in other words, probably the vector b will not be a linear combination of the columns of M. One possibility to "solve" this inconsistent system is to determine x from a part of the system and ignore the rest, but this is hard to justify if all m equations come from the same source. Rather than expecting no error in some equations and large errors in others, it is more reasonable to choose x so as to minimize the average error in the m equations. There are many ways to define such an average, but the most convenient is the sum of squares; this is nothing but the squared distance from b to the point Mx in the column space of M and is well known as the norm ‖Mx - b‖². Therefore searching for the least squares solution, which will minimize the error, is the same as locating the point p = Mx that is

[Figure 2.1: Projection onto the column space of a 3 by 2 matrix.]

closer to b than any other point in the column space. Geometrically, p must be the projection of b onto the column space of M, and the error vector Mx - b must be perpendicular to that space (Fig. 2.1). This perpendicularity to a space is expressed as follows. Each vector in the column space of M is a linear combination of the columns, with some coefficients y₁, …, y_n; in other words, it is a vector of the form My. For all choices of y, these vectors must be perpendicular to the error vector Mx - b:

  (My)ᵀ(Mx - b) = 0  or  yᵀ(MᵀMx - Mᵀb) = 0.

This is true for every y, and there is only one way in which that can happen: the vector in brackets has to be the zero vector, MᵀMx - Mᵀb = 0. (We denote by Mᵀ the transpose of a matrix M in the real case, respectively the adjoint transpose in the complex case.) The geometry has led us directly to the fundamental equations of least squares theory:

THEOREM 2.1.1: The least squares solution to an inconsistent system Mx = b of m equations in n unknowns satisfies

  MᵀMx = Mᵀb.
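A minimal numerical sketch of Theorem 2.1.1 (NumPy; the data are invented for illustration): solving the normal equations agrees with a library least-squares solver, and the residual is orthogonal to the column space.

```python
import numpy as np

# Overdetermined system M x = b (m > n), columns of M linearly independent.
M = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([0.1, 0.9, 2.1, 2.9])

# Normal equations: M^T M x = M^T b.
x_normal = np.linalg.solve(M.T @ M, M.T @ b)
x_lstsq, *_ = np.linalg.lstsq(M, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)

# The error vector M x - b is perpendicular to the column space of M.
assert np.allclose(M.T @ (M @ x_normal - b), 0.0)
```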

These are known as the normal equations. If the columns of M are linearly independent, then the matrix MᵀM is invertible and the unique least squares solution is

  x = (MᵀM)⁻¹Mᵀb.

The projection of b onto the column space is therefore

  p = Mx = M(MᵀM)⁻¹Mᵀb.

REMARK 2.1.2: The matrix (MᵀM)⁻¹Mᵀ is one of the left-inverses of M: (MᵀM)⁻¹MᵀM = I. Such a left-inverse is guaranteed because the columns of M are linearly independent.

2.2 Projection matrices

DEFINITION 2.2.1: P is called a projection matrix if P satisfies the following two properties:

(1) P is idempotent: P² = P;
(2) P is symmetric: P = Pᵀ.

Our computations have shown that the closest point to b is p = M(MᵀM)⁻¹Mᵀb. The matrix that describes this construction will be denoted by P and is given by

  P = M(MᵀM)⁻¹Mᵀ.

THEOREM 2.2.2: If P is of the form P = M(MᵀM)⁻¹Mᵀ, then P is a projection matrix (cf. [45]).

PROOF: P is idempotent:

  P² = M(MᵀM)⁻¹MᵀM(MᵀM)⁻¹Mᵀ = M(MᵀM)⁻¹Mᵀ = P.

P is symmetric:

  Pᵀ = (Mᵀ)ᵀ((MᵀM)⁻¹)ᵀMᵀ = M((MᵀM)ᵀ)⁻¹Mᵀ = M(MᵀM)⁻¹Mᵀ = P. ∎
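Both defining properties of Theorem 2.2.2 are easy to check numerically (NumPy; the matrix is chosen for illustration):

```python
import numpy as np

M = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])  # independent columns
P = M @ np.linalg.inv(M.T @ M) @ M.T                # projection onto R(M)

assert np.allclose(P @ P, P)      # (1) idempotent
assert np.allclose(P, P.T)        # (2) symmetric

# I - P projects onto the orthogonal complement: it annihilates R(M).
I = np.eye(3)
assert np.allclose((I - P) @ M, 0.0)
```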

We can say that p = Pb is the component of b in the column space, and the error b - Pb is the component in the orthogonal complement. In other words, I - P is also a projection matrix: it projects any vector b onto the orthogonal complement, and the projection is (I - P)b = b - Pb. In short, we have a matrix formula for splitting a vector into two perpendicular components: Pb is in the column space R(M), and the other component (I - P)b is in the left nullspace N(Mᵀ), which is the orthogonal complement of the column space. (An analogous relation holds for the row space R(Mᵀ) and the nullspace N(M), a result which we use later.)

2.3 Pseudoinverse and the Singular Value Decomposition

What is the optimal solution to an inconsistent system Mx = b? That is the key question in this chapter, and it is not yet completely answered. Our goal is to find a rule that specifies x, given any coefficient matrix M and any right side b. We know several equivalent conditions for the equation Mx = p to have only one solution:

1. The columns of M are linearly independent.
2. The nullspace of M contains only the zero vector.
3. The rank of M is n.
4. The square matrix MᵀM is invertible.

In such a case the only solution to Mx = p = Pb is

  x = (MᵀM)⁻¹Mᵀb.

But what if the conditions (1)-(4) do not hold, and x is not uniquely determined by Mx = p? A good tool to manage this situation is the pseudoinverse M⁺. We have to choose one of the many vectors that satisfy Mx = p, and that choice will be, per definitionem, the optimal solution x = M⁺b to the inconsistent system Mx = b.

The choice is made according to the following rule: the optimal solution, among all solutions of Mx = p, is the one that has minimum length. How can we find it? Remember that the row space and the nullspace of M are orthogonal complements in ℝⁿ. This means that any vector x can be split into two perpendicular pieces: its projection onto the row space and its projection onto the nullspace. Suppose we apply this splitting to one of the solutions (call it x̂) of the equation Mx = p. Then x̂ = x̂_r + w, where x̂_r is in the row space and w is in the nullspace. Now there are three important points:

1. The component x̂_r is itself a solution of Mx = p, since Mw = 0 implies Mx̂_r = M(x̂_r + w) = Mx̂ = p.
2. All solutions of Mx = p share the same component x̂_r in the row space, and differ only in the nullspace component w.
3. The length of such a solution x̂_r + w obeys Pythagoras' law, since the two components are orthogonal:

  ‖x̂_r + w‖² = ‖x̂_r‖² + ‖w‖².

Our conclusion is this: the solution that has minimum length is x̂_r. We should choose the nullspace component to be zero, leaving a solution that lies entirely in the row space. These considerations lead us to the following definition.

DEFINITION 2.3.1: (cf. [45]) The optimal least squares solution of any system Mx = b is the vector x̂_r (or, if we return to our original notation, x) which is determined by two conditions:

1. Mx equals the projection of b onto the column space of M.
2. x lies in the row space of M.

The only matrix "solving" Mx = b and satisfying these two conditions is the pseudoinverse M⁺, defined by x = M⁺b.

Now we derive some basic properties of the pseudoinverse. M⁺ is also called the Moore-Penrose inverse and was originally defined by Penrose in a completely different way. He proved that for any M there is only one matrix M⁺

satisfying the four conditions:

  (1) MM⁺M = M,
  (2) M⁺MM⁺ = M⁺,
  (3) (MM⁺)ᵀ = MM⁺,
  (4) (M⁺M)ᵀ = M⁺M.

COROLLARY 2.3.2: (cf. [45])

1. M⁺ is an (n × m) matrix: it starts with the vector b ∈ ℝᵐ and produces the vector x ∈ ℝⁿ.
2. The column space of M⁺ is the row space of M, and the row space of M⁺ is the column space of M.
3. rank M⁺ = rank M.
4. (M⁺)⁺ = M, i.e., the pseudoinverse of M⁺ is M itself.
5. If M is invertible, then M⁺ = M⁻¹.
6. (Mᵀ)⁺ = (M⁺)ᵀ.

EXAMPLE 2.3.3: Let

  M = [ σ₁ 0 0 0 ; 0 σ₂ 0 0 ; 0 0 0 0 ]  with σ₁ > 0, σ₂ > 0.

The column space is the x-y plane, so that projecting b annihilates its z-component: p = Pb = (b₁, b₂, 0)ᵀ. The equation Mx = p becomes

  [ σ₁ 0 0 0 ; 0 σ₂ 0 0 ; 0 0 0 0 ] (x₁, x₂, x₃, x₄)ᵀ = (b₁, b₂, 0)ᵀ.

The first two components are determined: x₁ = b₁/σ₁, x₂ = b₂/σ₂. The other two components must be zero, by the requirement of minimum length, and therefore the optimal solution is

  x = (b₁/σ₁, b₂/σ₂, 0, 0)ᵀ,  i.e.  M⁺ = [ 1/σ₁ 0 0 ; 0 1/σ₂ 0 ; 0 0 0 ; 0 0 0 ]  and  M⁺b = (b₁/σ₁, b₂/σ₂, 0, 0)ᵀ.
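The four Penrose conditions above can be checked numerically for any matrix (NumPy sketch; the matrix is chosen arbitrarily for illustration, and `np.linalg.pinv` computes the Moore-Penrose inverse):

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, -1.0]])  # any (m x n) matrix works
Mp = np.linalg.pinv(M)                              # Moore-Penrose inverse

assert np.allclose(M @ Mp @ M, M)          # (1) M M+ M = M
assert np.allclose(Mp @ M @ Mp, Mp)        # (2) M+ M M+ = M+
assert np.allclose((M @ Mp).T, M @ Mp)     # (3) M M+ is symmetric
assert np.allclose((Mp @ M).T, Mp @ M)     # (4) M+ M is symmetric
```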

This example is typical of a whole family of special matrices, namely those with positive numbers σ₁, …, σ_r in the first r entries of the main diagonal, and zeros everywhere else. Denoting this matrix by Σ, its pseudoinverse is computed as in the example:

  Σ = diag(σ₁, …, σ_r, 0, …, 0)  and  Σ⁺ = diag(1/σ₁, …, 1/σ_r, 0, …, 0).

If Σ is (m × n), then Σ⁺ is (n × m). It is easy to see that the rank of Σ⁺ equals the rank of Σ, which is r, and that the pseudoinverse brings us back to Σ: (Σ⁺)⁺ = Σ.

One reason that these simple matrices of the form of Σ are so important arises from a new way to factorize the matrix M: the so-called singular value decomposition. Since this decomposition makes use of orthogonal matrices, we have to give the following definition and remark.

DEFINITION 2.3.4: Given an (n × n) square matrix Q, let q₁, …, q_n be its columns. Q is an orthogonal matrix if it has orthonormal columns, that is,

  qᵢᵀqⱼ = 1 if i = j (giving the normalization),  qᵢᵀqⱼ = 0 if i ≠ j (giving the orthogonality).

REMARK 2.3.5:

1. An orthogonal matrix has the following properties:

  (a) QᵀQ = I,  (b) QQᵀ = I,  (c) Qᵀ = Q⁻¹.

2. Not only does Q have orthonormal columns, as required by the definition of an orthogonal matrix, but its rows are orthonormal as well.

3. Multiplication by an orthogonal matrix Q preserves lengths: ||Qx|| = ||x|| for every vector x, and it also preserves inner products: (Qx)^T (Qy) = x^T y for every pair of vectors x, y.

THEOREM 2.3.6: [45] Any (m × n) matrix M can be factored into

    M = Q_1 Σ Q_2^T

where Q_1 is an (m × m) orthogonal matrix, Q_2 is an (n × n) orthogonal matrix, and Σ has the special diagonal form described above.

PROOF: To prove the singular value decomposition we make use of the fact that for a symmetric matrix like M^T M there is an orthonormal set of eigenvectors x_1, ..., x_n (which form the columns of Q_2):

    M^T M x_i = λ_i x_i   with x_i^T x_i = 1 and x_j^T x_i = 0 for j ≠ i.   (1)

Now we take the inner product with x_i:

    x_i^T M^T M x_i = λ_i x_i^T x_i   ⇔   ||M x_i||^2 = λ_i,   (2)

which implies λ_i ≥ 0 for all i. Suppose that λ_1, ..., λ_r are strictly positive, and the remaining n − r of the M x_i and λ_i are zero. For the strictly positive ones we set σ_i = sqrt(λ_i) and y_i = M x_i / σ_i; the others, which are equal to zero, are not changed. The y_i are unit vectors (see (2)), and they are orthogonal (by (1)):

    y_j^T y_i = x_j^T M^T M x_i / (σ_j σ_i) = λ_i x_j^T x_i / (σ_j σ_i) = 0   for j ≠ i.

Therefore we have r orthonormal vectors, and by the Gram-Schmidt orthogonalization routine they can be extended to a full orthonormal basis y_1, ..., y_r, ..., y_m, which is used in the columns of Q_1. Then the entries of Q_1^T M Q_2 are the numbers y_j^T M x_i, and they are zero whenever i > r, because then M x_i = 0. Otherwise they equal y_j^T σ_i y_i, which is zero for j ≠ i and σ_i for j = i. In other words, Q_1^T M Q_2 is exactly the special matrix Σ with the σ_i along its main diagonal, and hence M = Q_1 Σ Q_2^T. The rotations Q_1 and Q_2 just swing the column space of M into line with the row space, and M becomes the diagonal matrix Σ.
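Theorem 2.3.6 and the construction used in its proof can be checked numerically. A Python/NumPy sketch (the matrix size 5 × 3 and the random entries are arbitrary; NumPy's `svd` returns exactly the factors Q_1, the singular values, and Q_2^T):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))        # an arbitrary (m x n) test matrix

# numpy returns M = Q1 @ Sigma @ Q2t with orthogonal Q1 and Q2t
Q1, s, Q2t = np.linalg.svd(M)

# rebuild the (m x n) matrix Sigma of the special diagonal form
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)

assert np.allclose(Q1 @ Sigma @ Q2t, M)        # M = Q1 Sigma Q2^T
assert np.allclose(Q1.T @ Q1, np.eye(5))       # Q1 orthogonal
assert np.allclose(Q2t @ Q2t.T, np.eye(3))     # Q2 orthogonal

# the sigma_i^2 are the eigenvalues of M^T M, as in the proof
eigvals = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
assert np.allclose(eigvals, s**2)
```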

The numbers σ_i are named the singular values of M. This singular value decomposition of M leads immediately to an explicit formula for the pseudoinverse of M, something that has so far been lacking. In this special case (but not in general, see below, Lemma 2.4.1) it is possible to "pseudoinvert" the three matrices separately and multiply in the usual reverse order; since the inverse of an orthogonal matrix is its transpose (see Remark 2.3.5), we get the following formula:

THEOREM 2.3.7: Given any matrix M, the pseudoinverse M^+ is given by:

    M^+ = Q_2 Σ^+ Q_1^T

PROOF: This formula can be proved directly from the least squares principle. Multiplication by the orthogonal matrix Q_1^T leaves the length unchanged, by Remark 2.3.5, so the error to be minimized is

    ||Mx − b|| = ||Q_1 Σ Q_2^T x − b|| = ||Σ Q_2^T x − Q_1^T b||.

Introduce the new unknown y = Q_2^T x = Q_2^{-1} x, which has the same length as x. Then we want to minimize ||Σ y − Q_1^T b||, and the optimal solution y (the shortest of all minimizing vectors) is

    y = Σ^+ Q_1^T b.

Therefore x = Q_2 y = Q_2 Σ^+ Q_1^T b, or M^+ = Q_2 Σ^+ Q_1^T.

Now we have a nice formula, but the snag is that the singular value decomposition requires us to find the orthogonal matrices Q_1 and Q_2, and that is often impossible to do by row operations. But we do not have to give up. The way out is to try to compute M^+ directly from the Gaussian factors L and U of the triangular factorization M = LU. Since the interested reader can find a good description of the Gaussian factorization in [45], we only mention the essential facts: Let M be an (m × n) matrix of rank r and M = LU its triangular decomposition; the last m − r rows of U are all zero. Suppose we throw them away to produce an (r × n) matrix U̅. In the matrix multiplication M = LU, the last m − r columns of L are only multiplying those zero rows at the bottom of U. Therefore we also throw away those columns of L, leaving L̅. The product is still the same. Summing up, we obtain the following theorem.
THEOREM 2.3.8: [45] The (m × n) matrix M of rank r has a factorization into an (m × r) matrix L̅ times an (r × n) matrix U̅, i.e.

    M = L̅ U̅
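Both routes to M^+ can be sketched numerically: the SVD formula of Theorem 2.3.7, and the rank factorization M = L̅U̅ of Theorem 2.3.8 together with the explicit formula M^+ = U̅^T(U̅U̅^T)^{-1}(L̅^T L̅)^{-1}L̅^T that the text derives below (Theorem 2.3.10). A Python/NumPy sketch with an arbitrary rank-2 test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 2
Lbar = rng.standard_normal((5, r))     # (m x r), full column rank
Ubar = rng.standard_normal((r, 4))     # (r x n), full row rank
M = Lbar @ Ubar                        # an (m x n) matrix of rank r

# Theorem 2.3.7: M+ = Q2 Sigma+ Q1^T
Q1, s, Q2t = np.linalg.svd(M)
Sigma_plus = np.zeros((4, 5))
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])   # invert only the r positive sigma_i
M_plus = Q2t.T @ Sigma_plus @ Q1.T
assert np.allclose(M_plus, np.linalg.pinv(M))

# Theorem 2.3.10: M+ from the rank factorization M = Lbar Ubar
M_plus2 = (Ubar.T @ np.linalg.inv(Ubar @ Ubar.T)
           @ np.linalg.inv(Lbar.T @ Lbar) @ Lbar.T)
assert np.allclose(M_plus2, M_plus)
```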

REMARK 2.3.9:

1. L̅ has rank r and the same column space as M.
2. It is no surprise that U̅ also has rank r and the same row space as M.

With this new factorization M = L̅U̅ we can finally give an explicit formula to calculate M^+.

THEOREM 2.3.10: [45] The pseudoinverse of any (m × n) matrix M is given by

    M^+ = U̅^T (U̅ U̅^T)^{-1} (L̅^T L̅)^{-1} L̅^T

2.4 Fast Computation of the Pseudoinverse

LEMMA 2.4.1: (cf. [45]). The product formula for inverse matrices, i.e. (M_1 M_2)^{-1} = M_2^{-1} M_1^{-1}, does not hold for pseudoinverse matrices. In general we must expect that (M_1 M_2)^+ ≠ M_2^+ M_1^+.

But the following equation is correct:

LEMMA 2.4.2: (cf. [45]). If Q is an orthogonal (n × n) matrix, then (MQ)^+ = Q^{-1} M^+ for any (m × n) matrix M.

THEOREM 2.4.3: For any (m × n) matrix M with rank r one has:

    M^+ = (M^T M)^+ M^T = M^T (M M^T)^+.

PROOF: Let M = Q_1 Σ Q_2^T be the singular value decomposition of M. Then we can expand (using that Q_1 and Q_2 are orthogonal matrices, i.e. Q_1^T Q_1 = I and Q_2^T Q_2 = I):

    (M^T M)^+ M^T = (Q_2 Σ^T Q_1^T Q_1 Σ Q_2^T)^+ Q_2 Σ^T Q_1^T
                  = (Q_2 Σ^T Σ Q_2^T)^+ Q_2 Σ^T Q_1^T
                  = Q_2 (Σ^T Σ)^+ Q_2^T Q_2 Σ^T Q_1^T
                  = Q_2 (Σ^T Σ)^+ Σ^T Q_1^T

Next we have to show that (Σ^T Σ)^+ Σ^T = Σ^+.

We know that Σ has the form:

    Σ = diag(σ_1, ..., σ_r, 0, ..., 0)
    ⟹ Σ^T Σ = diag(σ_1^2, ..., σ_r^2, 0, ..., 0)
    ⟹ (Σ^T Σ)^+ = diag(1/σ_1^2, ..., 1/σ_r^2, 0, ..., 0)
    ⟹ (Σ^T Σ)^+ Σ^T = diag(1/σ_1, ..., 1/σ_r, 0, ..., 0) = Σ^+.

So we can finish our proof with

    Q_2 (Σ^T Σ)^+ Σ^T Q_1^T = Q_2 Σ^+ Q_1^T = M^+.

The second part of the formula can be proved in an analogous way.

REMARK 2.4.4: In general M^+ M ≠ I, but M^+ M always satisfies the conditions for projection matrices.

PROOF: From M = Q_1 Σ Q_2^T we get

    M^+ M = Q_2 Σ^+ Q_1^T Q_1 Σ Q_2^T = Q_2 Σ^+ Σ Q_2^T

and hence, using Q_2^T Q_2 = I,

    (M^+ M)^2 = (Q_2 Σ^+ Σ Q_2^T)^2 = Q_2 (Σ^+ Σ)^2 Q_2^T.

Σ^+ Σ is a diagonal matrix of the special form

    Σ^+ Σ = diag(1, ..., 1, 0, ..., 0)   (with r ones),

so (Σ^+ Σ)^2 = Σ^+ Σ.

So we can conclude

    Q_2 (Σ^+ Σ)^2 Q_2^T = Q_2 Σ^+ Σ Q_2^T = M^+ M,

i.e. (M^+ M)^2 = M^+ M; the symmetry (M^+ M)^T = M^+ M follows in the same way, so M^+ M is indeed a projection matrix.

REMARK 2.4.5: The importance of the formula given in Theorem 2.4.3 lies in the numerical computation of pseudoinverse matrices. Let M be an (m × n) matrix with m ≪ n; then M M^T is a small (m × m) matrix, and it can be much more convenient to compute (M M^T)^+ than to calculate M^+ directly.

2.5 Singular Values and Frames

THEOREM 2.5.1: Let {f_n}_{n ∈ I} be a frame for H, where I is a finite index set, and let M be the (n × m) matrix whose rows are given by the elements f_i of the frame above. Then the corresponding frame bounds A, B can be obtained from the smallest and largest singular values of M:

    A = σ_r^2,   B = σ_1^2.

PROOF: We denote by r the rank of M, by Q_1 Σ Q_2^T = M the singular value decomposition, and by S the frame operator. Let f be a row vector of length m. (If one interprets f as a column vector, then fM means matrix multiplication from the right.) Then

    Sf = Σ_n ⟨f, f_n⟩ f_n = f M^T M =: f M_S,

and the matrix M_S representing S satisfies

    M_S = M^T M = Q_2 Σ^T Q_1^T Q_1 Σ Q_2^T = Q_2 Σ^T Σ Q_2^T   with Σ^T Σ = diag(σ_1^2, ..., σ_r^2, 0, ..., 0).

Hence we obtain

    B = sup_{||f|| = 1} |⟨Sf, f⟩|

    = sup_{||f|| = 1} |⟨f Q_2 Σ^T Σ Q_2^T, f⟩|
    = sup_{||f|| = 1} |⟨f Q_2 Σ^T Σ, f Q_2⟩|
    = sup_{||g|| = 1} |⟨g Σ^T Σ, g⟩|   (setting g := f Q_2)
    = ||Σ^T Σ|| = σ_1^2.

To show that A = σ_r^2 we consider the operator inequality

    B I ≥ S ≥ A I.

Analogously, A^{-1} equals the largest singular value of the matrix M_S^+ = (M^T M)^+, which represents S^{-1}. Since by the above argument the singular values of this matrix are given by 1/σ_1^2, ..., 1/σ_r^2, and σ_r^2 is the smallest singular value of M_S, 1/σ_r^2 is the largest singular value of M_S^+. Using the operator inequality for S^{-1} we obtain

    A^{-1} = 1/σ_r^2   ⟹   A = σ_r^2.

If n ≤ m, it is better to compute the singular values of M M^T, which coincide with the non-zero singular values of M^T M. For practical purposes an iterative algorithm to compute the bounds A and B will be more useful. One possibility is to work with vector iteration, also referred to as the power method or method of von Mises:

    x_{k+1} := S x_k / ||x_k||   with x_0 ≠ 0 ∈ H   ⟹   B = lim_{k→∞} ||S x_k|| / ||x_k||,

if x_0 is not in the orthogonal complement of the maximal eigenvector. In a similar way we can obtain A by applying S^{-1} instead of S.
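A small Python/NumPy sketch of this section: the frame bounds of a (made-up) random frame for R^3 read off from the singular values, the identities of Theorem 2.4.3, and the power method estimate of B.

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((7, 3))     # rows: a made-up frame of 7 vectors for R^3

s = np.linalg.svd(M, compute_uv=False)
B, A = s[0]**2, s[-1]**2            # Theorem 2.5.1: B = sigma_1^2, A = sigma_r^2

# A and B are the extreme eigenvalues of the frame operator matrix M_S = M^T M
S = M.T @ M
eig = np.linalg.eigvalsh(S)         # ascending order
assert np.isclose(eig[0], A) and np.isclose(eig[-1], B)

# Theorem 2.4.3 along the way: M+ = (M^T M)+ M^T = M^T (M M^T)+
pinv = np.linalg.pinv
assert np.allclose(pinv(M), pinv(M.T @ M) @ M.T)
assert np.allclose(pinv(M), M.T @ pinv(M @ M.T))

# vector iteration (power method / von Mises) for the upper frame bound B
x = rng.standard_normal(3)
for _ in range(500):
    x = S @ x / np.linalg.norm(x)
B_est = np.linalg.norm(S @ x) / np.linalg.norm(x)
assert np.isclose(B_est, B, rtol=1e-3)
```

Replacing S by its (pseudo)inverse in the last loop would estimate 1/A in the same way.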

Chapter 3

RECONSTRUCTION OF BAND-LIMITED SIGNALS FROM IRREGULAR SAMPLES

3.1 Introduction and notation

The main question of this chapter is how to recover a signal with limited band-width and known spectral support but unknown amplitudes from irregularly spaced sampling values. The irregular sampling problem for band-limited 1-dimensional and 2-dimensional signals has attracted much interest recently (cf. [29], [32] for a general introduction). In this report we restrict ourselves to the one-dimensional reconstruction problem. Since implementations of signals and reconstruction methods on computers are necessarily discrete, it is important to have a discrete interpretation of the theory (which deals with the continuous situation). Sometimes the interpretation will be only heuristic. In the discrete case the signals of length n are treated as functions on the cyclic group Z_n. Let us begin with terminology and a description of the situation. For simplicity we concentrate on the Hilbert space case. The operator F for the Fourier transformation can be defined by the following theorem.

THEOREM 3.1.1: (Theorem of Plancherel) (cf. [28]). There exists a unique operator F from L^2(R) onto L^2(R̂) having the properties:

1. Ff = f̂ for f ∈ L^1 ∩ L^2(R);
2. ||Ff||_{L^2(R̂)} = ||f||_{L^2(R)}.

As a consequence of Pontryagin's duality theorem, R̂ can be identified with R (cf. [28]); hence in the theorem above R̂ can be replaced by R. For a closed subset Ω ⊆ R we denote by B^2_Ω the set of all square integrable functions f ∈ L^2(R) whose Fourier transform vanishes on the complement of Ω. In other words, this set consists of all signals with finite energy with spectrum in Ω. If we denote the spectrum of f by spec f, this means

    spec f := supp f̂ ⊆ Ω.

The Dirac measure at x is denoted by δ_x. In the discrete case this is just the unit vector

    δ_x(y) = 1 if y = x,   δ_x(y) = 0 if y ≠ x.

The convolution of functions f, g on R is defined as

    (f ∗ g)(x) = ∫_{−∞}^{+∞} f(x − y) g(y) dy,   −∞ < x < +∞.

The corresponding formula for the discrete case is given by

    (f ∗ g)(x) = Σ_{y=1}^{n} f(x − y) g(y).

There is an important connection between convolution and Fourier transformation.

LEMMA 3.1.2: [39]

    f ∗ g = F^{-1}(f̂ · ĝ)

(here "·" means pointwise multiplication).
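In the discrete (cyclic) setting Lemma 3.1.2 can be checked directly; a Python/NumPy sketch with an arbitrary signal length, comparing the cyclic convolution sum with the FFT route F^{-1}(f̂ · ĝ):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# direct cyclic convolution on Z_n: (f*g)(x) = sum_y f(x-y) g(y)
conv = np.array([sum(f[(x - y) % n] * g[y] for y in range(n))
                 for x in range(n)])

# Lemma 3.1.2: f*g = F^{-1}(f^ . g^), pointwise product of the DFTs
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))

assert np.allclose(conv, conv_fft)
```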

Figure 3.1: SINC-function.

At the beginning of this report we introduced the shift operators T_i f(x); if we consider Z_n, they can be interpreted as cyclic rotations. T_i f can be considered as the convolution of f with the Dirac measure δ_{x_i}. A main tool will be the function sinc_Ω, the inverse Fourier transform of the indicator function of Ω. Fig. 3.1 illustrates the behavior of a sinc-function. The importance of sinc_Ω in the irregular sampling problem arises from the following equation:

THEOREM 3.1.3: For f ∈ B^2_Ω we have

    f ∗ sinc_Ω = f.

PROOF: Applying Lemma 3.1.2 gives

    F(f ∗ sinc_Ω) = Ff · F sinc_Ω

("·" means again pointwise multiplication). Remember that f̂(x) = 0 for x ∉ Ω and recall the definition of sinc_Ω; hence all values of f̂ which are non-zero are multiplied by 1.

Convolution by sinc_Ω can be seen as the orthogonal projection of f onto B^2_Ω.
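A discrete sketch of Theorem 3.1.3 and the projection property (the sizes n = 64 and k = 5 are made-up test parameters; the spectrum Ω is the symmetric set of frequencies −k, ..., k on Z_n):

```python
import numpy as np

n, k = 64, 5
# spectrum Omega = {-k,...,k}, i.e. DFT indices {0,...,k} and {n-k,...,n-1}
mask = np.zeros(n)
mask[:k + 1] = 1.0
mask[n - k:] = 1.0

# sinc_Omega is the inverse Fourier transform of the indicator of Omega
sinc = np.fft.ifft(mask)
assert np.allclose(np.fft.fft(sinc), mask)

rng = np.random.default_rng(1)
f = np.fft.ifft(mask * np.fft.fft(rng.standard_normal(n)))   # band-limited

# f * sinc_Omega computed as F^{-1}(f^ . sinc^) (Lemma 3.1.2)
conv = np.fft.ifft(np.fft.fft(f) * np.fft.fft(sinc))
assert np.allclose(conv, f)            # Theorem 3.1.3: f * sinc_Omega = f

# for an arbitrary g, convolution by sinc_Omega acts as the orthogonal
# projection onto B^2_Omega: applying it twice changes nothing
g = rng.standard_normal(n)
Pg = np.fft.ifft(np.fft.fft(g) * mask)
PPg = np.fft.ifft(np.fft.fft(Pg) * mask)
assert np.allclose(Pg, PPg)
```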

If Ω is a bounded set, the well-known Sampling Theorem (Shannon, Whittaker, ..., cf. [43], [46], [47]) tells us that for any sufficiently small lattice constant α > 0 it is possible to recover the function f completely from the set of regular sampling values (f(nα))_{n ∈ Z} by forming the following series:

    f(x) = Σ_{n ∈ Z} f(nα) T_{nα} sinc(x).

The series is convergent in the L^2-sense, i.e. in the quadratic mean, and uniformly. The critical value (the upper limit of all admissible values α) is usually called the Nyquist rate. The building blocks for this series representation are shifted versions of sinc, or more generally of some g ∈ L^2 with sufficiently small spectrum spec g, satisfying ĝ(s) = 1 on Ω. For the case of irregular sampling values (f(x_i))_{i ∈ I} no direct sampling series of such a simple form can be expected, i.e. one using only shifted versions of a single function as building blocks and the sampling values as coefficients. Nevertheless it is still possible to recover a band-limited irregularly sampled signal completely (if the sampling set is not too sparse), as we will show. There are various techniques to reconstruct signals, such as iterative algorithms, the frame method, direct methods (based on the solution of linear equations using pseudoinverse matrices) and, last but not least, methods working with matrix iterations, i.e. matrices describing the change of the coefficients from one iteration step to the next. The next sections will give an overview of several techniques.
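The discrete analogue of the regular sampling series is easy to verify: on Z_n with spectrum {−k, ..., k}, sampling on a lattice of step α (with n/α ≥ 2k+1 samples, i.e. above the Nyquist rate) recovers f exactly. The parameters below (n = 64, k = 5, α = 4) are made up; the factor α plays the role of the lattice constant in the series.

```python
import numpy as np

n, k, alpha = 64, 5, 4          # n/alpha = 16 samples >= 2k+1 = 11 (Nyquist)
mask = np.zeros(n)
mask[:k + 1] = 1.0
mask[n - k:] = 1.0

rng = np.random.default_rng(2)
f = np.fft.ifft(mask * np.fft.fft(rng.standard_normal(n)))   # band-limited

# the sampled measure sum_j f(j alpha) delta_{j alpha}
s = np.zeros(n, dtype=complex)
s[::alpha] = f[::alpha]

# sampling series f = alpha * (s * sinc_Omega), evaluated via the FFT:
# periodization of f^ leaves f^/alpha on Omega when there is no aliasing
f_rec = alpha * np.fft.ifft(np.fft.fft(s) * mask)
assert np.allclose(f_rec, f)
```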

3.2 Iterative reconstruction algorithms

Most of the iterative algorithms can be considered as a kind of alternating mapping method using the given irregularly spaced sampling information about the signal f and the information about the spectral support of f. Actually the following iterative two-step algorithm is typical of most of the methods.

1. As a first step an auxiliary signal is constructed from the given sampling values of f by an approximation operator A. A is a linear operator, except in the POCS method. The resulting function Af can be either a step function or a piecewise linear interpolation (cf. Figures 3.5, 3.6, 3.7 and 3.9), or a well-chosen discrete measure. We will describe this step in detail later. It is important that only the sampling values of f are required to construct Af.

2. We project the in general not Ω-band-limited signal Af into the space B^2_Ω by an orthogonal projection P_Ω, in order to kill the high frequencies of Af. One may think of this mapping as a smoothing operation which eliminates the discontinuities of Af. As mentioned, this projection P_Ω can be described as a low-pass filter, using as transfer function the indicator function of the set Ω, i.e.

    (P_Ω f)^(x) = f̂(x) if x ∈ Ω,   (P_Ω f)^(x) = 0 if x ∉ Ω.

We can give an alternative description of P_Ω f by interpreting the projection of f onto B^2_Ω as the convolution of f with a sinc-type function (cf. Theorem 3.1.3 and the definition of sinc_Ω):

    P_Ω f = sinc_Ω ∗ f.

After these two steps the first cycle is finished and we have obtained an approximation signal f_a = Af ∗ sinc_Ω for f. The next iteration starts again with constructing a new auxiliary signal Af^{(1)} by the operator A. Let us denote the difference f − Af ∗ sinc_Ω by f^{(1)}. It is again in B^2_Ω and we know its sampling values at (x_i)_{i ∈ I}: they are exactly the differences between the original sampling values f(x_i) and the values f_a(x_i) of the approximation signal. We have to add f_a to the filtered Af^{(1)} to obtain f^{(2)}.
Hence the (n+1)-th iteration can be described by

    f^{(n+1)} = f^{(n)} + A(f − f^{(n)}) ∗ sinc_Ω,   (1)

with A(f − f^{(n)}) = Af − Af^{(n)}.

Here Af^{(n)} is the unfiltered auxiliary signal constructed from the "sampling errors" f(x_i) − f^{(n)}(x_i) (which are in general different from zero), with f^{(0)} = 0. A reformulation of (1) provides

    f^{(n+1)} = (f^{(n)} − Af^{(n)} ∗ sinc_Ω) + Af ∗ sinc_Ω =: T f^{(n)} + b.   (2)

The question about convergence of this iterative algorithm arises. Let us consider f_{n+1} = T f_n + b. Then

    ||f_{n+1} − f_n|| = ||T f_n + b − T f_{n−1} − b|| = ||T(f_n − f_{n−1})|| ≤ γ ||f_n − f_{n−1}||,   γ < 1,

if ||T|| ≤ γ < 1. Careful analysis provides convergence of the iteration algorithm for various choices of A, and it is clear that the speed of convergence is the faster the smaller γ can be chosen, i.e. the closer A is to the identity on B^2_Ω. For numerical applications the convolution is best implemented by a pair of Fourier transformations:

    sinc_Ω ∗ f = F^{-1}(F sinc_Ω · Ff) = F^{-1}(1_Ω · Ff).

The next sections describe in detail how the approximation operator A can be chosen. Since the numerical implementation of an algorithm may reveal some properties which do not follow from the present theory, we will give in the following sections both theoretical statements as well as numerical results. For our experiments we have generated band-limited complex signals, preferably with a length n corresponding to a power of 2, to make the discrete Fourier transformation fast. In MATLAB notation the spectrum was usually of the form [0, k] or [−k, k] (i.e. [1, k+1] ∪ [n−k+1, n]), for which we obtain a real-valued sinc_Ω. In the notation of the group Z_n this means that the spectrum is a symmetric set around the neutral element of Z_n, with k points to the 'left' and k points to the 'right'. We generated sampling sets with a number of different features (uniform, random, with variable density, jitter deformation of a lattice, ...). The plots in the following sections were selected with the intention of showing the typical behavior of the various methods.
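The two-step iteration can be sketched in Python/NumPy (the report's experiments use MATLAB). All parameters here are made up: a jittered sampling grid with maximal gap well below the Nyquist gap, and as approximation operator A the piecewise constant (nearest-sample) interpolation mentioned in step 1.

```python
import numpy as np

n, k = 64, 3
mask = np.zeros(n)
mask[:k + 1] = 1.0
mask[n - k:] = 1.0
P = lambda h: np.fft.ifft(np.fft.fft(h) * mask)    # low-pass projection onto B^2_Omega

rng = np.random.default_rng(3)
f = np.fft.ifft(mask * np.fft.fft(rng.standard_normal(n)))   # band-limited test signal

# a made-up irregular sampling set: a jittered grid of step 2 (max gap 3)
x_i = np.unique((np.arange(0, n, 2) + rng.integers(0, 2, size=32)) % n)

def A(samples):
    """Approximation operator A: piecewise constant (nearest-sample) interpolation."""
    out = np.empty(n, dtype=complex)
    for x in range(n):
        d = np.minimum((x - x_i) % n, (x_i - x) % n)   # cyclic distances to the nodes
        out[x] = samples[np.argmin(d)]
    return out

# the iteration f^(0) = 0, f^(m+1) = f^(m) + P A(f - f^(m));
# A only ever sees the sampling values of the error
fm = np.zeros(n, dtype=complex)
errs = []
for _ in range(60):
    fm = fm + P(A(f[x_i] - fm[x_i]))
    errs.append(np.linalg.norm(f - fm))

assert errs[-1] < errs[0]      # the reconstruction error decreases
```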

The experiments were carried out on AT-PCs, 386-PCs and on a SUN workstation, using different versions of the mathematical software package MATLAB(TM). The purchase of this equipment was possible because of support from the "Jubiläumsfond of the Austrian Nationalbank" and the "Universitätspreis der Wiener Wirtschaft".

3.2.1 The Marvasti method (cf. [33], [34], [35], [48]).

The Marvasti approach has its origin in Wiley's natural sampling method (cf. the section below) and makes use of the formula f = f ∗ sinc_Ω (see Theorem 3.1.3) and the interpretation of the shift operators T_i as convolutions with Dirac measures. Thus the Marvasti approximation operator Af is nothing but a discrete measure of the form

    Af = Σ_{i=1}^{p} f(x_i) δ_{x_i},

and the first approximation signal can be obtained by convolving Af with sinc_Ω, hence

    f_a = Σ_{i=1}^{p} f(x_i) T_{x_i} sinc_Ω.

Often the speed of convergence of the algorithm increases rapidly if one multiplies Σ_{i=1}^{p} f(x_i) δ_{x_i} by a global relaxation parameter λ (as we will see later, it makes sense to talk about a global weight), and we obtain

    A_λ f = λ Σ_{i=1}^{p} f(x_i) δ_{x_i}.

The speed of convergence depends on the choice of λ in the following way: if the relaxation parameter is too large, the iteration will diverge; on the other hand, a small value of λ brings slow convergence but stability. The results of our experiments indicate that λ = n/(pρ) is a good choice, where n is the length of the signal f and p is the number of sampling points. The value ρ is a safety factor which helps to prevent divergence. Numerical experiments have shown that ρ = 1.2 gives good convergence for sampling sets with approximately constant density, while still

Figure 3.2: the signal and the sampling set (test signal sitd256).

avoiding divergence for slightly irregular sampling sets. For highly irregularly sampled signals one can enforce convergence in many cases by choosing a larger value for ρ. Clearly a large factor ρ means on the one hand a smaller chance of divergence, but on the other hand slower convergence, even for sampling sequences which are not very irregular. Fig. 3.3 shows the behavior of the Marvasti algorithm (MAR) for different relaxation parameters for the signal shown in Fig. 3.2. In section 3.3 we give an interpretation of the Marvasti method by using frames. Let us compare formula (1) or (2) of section 3.2 to formula (1) of section 3.3 and choose

    Sf = Af ∗ sinc_Ω = Σ_i f(x_i) T_{x_i} sinc_Ω

(this choice is possible, see section 3.3). We obtain (using Corollary 1.1.6) a very good choice for λ by

    λ = 2/(A + B),

where A and B are the frame bounds for the frame {T_{x_i} sinc_Ω}. But since it is often very difficult to compute the bounds A and B (hints how to calculate A and B are given in section 2.5), and since one is interested in a fast approximation of the signal, it makes more sense to choose λ in the way described above, depending on the length of the signal and the number of sampling points. For sampling sets which are not too irregular this choice of λ is close to 2/(A + B).
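The Marvasti iteration with the frame-based relaxation λ = 2/(A + B) can be sketched as follows (Python/NumPy; sizes and sampling set are made up). The frame bounds are computed from the non-zero eigenvalues of the frame operator matrix, following section 2.5.

```python
import numpy as np

n, k = 64, 3
mask = np.zeros(n)
mask[:k + 1] = 1.0
mask[n - k:] = 1.0
sinc = np.fft.ifft(mask).real            # real, since the mask is symmetric

rng = np.random.default_rng(4)
f = np.fft.ifft(mask * np.fft.fft(rng.standard_normal(n)))    # band-limited signal
x_i = np.sort(rng.choice(n, size=40, replace=False))          # irregular sampling set

# frame bounds of {T_{x_i} sinc_Omega} via the eigenvalues of the
# frame operator matrix (section 2.5)
G = np.array([np.roll(sinc, xi) for xi in x_i])   # rows are T_{x_i} sinc_Omega
eig = np.linalg.eigvalsh(G.T @ G)
nz = eig[eig > 1e-10]                             # the non-zero eigenvalues
A_bound, B_bound = nz.min(), nz.max()
lam = 2.0 / (A_bound + B_bound)                   # relaxation lambda = 2/(A+B)

# Marvasti step: add lam * sum_i e_i T_{x_i} sinc_Omega, where the e_i
# are the current sampling errors
fm = np.zeros(n, dtype=complex)
errs = []
for _ in range(100):
    err_samples = f[x_i] - fm[x_i]
    fm = fm + lam * err_samples @ G
    errs.append(np.linalg.norm(f - fm))

assert errs[-1] < errs[0]
```

With this λ the error is reduced in every step by at least the factor (B − A)/(B + A) < 1 on B^2_Ω, which is the rationale behind the choice 2/(A + B).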


APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is FOURIER INVERSION 1. The Fourier Transform and the Inverse Fourier Transform Consider functions f, g : R n C, and consider the bilinear, symmetric function ψ : R n R n C, ψ(, ) = ep(2πi ), an additive

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

RIESZ BASES AND UNCONDITIONAL BASES

RIESZ BASES AND UNCONDITIONAL BASES In this paper we give a brief introduction to adjoint operators on Hilbert spaces and a characterization of the dual space of a Hilbert space. We then introduce the notion of a Riesz basis and give some

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

0.1. Linear transformations

0.1. Linear transformations Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly

More information

In Chapter 14 there have been introduced the important concepts such as. 3) Compactness, convergence of a sequence of elements and Cauchy sequences,

In Chapter 14 there have been introduced the important concepts such as. 3) Compactness, convergence of a sequence of elements and Cauchy sequences, Chapter 18 Topics of Functional Analysis In Chapter 14 there have been introduced the important concepts such as 1) Lineality of a space of elements, 2) Metric (or norm) in a space, 3) Compactness, convergence

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

Math Subject GRE Questions

Math Subject GRE Questions Math Subject GRE Questions Calculus and Differential Equations 1. If f() = e e, then [f ()] 2 [f()] 2 equals (a) 4 (b) 4e 2 (c) 2e (d) 2 (e) 2e 2. An integrating factor for the ordinary differential equation

More information

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form:

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form: 3.3 Gradient Vector and Jacobian Matri 3 3.3 Gradient Vector and Jacobian Matri Overview: Differentiable functions have a local linear approimation. Near a given point, local changes are determined by

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Ane systems in L 2 (IR d ) II: dual systems Amos Ron & Zuowei Shen 1. Introduction and review of previous work 1.1. General We continue in this paper

Ane systems in L 2 (IR d ) II: dual systems Amos Ron & Zuowei Shen 1. Introduction and review of previous work 1.1. General We continue in this paper Ane systems in L 2 (IR d ) II: dual systems Amos Ron Zuowei Shen Computer Science Department Department of Mathematics University of Wisconsin-Madison National University of Singapore 1210 West Dayton

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

18.303: Introduction to Green s functions and operator inverses

18.303: Introduction to Green s functions and operator inverses 8.33: Introduction to Green s functions and operator inverses S. G. Johnson October 9, 2 Abstract In analogy with the inverse A of a matri A, we try to construct an analogous inverse  of differential

More information

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Statistical Inference with Reproducing Kernel Hilbert Space Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

FRAMES AND TIME-FREQUENCY ANALYSIS

FRAMES AND TIME-FREQUENCY ANALYSIS FRAMES AND TIME-FREQUENCY ANALYSIS LECTURE 5: MODULATION SPACES AND APPLICATIONS Christopher Heil Georgia Tech heil@math.gatech.edu http://www.math.gatech.edu/ heil READING For background on Banach spaces,

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Numerical Methods. Root Finding

Numerical Methods. Root Finding Numerical Methods Solving Non Linear 1-Dimensional Equations Root Finding Given a real valued function f of one variable (say ), the idea is to find an such that: f() 0 1 Root Finding Eamples Find real

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

MATH Linear Algebra

MATH Linear Algebra MATH 304 - Linear Algebra In the previous note we learned an important algorithm to produce orthogonal sequences of vectors called the Gramm-Schmidt orthogonalization process. Gramm-Schmidt orthogonalization

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

REPRESENTATIONS OF U(N) CLASSIFICATION BY HIGHEST WEIGHTS NOTES FOR MATH 261, FALL 2001

REPRESENTATIONS OF U(N) CLASSIFICATION BY HIGHEST WEIGHTS NOTES FOR MATH 261, FALL 2001 9 REPRESENTATIONS OF U(N) CLASSIFICATION BY HIGHEST WEIGHTS NOTES FOR MATH 261, FALL 21 ALLEN KNUTSON 1 WEIGHT DIAGRAMS OF -REPRESENTATIONS Let be an -dimensional torus, ie a group isomorphic to The we

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

Mathematics 1. Part II: Linear Algebra. Exercises and problems

Mathematics 1. Part II: Linear Algebra. Exercises and problems Bachelor Degree in Informatics Engineering Barcelona School of Informatics Mathematics Part II: Linear Algebra Eercises and problems February 5 Departament de Matemàtica Aplicada Universitat Politècnica

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Taylor Series and Asymptotic Expansions

Taylor Series and Asymptotic Expansions Taylor Series and Asymptotic Epansions The importance of power series as a convenient representation, as an approimation tool, as a tool for solving differential equations and so on, is pretty obvious.

More information

1.3.1 Definition and Basic Properties of Convolution

1.3.1 Definition and Basic Properties of Convolution 1.3 Convolution 15 1.3 Convolution Since L 1 (R) is a Banach space, we know that it has many useful properties. In particular the operations of addition and scalar multiplication are continuous. However,

More information

if <v;w>=0. The length of a vector v is kvk, its distance from 0. If kvk =1,then v is said to be a unit vector. When V is a real vector space, then on

if <v;w>=0. The length of a vector v is kvk, its distance from 0. If kvk =1,then v is said to be a unit vector. When V is a real vector space, then on Function Spaces x1. Inner products and norms. From linear algebra, we recall that an inner product for a complex vector space V is a function < ; >: VV!C that satises the following properties. I1. Positivity:

More information

buer overlfows at intermediate nodes in the network. So to most users, the behavior of a packet network is not characterized by random loss, but by un

buer overlfows at intermediate nodes in the network. So to most users, the behavior of a packet network is not characterized by random loss, but by un Uniform tight frames for signal processing and communication Peter G. Casazza Department of Mathematics University of Missouri-Columbia Columbia, MO 65211 pete@math.missouri.edu Jelena Kovacevic Bell Labs

More information

I teach myself... Hilbert spaces

I teach myself... Hilbert spaces I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition

More information

Kaczmarz algorithm in Hilbert space

Kaczmarz algorithm in Hilbert space STUDIA MATHEMATICA 169 (2) (2005) Kaczmarz algorithm in Hilbert space by Rainis Haller (Tartu) and Ryszard Szwarc (Wrocław) Abstract The aim of the Kaczmarz algorithm is to reconstruct an element in a

More information

Proposition 42. Let M be an m n matrix. Then (32) N (M M)=N (M) (33) R(MM )=R(M)

Proposition 42. Let M be an m n matrix. Then (32) N (M M)=N (M) (33) R(MM )=R(M) RODICA D. COSTIN. Singular Value Decomposition.1. Rectangular matrices. For rectangular matrices M the notions of eigenvalue/vector cannot be defined. However, the products MM and/or M M (which are square,

More information

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Introduction to Signal Spaces

Introduction to Signal Spaces Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product

More information

MATH 167: APPLIED LINEAR ALGEBRA Least-Squares

MATH 167: APPLIED LINEAR ALGEBRA Least-Squares MATH 167: APPLIED LINEAR ALGEBRA Least-Squares October 30, 2014 Least Squares We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of

More information