TOWARDS A CLASSIFICATION OF REDUCED ORDER MODELING TECHNIQUES. G. Lippens, L. Knockaert and D. De Zutter


1 TOWARDS A CLASSIFICATION OF REDUCED ORDER MODELING TECHNIQUES
G. Lippens, L. Knockaert and D. De Zutter
May 4th, 2006

2 OVERVIEW
- Introduction
- Oblique projections
- Idempotent projectors
- ROM based on orthogonal basis functions
- ROM based on Möbius transforms
- Classification scheme of Möbius-related ROM
- Conclusion

3 MAJOR OBJECTIVES OF ROM METHODS
- Obtaining a smaller model
- Accuracy over a predefined bandwidth
- Preservation of structure
- Size of the reduced model as small as possible
- At reasonable computational cost

4 MAJOR OBJECTIVES OF THIS TALK
- Relating ROM to oblique projections
- Oblique projections as a unifier
- Relating different Krylov subspace methods: the Möbius transform
- Discussing frequency properties of Krylov subspace methods
- Bandlimited reductions
- Reductions of resonant systems

5 OBLIQUE PROJECTIONS IN 2D
[Figure: oblique projection in 2D of a vector a onto the direction u, parallel to the direction v, with points A through H and the angles α, β and γ indicated.]

6 OBLIQUE PROJECTIONS IN 2D cont.
b = u (v^T u)^{-1} v^T a    (1)
  = u (v^T u)^{-1} (v^T a)
  = (AB cos α / cos γ) u = DE u    (2)
(lengths AB, DE and angles α, γ as indicated in the figure)

7 3D PROJECTIONS
The space R^3. Basis: (x = [1 0 0]^T, y = [0 1 0]^T, z = [0 0 1]^T).
Projecting onto the 1D subspace S_u = span(x) and parallel to the 2D subspace S_w = span(y, z).
Projector: P = x(x^T x)^{-1} x^T, which is:
Q_{x;∥yz} = diag(1, 0, 0)    (3)

8 GENERALIZING : N-D OBLIQUE PROJECTIONS
These observations also hold for a general R^n space. If the columns of a matrix U span a space S_U and the columns of V span the space S_V, the projection operator projecting onto S_U and parallel to S_V is determined by:
Q = U(V^T U)^{-1} V^T    (4)
If the matrix Q has nullspace N and range R, it can be proven that the spectral norm ||Q||_2 satisfies:
||Q||_2 = 1/sin θ    (5)
where θ is the angle between the range and the nullspace of Q, defined by cos θ = max v^T u, with u and v unit vectors from the range and the nullspace of Q respectively.
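A quick numerical illustration (our own sketch, not part of the slides; the sizes and random test data are arbitrary) of equation (4) and property (5):

    import numpy as np

    rng = np.random.default_rng(0)
    n, q = 6, 2
    U = rng.standard_normal((n, q))    # columns span S_U, the range of Q
    V = rng.standard_normal((n, q))    # columns fix the direction of projection

    # oblique projector Q = U (V^T U)^{-1} V^T, cf. equation (4)
    Q = U @ np.linalg.solve(V.T @ U, V.T)

    print(np.allclose(Q @ Q, Q))    # idempotence, cf. slide 9
    print(np.linalg.norm(Q, 2))     # spectral norm, equals 1/sin(theta) by (5)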

9 IDEMPOTENT PROJECTORS
For the projection operator which projects parallel to the yz plane, onto the x axis:
||Q||_2 = sup_{x ≠ 0} ||Qx||_2 / ||x||_2 = ||Q [1 0 0]^T||_2 = 1 = 1/sin(θ),    θ = π/2    (6)
It is readily seen that a matrix Q = U(V^T U)^{-1} V^T is idempotent, i.e. Q^2 = Q, from:
U(V^T U)^{-1} V^T U(V^T U)^{-1} V^T = U(V^T U)^{-1} V^T
Q has one and zero as its only eigenvalues. Thus the range and the nullspace of Q form complementary subspaces, while
dim(R(Q)) + dim(N(Q)) = n    (7)

10 COMPLEX OPERATORS
Now consider the following operator, with A a complex matrix and V and U real:
Q_A = U(V^T A U)^{-1} V^T A    (8)
This operator is also idempotent, and its range and nullspace again form complementary subspaces in C^n, a complex vector space where the scalar product
a · b = a^H b    (9)
applies.

11 IDEMPOTENTS RELATED TO ROM
We suppose that V^T A U is nonsingular. This idempotent can consequently be written as:
Q_A = U(X^H U)^{-1} X^H    (10)
where X^H = V^T A. Q_A is the operator projecting onto colspan(U) and parallel to colspan(X). It is an oblique projector which we will use in order to perform a model order reduction.

12 HIGHER ORDER TRANSFER FUNCTIONS
Linear systems of order n:
Σ_i P_i (d/dt)^i x = B u,    y = L^T x
P(s) = Σ_i P_i s^i is of order n and has a specific structure; B and L represent the input and the output respectively.
Preservation of structure: symmetry, positive definiteness, ...
F(s) = L^T P(s)^{-1} B    (11)
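A minimal sketch of evaluating (11), assuming the polynomial structure P(s) = Σ_i P_i s^i stated above; the helper name and the random toy data are ours:

    import numpy as np

    def transfer(P_coeffs, B, L, s):
        # F(s) = L^T P(s)^{-1} B with P(s) = sum_i P_i s^i, cf. (11)
        P_s = sum(P_i * (s ** i) for i, P_i in enumerate(P_coeffs))
        return L.T @ np.linalg.solve(P_s, B)

    rng = np.random.default_rng(1)
    n = 5
    P_coeffs = [rng.standard_normal((n, n)) for _ in range(3)]   # P_0, P_1, P_2
    B, L = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
    print(transfer(P_coeffs, B, L, s=2j * np.pi * 1e9))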

13 INFORMATION LOSS
- Maintain the essential information of the original system
- N × q matrices V, W which conserve the essential properties
- Some specific information will always be lost by projection
- Some information is much more essential than other information
- Frequency dependent behavior is important
- Numerical issues: orthogonality

14 TWO MAJOR SUBCLASSES OF ROM METHODS
(1) Expand H(s) in orthogonal basis functions
- Projection matrix is constructed by weighting H(s) with basis functions
- Nearly-exact moment matching, determined by the projection angle
- Higher order systems
- Structure can be preserved
(2) Application of a Möbius transform
- Projection matrix is constructed from a Krylov series
- Exact moment matching can be proven
- First order systems

15 (1) ORTHOGONAL BASIS
In order to obtain the matrices V and W we expand P(s)^{-1} B in a complete and orthonormal basis γ_k(s). If we transpose the reduced and original transfer functions, a completely analogous reasoning follows; in that case, B and L are interchanged, as are V and W.
P(s)^{-1} B = Σ_{k=0}^{r-1} k_k γ_k(s) + R_r(s)    (12)
P(s)^{-T} L = Σ_{k=0}^{r-1} l_k γ_k(s) + R_l(s)    (13)

16 PROJECTION VECTORS FROM SCALAR PRODUCT
If the γ_k(s) form an orthonormal basis with respect to the scalar product
⟨f, g⟩ = (1/2π) ∫ \overline{f(iω)} g(iω) dω    (14)
it is possible to calculate the coefficients k_k and l_k by taking the L_2 scalar product of both sides of (12) and (13) with the γ_k(s):
k_k = (1/2π) ∫ P(iω)^{-1} B \overline{γ_k(iω)} dω
l_k = (1/2π) ∫ P(iω)^{-T} L \overline{γ_k(iω)} dω    (15)

17 LEFT AND RIGHT SPACES
Next, the N × q matrices K_r and K_l are defined as
K_r = [k_0, k_1, ..., k_{r-1}],    K_l = [l_0, l_1, ..., l_{r-1}]    (16)
The columns of these two matrices K_r and K_l span the right subspace and the left subspace respectively. In specific cases, K_r and K_l can be Krylov matrices, i.e. they can be generated by stacking the columns A^r R, (r = 0, ..., q-1).

18 LEFT AND RIGHT SPACES cont.
If K_l, K_r are such that det(K_l^T K_r) ≠ 0, there exists an idempotent Q such that
Q K_r = K_r,    Q^T K_l = K_l,    Q = V W^T,    W^T V = I_q    (17)
Now, supposing that K_l^T K_r is nonsingular, let us observe Q_I = V W^T, with W^T V = I, I being the identity matrix, such that Q_I K_r = K_r.
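One simple (by no means unique) way to realize the factorization Q_I = V W^T with W^T V = I_q numerically, assuming K_l^T K_r is nonsingular:

    import numpy as np

    def biorthogonal_factors(Kr, Kl):
        # take V = K_r and W = K_l (K_r^T K_l)^{-1}; then W^T V = I, cf. (17)
        M = Kl.T @ Kr
        return Kr, Kl @ np.linalg.inv(M).T

    rng = np.random.default_rng(2)
    Kr, Kl = rng.standard_normal((8, 3)), rng.standard_normal((8, 3))
    V, W = biorthogonal_factors(Kr, Kl)
    Q = V @ W.T
    print(np.allclose(W.T @ V, np.eye(3)), np.allclose(Q @ Kr, Kr))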

19 CLOSENESS OF PROJECTIONS
This implies
Q_I k_k = k_k,    k = 0, ..., r-1    (18)
while Q_{P(s)} Q_I = Q_I. Multiplication of the first equation of (12) with Q_{P(s)}:
V (W^T P(s) V)^{-1} W^T P(s) P(s)^{-1} B = V (W^T P(s) V)^{-1} W^T B = Σ_{k=0}^{r-1} k_k γ_k(s) + Q_{P(s)} R_r(s)    (19)

20 CLOSE EXPANSIONS
After left-multiplying (19) with L^T, we write down the following expansions for the transfer functions:
F(s) = Σ_{k=0}^{r-1} L^T k_k γ_k(s) + L^T R_r(s)    (20)
F_R(s) = Σ_{k=0}^{r-1} L^T k_k γ_k(s) + L^T Q_{P(s)} R_r(s)    (21)
Conditions for closeness, i.e. for F(s) and F_R(s) to have approximately the first r coefficients of their {γ_k(s)} expansion in common:
- R_r is small enough
- the angle θ associated with the idempotent Q_{P(s)} is close to π/2

21 THE SYMMETRIC CASE
In the symmetric case P(s) = P(s)^T, L = B, it is clear that we have k_k = l_k and K_r = K_l, implying W = V and Q_I = Q_I^T = V V^T, which is then an orthogonal projector, and θ = π/2. In that case the reduced order transfer function simplifies to
F_R(s) = L^T V (V^T P(s) V)^{-1} V^T B    (22)

22 NEARLY ORTHOGONAL
The idempotents which can be written as Q_1 = G(H^H P G)^{-1} H^H P are always close to orthogonal projections. Consider the orthogonal projector Q_2 = G(H^H G)^{-1} H^H. Then:
Q_1^2 = Q_1
Q_1 Q_2 = G(H^H P G)^{-1} H^H P G(H^H G)^{-1} H^H = Q_2
Q_2^2 = Q_2
Q_2 Q_1 = G(H^H G)^{-1} H^H G(H^H P G)^{-1} H^H P = Q_1    (23)
from which we deduce:
(Q_2 - Q_1)^2 = Q_1^2 + Q_2^2 - Q_2 Q_1 - Q_1 Q_2 = 0    (24)
i.e. Q_2 - Q_1 is nilpotent.

23 (2) FIRST ORDER SYSTEMS
- The K and L matrices are built from a matrix series
- Important collection of reduction algorithms
- Krylov techniques interrelated by a Möbius transform
- Exact moment matching
Reduction of first order systems: consider the following system description, where P(s) is first order:
C ẋ = G x + B u,    y = L^T x    (25)

24 TRANSFER FUNCTION (FIRST ORDER)
The transfer matrix belonging to (25), describing the input-output relation in the Laplace domain, is given by:
H(s) = y(s)/u(s) = L^T (G + sC)^{-1} B    (26)
and for the reduced system we are looking for, the transfer matrix reads:
H_R(s) = y(s)/u(s) = L_R^T (G_R + sC_R)^{-1} B_R    (27)

25 THE MÖBIUS TRANSFORM
This subset of ROM techniques under scrutiny is based on the transform:
s = (aσ + b)/(cσ + d)    ⟺    σ = (sd - b)/(a - sc)    (28)
where a, b, c, d are chosen such that the related determinant satisfies:
Δ = ad - bc ≠ 0    (29)
This transform is known as a bilinear or Möbius transformation; it has the property of mapping the imaginary axis onto the unit circle in the complex plane.

26 TRANSFER FUNCTION IN TRANSFORMED DOMAIN
We consequently apply this transform to the original transfer function:
H(σ) = (cσ + d) L^T (I - σA)^{-1} R    (30)
and to the reduced transfer function:
H_R(σ) = (cσ + d) L_R^T (I - σA_R)^{-1} R_R    (31)
where the new matrices A and R are related to C, G and B as:
A = (dG + bC)^{-1} (cG + aC)    (32)
R = (dG + bC)^{-1} B    (33)
with similar expressions holding for A_R and R_R.
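A minimal numpy sketch of (32)-(33) (the helper name moebius_AR is ours; an LU solve stands in for the explicit inverse):

    import numpy as np

    def moebius_AR(a, b, c, d, C, G, B):
        # A = (dG + bC)^{-1}(cG + aC), R = (dG + bC)^{-1} B, cf. (32)-(33)
        assert a * d - b * c != 0, 'degenerate transform, cf. (29)'
        M = d * G + b * C
        return np.linalg.solve(M, c * G + a * C), np.linalg.solve(M, B)

Note that the later slides use either the (I - σA) or the (I + σA) convention in the expansion; the sketch follows (32)-(33) literally.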

27 SPECTRAL RADIUS OF EXPANSION
(I - σA)^{-1} = Σ_{i=0}^{∞} (σA)^i    (34)
valid for |σ| < 1/ρ(A), with
ρ(A) = max{|λ(A)|}    (35)
where λ(A) is any eigenvalue of the matrix A. Expanding around σ = 0:
H(σ) = (cσ + d) L^T (Σ_{i=0}^{∞} A^i σ^i) R = (cσ + d) Σ_{i=0}^{∞} L^T A^i R σ^i    (36)

28 TRANSFORMED BASIS FUNCTIONS
The basis functions γ_k(s):
γ_k(s) = (c (sd - b)/(a - sc) + d) ((sd - b)/(a - sc))^k = (ad - bc) (sd - b)^k / (a - sc)^{k+1}    (37)
These expansion functions are not necessarily orthogonal. The expansion coefficients k_k and l_k read:
k_k = A^k R,    l_k = (A^T)^k L    (38)

29 LEFT AND RIGHT KRYLOV SPACES
The right Krylov subspace is thus determined by:
K(A, R, q) = colspan[R, AR, A^2 R, ..., A^{q-1} R]    (39)
and the left Krylov subspace is defined as:
K(A^T, L, q) = colspan[L, A^T L, (A^T)^2 L, ..., (A^T)^{q-1} L]    (40)
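A naive construction of an orthonormal basis for the right Krylov space (39) (illustration only; forming the powers A^i R explicitly is numerically poor, which is why the Arnoldi and Lanczos processes discussed below orthogonalize as they go):

    import numpy as np

    def krylov_basis(A, R, q):
        # orthonormal basis for colspan[R, AR, ..., A^{q-1} R], cf. (39)
        blocks = [R]
        for _ in range(q - 1):
            blocks.append(A @ blocks[-1])
        V, _ = np.linalg.qr(np.hstack(blocks))
        return V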

30 MOMENT MATCHING
The traditional way of writing the expansion of the transfer function is:
H(σ) = (cσ + d) Σ_{i=0}^{∞} M_i σ^i    (41)
The scalar coefficients M_i are called the moments. The transfer matrix of the reduced system becomes:
H_R(σ) = (cσ + d) Σ_{i=0}^{∞} L_R^T A_R^i R_R σ^i    (42)
       = (cσ + d) Σ_{i=0}^{∞} M_{R,i} σ^i    (43)

31 MOMENT MATCHING contd.
Now we impose that the first q moments of the reduced system match the moments of the original system:
M_i = L^T A^i R = L_R^T A_R^i R_R = M_{R,i},    i = 0, ..., q-1    (44)
so that
H(σ) = H_R(σ) + (cσ + d) O(σ^q)    (45)
Remaining sources of discrepancy: numerical errors and frequency dependent errors.
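The moments on either side of (44) can be computed without forming the powers A^i explicitly, by repeated matrix-vector products (a sketch under the same notation):

    import numpy as np

    def moments(L, A, R, q):
        # first q moments M_i = L^T A^i R of the expansion (41)
        out, v = [], R.copy()
        for _ in range(q):
            out.append(L.T @ v)
            v = A @ v
        return out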

32 THE REDUCED FIRST ORDER SYSTEM
Stacking the column vectors v_i and w_i into matrices V and W:
P_R(s) = W^T P(s) V,    B_R = W^T B,    L_R = V^T L    (46)
in the case one employs both the left and right Krylov spaces, and
P_R(s) = V^T P(s) V,    B_R = V^T B,    L_R = V^T L    (47)
when only the right Krylov space is used.
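A sketch of this projection step for P(s) = G + sC; passing W = V gives the one-sided variant (47):

    import numpy as np

    def reduce_first_order(G, C, B, L, V, W=None):
        # G_R, C_R, B_R, L_R following (46)-(47), for use in (27)
        W = V if W is None else W
        return W.T @ G @ V, W.T @ C @ V, W.T @ B, V.T @ L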

33 THE FIRST ORDER PROJECTOR
We obtain the transfer function:
H_R(s) = L^T V (W^T P(s) V)^{-1} W^T B = L^T Q_{P(s)} P(s)^{-1} B    (48)
- Krylov subspace reduced order modeling algorithms constitute a subclass of projection methods
- The matching of the first q moments is exact, in contrast to the more general projection methods

34 LAGUERRE-SVD
σ = (s - α)/(s + α)    (49)
where α is real; this corresponds to the transformation parameters a = α, b = α, c = -1 and d = 1. The resulting A and R matrices then read:
A = (αC + G)^{-1} (αC - G),    R = (αC + G)^{-1} B    (50)
The change of variables maps the s-domain frequency axis onto the unit circle. The transfer function is written as:
H(s) = L^T (G + sC)^{-1} B = (√(2α)/(s + α)) Σ_{i=0}^{∞} M_i ((s - α)/(s + α))^i    (51)
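With the parameters (a, b, c, d) = (α, α, -1, 1) the determinant (29) equals Δ = 2α ≠ 0, and the generic moebius_AR sketch above specializes to (50); as a self-contained helper:

    import numpy as np

    def laguerre_AR(G, C, B, alpha):
        # A = (alpha*C + G)^{-1}(alpha*C - G), R = (alpha*C + G)^{-1} B, cf. (50)
        M = alpha * C + G
        return np.linalg.solve(M, alpha * C - G), np.linalg.solve(M, B)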

35 LAGUERRE BASIS
We observe that the γ_i(s) are in fact the Laguerre functions, which are written in the Laplace domain as:
γ_i(s) = (√(2α)/(s + α)) ((s - α)/(s + α))^i    (52)
and which read in the time domain:
φ_i^α(t) = √(2α) e^{-αt} l_i(2αt)    (53)
where α is the scaling parameter and l_i(t) is the Laguerre polynomial
l_i(t) = (e^t / i!) (d^i/dt^i)(e^{-t} t^i)    (54)
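The time-domain functions (53) can be evaluated off the shelf; a sketch using scipy's Laguerre polynomial evaluator, with a crude Riemann sum to spot-check orthonormality on [0, ∞):

    import numpy as np
    from scipy.special import eval_laguerre

    def phi(i, t, alpha):
        # Laguerre function (53): sqrt(2*alpha) * exp(-alpha*t) * l_i(2*alpha*t)
        return np.sqrt(2 * alpha) * np.exp(-alpha * t) * eval_laguerre(i, 2 * alpha * t)

    t = np.linspace(0, 60, 600001)
    dt = t[1] - t[0]
    for i, j in [(0, 0), (1, 1), (2, 5)]:
        print(i, j, round(float(np.sum(phi(i, t, 1.0) * phi(j, t, 1.0)) * dt), 4))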

36 LAGUERRE MOMENTS
In terms of moments, we obtain:
H(σ) = (1 - σ) L^T ((αC + G) + σ(αC - G))^{-1} B    (55)
     = ((1 - σ)/√(2α)) Σ_{i=0}^{∞} M_i σ^i    (56)
An optimal estimate for the Laguerre parameter α is motivated, this value being:
α = 2π f_max    (57)
Note that γ_i(s) is the product of the low pass filter √(2α)/(s + α), with 3 dB bandwidth f_max, and the all pass filter ((s - α)/(s + α))^i.

37 LAGUERRE EXPANSION
As a consequence, advantageous frequency properties emerge. We can also rewrite the transfer function as:
H(s) = L^T (sI + C^{-1}G)^{-1} C^{-1}B = Σ_{n=0}^{∞} L^T γ_n(C^{-1}G) C^{-1}B γ_n(s)    (58)
The summation in (58) is a consequence of the identity:
1/(s + u) = Σ_{n=0}^{∞} γ_n(s) γ_n(u),    ℜ(s), ℜ(u) ≥ 0    (59)

38 KRYLOV MATRIX
Defining the column vectors k_i as:
k_i = γ_i(C^{-1}G) C^{-1}B = (1/2π) ∫ (iωC + G)^{-1} B \overline{γ_i(iω)} dω,    i = 0, ..., q-1    (60)
we consequently construct the matrix
K = [k_0, k_1, ..., k_{q-1}]    (61)

39 BANDLIMITED LAGUERRE
Find a new set of basis functions, orthonormal over the band B = [-β, -α] ∪ [α, β], with β > α > 0. Construct a frequency transform which maps the orthogonal Laguerre basis onto the desired new basis:
k_i = (1/2π) ∫_B (iωC + G)^{-1} B \overline{γ_i(iω)} dω,    i = 0, ..., q-1    (62)
where the γ_i are given by:
γ_i(s) = τ(s) φ_i(η(s)),    i = 0, 1, ...    (63)

40 FREQUENCY TRANSFORM
The φ_i are the Laguerre expansion functions discussed in the previous section, with
η(s) = β^2 s (s^2 + α^2)/(s^2 + β^2),    τ(s) = β √((s^2 + β^2 + 2αβ)(s^2 + 3α^2 + αβ)) / (s (s^2 + β^2))    (64)
The equations (64) emerge as the result of the coordinate transform:
ω = ζ(ν) := β^2 ν (ν^2 - α^2)/(β^2 - ν^2),    α ≤ ν ≤ β    (65)

41 BANDLIMITED PROJECTION MATRIX
As these basis functions cannot be derived from a particular Möbius transform, they yield a projection matrix which is not a Krylov matrix anymore:
K = [k_0, k_1, ..., k_{q-1}]    (66)
The projection matrix U consists of a basis for colspan(K) and can, e.g., be obtained from an SVD,
K = U Σ V^T    (67)
which ultimately provides the bandlimited reduced order model
F_R(s) = L^T U (s U^T C U + U^T G U)^{-1} U^T B    (68)
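A sketch of (67)-(68); the SVD simultaneously orthonormalizes colspan(K) and exposes its numerical rank:

    import numpy as np

    def bandlimited_rom(K, G, C, B, L, tol=1e-10):
        # projection basis U from the SVD (67), reduced matrices of (68)
        U, svals, _ = np.linalg.svd(K, full_matrices=False)
        U = U[:, svals > tol * svals[0]]    # keep numerically independent directions
        return U.T @ G @ U, U.T @ C @ U, U.T @ B, U.T @ L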

42 EXAMPLE : A TRANSMISSION LINE
Comparison of Laguerre-SVD and Bandlimited Laguerre reductions.
[Figure: Re[Z(f)] (Ω) versus f (Hz, scale x 10^9), comparing Laguerre-SVD (q = 90) with Bandlimited reductions (q = 90, m = 10 and m = 20); the simulation bandwidth is indicated.]

43 EXAMPLE : A PATCH ANTENNA
Bandlimited reduction of a dedicated patch antenna system with Bandlimited Laguerre.
[Figure: magnitude versus f (Hz), comparing direct inversion with the Bandlimited reduction; the simulation bandwidth is indicated.]

44 KAUTZ BASIS FUNCTIONS
Generalizing the bandlimited Laguerre functions:
φ_{2n}(s) = √(2τ) (s + √(τ^2 + σ^2)) ((s - τ)^2 + σ^2)^n / ((s + τ)^2 + σ^2)^{n+1},    n = 0, 1, ...
φ_{2n+1}(s) = √(2τ) (s - √(τ^2 + σ^2)) ((s - τ)^2 + σ^2)^n / ((s + τ)^2 + σ^2)^{n+1},    n = 0, 1, ...    (69)
which is the two-parameter Kautz basis. This basis is suited for resonant systems:
|φ_n(iω)|^2 =: M(ω) = 2τ (ω^2 + τ^2 + σ^2) / ((τ^2 + σ^2 - ω^2)^2 + 4ω^2 τ^2)    (70)

45 BANDLIMITED KAUTZ
Consider the functions
γ_n(s) = ρ(s) φ_n(η(s)),    n = 0, 1, ...    (71)
where
η(s) = β^2 s (s^2 + α^2)/(s^2 + β^2)    (72)
and
ρ(s) = β √((s^2 + β^2 + 2αβ)(s^2 + 3α^2 + αβ)) / (s (s^2 + β^2))    (73)
satisfying the narrowband orthonormality conditions.

46 ERROR CURVES COMPARED
Bandlimited Kautz compared against Multipoint Padé.
[Figure: L1 error norms versus f (GHz, scale x 10^9), for several reduction orders q.]

47 CONVERGENCE COMPARED
Bandlimited Laguerre (full) - Kautz-symmetric (dotted) - Kautz-resonance (dash-dotted).
[Figure: relative error versus reduction order q, for several parameter settings, e.g. γ = 1e8, σ = 5e9 and variants with γ = γ_BL.]

48 HIGH FREQUENCY KRYLOV TYPE REDUCTIONS
The Möbius transform σ = 1/s (a = 0, b = 1, c = 1, d = 0) can be used advantageously. The transfer function reads:
H(σ) = σ L^T (I + σA)^{-1} R    (74)
and the associated matrices A and R are:
A = C^{-1} G,    R = C^{-1} B    (75)
The related γ_k(s) are
γ_k(s) = 1/s^{k+1}    (76)

49 PADÉ VIA LANCZOS
The relevant Möbius transform is defined by:
σ = s - s_0    (77)
which is obtained by substituting (a = 1, b = s_0, c = 0, d = 1) in (28). The associated matrices A and R become:
A = (G + s_0 C)^{-1} C,    R = (G + s_0 C)^{-1} B    (78)
and we need a Taylor expansion about s = s_0. The related γ_k(s) are:
γ_k(s) = (s - s_0)^k    (79)

50 LANCZOS PROJECTION
The Lanczos process tridiagonalizes A according to:
W^T A V = D T    (80)
Here, T is a tridiagonal matrix and
D = W^T V    (81)
The matrices W and V are found to be biorthogonal.
H(σ) = L^T (I - σA)^{-1} R    (82)
H_R(σ) = L^T V (W^T V - σ W^T A V)^{-1} W^T R    (83)
       = L^T V (I - σT)^{-1} D^{-1} W^T R    (84)

51 ARNOLDI
Krylov subspace method based on the Möbius transform σ = s - s_0. The associated A matrix is reduced to an upper Hessenberg matrix H_q while:
A V = V H_q,    V^T V = I    (85)
- Modified Gram-Schmidt orthogonalization (see the sketch below)
- Only one LU decomposition and q forward/backward substitutions are needed
- The process may suffer loss of orthogonality in V while H_q is constructed; ways to circumvent this problem have been found, e.g. the Implicitly Restarted version
- As an expansion point, s_0 = 2π f_max is usually chosen
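A minimal sketch of the Arnoldi iteration with modified Gram-Schmidt; in a ROM code the product A v is applied through the single LU factorization of (G + s_0 C) and forward/backward substitutions, rather than by forming A explicitly:

    import numpy as np

    def arnoldi(A, r, q):
        # builds V with V^T V = I and upper Hessenberg H with A V[:, :q] ~ V H, cf. (85)
        n = A.shape[0]
        V = np.zeros((n, q + 1))
        H = np.zeros((q + 1, q))
        V[:, 0] = r / np.linalg.norm(r)
        for j in range(q):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt sweep
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:             # breakdown: Krylov space is invariant
                return V[:, :j + 1], H[:j + 1, :j]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H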

52 PRIMA
σ = s    (86)
which is equivalent to substituting (a = 1, b = 0, c = 0, d = 1) in (28). The related A and R matrices then read:
A = G^{-1} C,    R = G^{-1} B    (87)
The describing equations of an RLC circuit read:
[C 0; 0 L] d/dt [v; i] = [-R -G; G^T 0] [v; i] + [B; 0] u,    y = B^T v    (88)
where R, C and L describe the resistors, capacitors and inductors respectively.
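PRIMA builds V by a block Arnoldi process on the matrices of (87); its reduction step is a one-sided congruence transform, sketched here, which preserves the symmetry and definiteness of the circuit blocks and thereby passivity:

    import numpy as np

    def prima_projection(G, C, B, V):
        # one-sided congruence transform: structure of G and C is preserved
        return V.T @ G @ V, V.T @ C @ V, V.T @ B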

53 CONCLUSIONS
- Similarities between different ROM methods have been highlighted
- Systems are seen as being obliquely projected onto a specific subspace
- Preservation of structure is an important issue
- Frequency dependent convergence behavior has been discussed
- An important subclass of first order reduction methods is determined by a Möbius transform
- Bandlimited reductions are efficient for large systems
- Resonant systems can be reduced faster with an appropriate choice of basis functions

54 Second Workshop on Advanced Computational Electromagnetics
Ghent, May 3-4, 2006 - Program

Wednesday, May 3, 2006
8h30  Welcome
9h00  Andreas Cangellaris, University of Illinois at Urbana-Champaign, USA: Model Order Reduction of Finite Element Models of Electromagnetic Systems Using Krylov Subspace Methods
10h30 Coffee Break
10h45 Dominique Lesselier, Supélec, France: 3-D Electromagnetic Inverse Scattering Methodologies with Emphasis on the Retrieval of Small Objects
12h15 Lunch
14h00 Rob Remis, Delft University of Technology, The Netherlands: Low- and High-Frequency Model-Order Reduction of Electromagnetic Fields
15h30 Coffee Break
15h45 Hendrik Rogier, Ghent University, Belgium: State-of-the-art Antenna Systems in Mobile Communications

Thursday, May 4, 2006
8h30  Welcome
9h00  Andreas Cangellaris, University of Illinois at Urbana-Champaign, USA: Comprehensive Electromagnetic Modeling of On-Chip Noise Generation and Coupling During High Speed Switching
10h30 Coffee Break
10h45 Davy Pissoort, Ghent University, Belgium: Fast and Accurate Modeling of Photonic Crystal Devices
12h15 Lunch
14h00 Tom Dhaene, University of Antwerp, Belgium: Electromagnetic-Based Scalable Metamodeling
15h30 Coffee Break
15h45 Luc Knockaert, Gunther Lippens, and Daniël De Zutter, Ghent University, Belgium: Towards a Classification of Projection-Based Model Order Reduction

55 Second Workshop On Advanced Computational Electromagnetics
Organized by:
Dr. D. Pissoort
Prof. D. De Zutter
Prof. F. Olyslager
Prof. A. Franchois
