Linear Hyperbolic Systems


Linear Hyperbolic Systems
Professor Dr E F Toro
Laboratory of Applied Mathematics, University of Trento, Italy
eleuterio.toro@unitn.it
October 8

We study some basic mathematical aspects of linear systems of hyperbolic equations in one space dimension. We illustrate the theory through the study, in complete detail, of simple examples with physical meaning, namely linearised models for shallow water flow and for blood flow.

Basic Theory

Consider the one-dimensional, time-dependent system of m linear hyperbolic equations with source terms

  ∂_t Q(x, t) + A ∂_x Q(x, t) = S(Q(x, t)).   (1)

Q: vector of unknowns, A: constant matrix of coefficients, S(Q): vector of source terms:

  Q = [q_1, ..., q_i, ..., q_m]^T,   A = (a_ij), i, j = 1, ..., m,   S(Q) = [s_1, ..., s_i, ..., s_m]^T.   (2)

Special case: the linear advection equation

  ∂_t q(x, t) + λ ∂_x q(x, t) = 0.   (3)

Eigenvalues and eigenvectors

Definition: The eigenvalues of system (1) are the roots λ_i of the characteristic polynomial

  P(λ) ≡ Det(A − λI) = 0.   (4)

I: m × m unit matrix, λ: a parameter. The eigenvalues λ_i are written in increasing order

  λ_1 ≤ λ_2 ≤ ... ≤ λ_i ≤ ... ≤ λ_{m−1} ≤ λ_m.   (5)

Definition: A right eigenvector R_i of A corresponding to λ_i is a column vector

  R_i = [r_1i, r_2i, ..., r_ii, ..., r_mi]^T   (6)

such that

  A R_i = λ_i R_i.   (7)

The m right eigenvectors corresponding to the eigenvalues (5) are

  R_1, R_2, ..., R_i, ..., R_{m−1}, R_m.   (8)

Definition: A left eigenvector L_i of A corresponding to λ_i is the row vector

  L_i = [l_i1, l_i2, ..., l_ii, ..., l_im]   (9)

such that

  L_i A = λ_i L_i.   (10)

The m eigenvalues (5) generate m corresponding left eigenvectors

  L_1, L_2, ..., L_i, ..., L_{m−1}, L_m.   (11)

Definition: Hyperbolic system. A system (1) is said to be hyperbolic if A has m real eigenvalues and a corresponding complete set of m linearly independent eigenvectors.

Note: For hyperbolicity, the eigenvalues are not required to be all distinct. What is important is that there is a complete set of linearly independent eigenvectors corresponding to the real eigenvalues.

More definitions

Strictly hyperbolic system. A hyperbolic system is said to be strictly hyperbolic if all eigenvalues of the system are distinct.

Weakly hyperbolic system. A system may have real but not distinct eigenvalues and still be hyperbolic if a complete set of linearly independent eigenvectors exists. However, if all eigenvalues are real but no complete set of linearly independent eigenvectors exists, then the system is called weakly hyperbolic.

Definition: Orthonormality of eigenvectors. The eigenvectors L_i and R_j are orthonormal if

  L_i R_j = 1 if i = j,   L_i R_j = 0 if i ≠ j.   (12)

Example: an abstract model system

Consider the homogeneous linear system with constant coefficients

  ∂_t [q_1(x, t), q_2(x, t)]^T + [[0, a], [b, 0]] ∂_x [q_1(x, t), q_2(x, t)]^T = [0, 0]^T,   (13)

with a and b two real numbers. We first find the eigenvalues:

  P(λ) ≡ Det(A − λI) = Det([[−λ, a], [b, −λ]]) = λ² − ab = 0   ⟹   λ_1 = −√(ab),   λ_2 = +√(ab).   (14)

The eigenvalues are real if a and b are real numbers of the same sign. If a = 0 or b = 0 then the eigenvalues are real but not distinct. If a and b are real, distinct from zero and of opposite sign, the eigenvalues are complex and the system is not hyperbolic.

Example: an abstract model system

Assume a right eigenvector [r_1, r_2]^T corresponding to an eigenvalue λ:

  [[0, a], [b, 0]] [r_1, r_2]^T = [λ r_1, λ r_2]^T,   (15)

from which we obtain

  a r_2 = λ r_1,   b r_1 = λ r_2.   (16)

The equations are not independent. We choose to use the first one:

  r_2 = (λ/a) r_1.   (17)

For λ = λ_1 = −√(ab) and setting r_1 = β_1 we obtain eigenvector R_1:

  r_1 = β_1,   r_2 = −(√(ab)/a) β_1 = −√(b/a) β_1.   (18)

For λ = λ_2 = +√(ab), setting r_1 = β_2 we obtain R_2 as

  r_1 = β_2,   r_2 = (√(ab)/a) β_2 = √(b/a) β_2.   (19)

Then the two right eigenvectors are

  R_1 = β_1 [1, −√(b/a)]^T,   R_2 = β_2 [1, √(b/a)]^T.   (20)

These vectors are linearly independent and thus system (13) is hyperbolic, provided a and b are two non-zero real numbers of equal sign. The system is also strictly hyperbolic, as λ_1 ≠ λ_2.

Exercise. Find the left eigenvectors corresponding to the two real eigenvalues (14).

Diagonalization and characteristic variables

Consider R = [R_1, ..., R_i, ..., R_m]: the matrix whose columns are the right eigenvectors, and Λ: the diagonal matrix formed by the eigenvalues:

  R = (r_ij), i, j = 1, ..., m,   Λ = diag(λ_1, ..., λ_i, ..., λ_m).   (21)

Proposition. If A is the coefficient matrix of a hyperbolic system (1) then

  A = R Λ R⁻¹   or   Λ = R⁻¹ A R.   (22)

In this case A is said to be diagonalisable and consequently system (1) is said to be diagonalisable. Proof (omitted).
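The diagonalisation (22) is easy to check numerically. Below is a minimal sketch (assuming Python with numpy, which are not part of the lecture) that builds the matrix of the abstract system (13) for the illustrative choice a = 2, b = 1, computes its eigenvalues and right eigenvectors, and verifies that A = R Λ R⁻¹ to machine precision.

```python
# Numerical check of A = R Lambda R^{-1}, equation (22), for the abstract
# 2x2 system (13); the values a = 2, b = 1 are illustrative choices only.
import numpy as np

a, b = 2.0, 1.0                      # same sign => hyperbolic
A = np.array([[0.0, a],
              [b, 0.0]])

lam, R = np.linalg.eig(A)            # eigenvalues and right eigenvectors
order = np.argsort(lam)              # enforce the ordering lambda_1 <= lambda_2
lam, R = lam[order], R[:, order]

Lambda = np.diag(lam)
A_rebuilt = R @ Lambda @ np.linalg.inv(R)

print("eigenvalues:", lam)                         # approx. -sqrt(ab), +sqrt(ab)
print("max |A - R Lambda R^-1|:", np.abs(A - A_rebuilt).max())
```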

Characteristic variables

The existence of R⁻¹ makes it possible to define the characteristic variables C = [c_1, c_2, ..., c_m]^T via

  C = R⁻¹ Q   ⟺   Q = R C.   (23)

Calculating the partial derivatives (constant coefficients) ∂_t Q = R ∂_t C, ∂_x Q = R ∂_x C and substituting these expressions directly into equation (1) gives R ∂_t C + A R ∂_x C = S. Multiplication of this equation from the left by R⁻¹ and use of (22) gives

  ∂_t C + Λ ∂_x C = Ŝ,   Ŝ = R⁻¹ S.   (24)

This is called the canonical form or characteristic form of system (1). Assuming Ŝ = 0 and writing the equations in full, we have

  ∂_t [c_1, ..., c_i, ..., c_m]^T + diag(λ_1, ..., λ_i, ..., λ_m) ∂_x [c_1, ..., c_i, ..., c_m]^T = [0, ..., 0]^T.   (25)

Clearly, the i-th equation of this system is of the form

  ∂_t c_i + λ_i ∂_x c_i = 0,   i = 1, ..., m,   (26)

and involves the single unknown c_i(x, t), which is decoupled from the remaining variables. Moreover, this equation is identical to the linear advection equation (3), with characteristic speed λ_i. We have m decoupled equations, each one defining a characteristic curve.

Thus, at any chosen point (x̂, t̂) in the x-t half-plane there are m characteristic curves x_i(t) passing through (x̂, t̂) and satisfying the m ODEs

  dx_i/dt = λ_i,   for i = 1, ..., m.   (27)

Fig. 1. The solution at a point (x̂, t̂) depends on the initial condition at the foot x_i^(0) of each characteristic x_i(t) = x_i^(0) + λ_i t.

Each characteristic curve x_i(t) = x_i^(0) + λ_i t intersects the x-axis at the point x_i^(0), which is the foot of the characteristic passing through the point (x̂, t̂). The point x_i^(0) is given as

  x_i^(0) = x̂ − λ_i t̂,   for i = 1, 2, ..., m.   (28)

Each equation (26) is just a linear advection equation whose solution at (x̂, t̂) is given by

  c_i(x̂, t̂) = c_i^(0)(x_i^(0)) = c_i^(0)(x̂ − λ_i t̂),   for i = 1, 2, ..., m,   (29)

where c_i^(0)(x) is the initial condition at the initial time. The initial conditions for the characteristic variables are obtained from the transformation (23) applied to the initial condition Q(x, 0). Given the assumed order (5) of the distinct eigenvalues, the following inequalities are satisfied:

  x_m^(0) < x_{m−1}^(0) < ... < x_2^(0) < x_1^(0).   (30)

Definition: Domain of dependence. The interval [x_m^(0), x_1^(0)] is called the domain of dependence of the point (x̂, t̂). See Fig. 1.

The solution at (x̂, t̂) depends exclusively on initial data at points within the interval [x_m^(0), x_1^(0)]. This is a distinguishing feature of hyperbolic equations. The initial data outside the domain of dependence can be changed in any manner we wish, but this will not affect the solution at the point (x̂, t̂).

The general initial-value problem

Proposition: The solution of the general IVP for the linear homogeneous hyperbolic system

  PDEs: ∂_t Q + A ∂_x Q = 0,   −∞ < x < ∞,  t > 0,
  IC:   Q(x, 0) = Q^(0)(x)   (31)

is given by

  Q(x, t) = Σ_{i=1}^{m} c_i(x, t) R_i.   (32)

The coefficient c_i(x, t) of the right eigenvector R_i is a characteristic variable.

Proof: We find the general solution in terms of the characteristic variables C by solving the IVP

  PDEs: ∂_t C + Λ ∂_x C = 0,   −∞ < x < ∞,  t > 0,
  IC:   C(x, 0) = C^(0)(x).   (33)

Here the initial condition is

  C^(0)(x) = [c_1^(0)(x), ..., c_i^(0)(x), ..., c_m^(0)(x)]^T = R⁻¹ Q^(0)(x),   (34)

where Q^(0)(x) denotes the initial condition of the original problem,

  Q^(0)(x) = [q_1^(0)(x), ..., q_i^(0)(x), ..., q_m^(0)(x)]^T.

The solution of IVP (33) is direct. For each component c_i(x, t) we have

  c_i(x, t) = c_i^(0)(x − λ_i t),   for i = 1, ..., m.   (35)

In terms of the original variables Q the solution is found by transforming back, that is Q = R C, or

  q_1 = c_1(x, t) r_11 + c_2(x, t) r_12 + ... + c_m(x, t) r_1m,
  ...
  q_i = c_1(x, t) r_i1 + c_2(x, t) r_i2 + ... + c_m(x, t) r_im,
  ...
  q_m = c_1(x, t) r_m1 + c_2(x, t) r_m2 + ... + c_m(x, t) r_mm,

or, in column form,

  [q_1, q_2, ..., q_m]^T = c_1(x, t) [r_11, r_21, ..., r_m1]^T + c_2(x, t) [r_12, r_22, ..., r_m2]^T + ... + c_m(x, t) [r_1m, r_2m, ..., r_mm]^T.

More succinctly,

  Q(x, t) = Σ_{i=1}^{m} c_i(x, t) R_i,   (36)

and the sought result follows.

Remarks:

The function c_i(x, t) is the coefficient of R_i in an eigenvector expansion of the solution vector Q(x, t).

Given a point (x, t) in the x-t plane, the solution Q(x, t) depends only on the initial data at the m points x_i^(0) = x − λ_i t. See Fig. 1. These points are the intersections of the characteristics of speed λ_i with the x-axis.

Solution (32) represents a superposition of m waves of unchanged shape c_i^(0)(x) R_i, each propagated with speed λ_i.
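As a hedged illustration of formulas (32)-(35), the following sketch (Python with numpy assumed; the 2×2 matrix and the Gaussian initial condition are invented for the example, not taken from the lecture) expands the initial condition in characteristic variables, advects each component with its own speed λ_i and superposes the results.

```python
# Exact solution of a linear hyperbolic IVP by eigenvector expansion,
# equations (32) and (35). Matrix and initial data are illustrative.
import numpy as np

A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
lam, R = np.linalg.eig(A)
R_inv = np.linalg.inv(R)

def Q0(x):
    """Initial condition Q^(0)(x); here a Gaussian pulse in q1 only."""
    return np.array([np.exp(-x**2), 0.0 * x])

def Q_exact(x, t):
    """Q(x,t) = sum_i c_i^(0)(x - lambda_i t) R_i."""
    Q = np.zeros((2,) + np.shape(x))
    for i in range(2):
        c0_i = (R_inv @ Q0(x - lam[i] * t))[i]   # c_i^(0)(x - lambda_i t)
        Q += np.outer(R[:, i], c0_i).reshape(Q.shape)
    return Q

x = np.linspace(-5.0, 5.0, 401)
print(Q_exact(x, 1.0).shape)   # (2, 401): q1 and q2 profiles at t = 1
```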

The Riemann problem

Proposition: The solution of the Riemann problem

  PDEs: ∂_t Q + A ∂_x Q = 0,   −∞ < x < ∞,  t > 0,
  IC:   Q(x, 0) = Q^(0)(x) = Q_L if x < 0,  Q_R if x > 0,   (37)

with Q_L and Q_R two constant vectors, is given by

  Q(x, t) = Σ_{i=1}^{I} c_iR R_i + Σ_{i=I+1}^{m} c_iL R_i,   (38)

where

  Σ_{i=1}^{m} c_iL R_i = Q_L,   Σ_{i=1}^{m} c_iR R_i = Q_R   (39)

and I = I(x, t) is the maximum value of the index i for which x − λ_i t > 0.

Remarks on the solution of the Riemann problem

The initial data consist of two constant vectors Q_L and Q_R, separated by a discontinuity at x = 0. This is a special case of IVP (31).

The structure of the solution of the Riemann problem (37) is depicted in Fig. 2, in the x-t plane. The solution consists of a fan of m waves emanating from the origin, one wave for each eigenvalue λ_i. The speed of wave i is the eigenvalue λ_i.

These m waves divide the x-t half-plane into m + 1 constant regions

  R_i = { (x, t) : −∞ < x < ∞;  t ≥ 0;  λ_i < x/t < λ_{i+1} },   (40)

for i = 0, 1, ..., m.

Solving the Riemann problem means finding constant values for Q in the regions R_i, for i = 1, ..., m − 1.

Fig. 2. Structure of the solution of the Riemann problem. There are m waves x/t = λ_1, ..., x/t = λ_m that divide the half x-t plane into m + 1 regions (wedges) R_i, with i = 0, 1, ..., m; region R_0 carries the data state Q_L and region R_m the data state Q_R, with the initial discontinuity at x = 0.

Proof: First we recall the following notation:

  Q_L = [q_1L, ..., q_iL, ..., q_mL]^T,   Q_R = [q_1R, ..., q_iR, ..., q_mR]^T,
  C_L = [c_1L, ..., c_iL, ..., c_mL]^T,   C_R = [c_1R, ..., c_iR, ..., c_mR]^T,
  C_L = R⁻¹ Q_L,   C_R = R⁻¹ Q_R,   (41)

where C_L, C_R are the characteristic variables.

The form of the sought solution is the same as that for the general IVP, see (36), for which we only need to find the coefficients c_i(x, t). Thus we need to solve the associated Riemann problem for the characteristic variables with initial conditions C_L, C_R. To this end one solves the Riemann problem for each component c_i(x, t). The required data come from solving the following two linear systems:

  Q_L = Σ_{i=1}^{m} c_iL R_i = R C_L,   Q_R = Σ_{i=1}^{m} c_iR R_i = R C_R.   (42)

The two linear systems for the coefficients C_L and C_R, written in full, read

  (r_ij) [c_1L, ..., c_iL, ..., c_mL]^T = [q_1L, ..., q_iL, ..., q_mL]^T,
  (r_ij) [c_1R, ..., c_iR, ..., c_mR]^T = [q_1R, ..., q_iR, ..., q_mR]^T,   (43)

where (r_ij) denotes the full m × m matrix R of right eigenvectors. These two linear systems for C_L, C_R are easily solved using standard methods.

In terms of the characteristic variables, for i = 1, ..., m, we have

  PDE: ∂_t c_i + λ_i ∂_x c_i = 0,   −∞ < x < ∞,  t > 0,
  IC:  c_i^(0)(x) = c_iL if x < 0,  c_iR if x > 0.   (44)

The solutions of these scalar Riemann problems are given by

  c_i(x, t) = c_i^(0)(x − λ_i t) = c_iL if x − λ_i t < 0 (x/t < λ_i),  c_iR if x − λ_i t > 0 (x/t > λ_i).   (45)

For a given (x, t) there is an integer I(x, t) and an associated eigenvalue λ_I such that (x, t) belongs to region R_I, that is λ_I < x/t < λ_{I+1}. See Fig. 3. Then we have

  x − λ_i t > 0 for i = 1, 2, ..., I  (coefficients c_iR),
  x − λ_i t < 0 for i = I + 1, I + 2, ..., m  (coefficients c_iL).   (46)

If I = I(x, t) is the maximum value of i for which x − λ_i t > 0, then

  Q(x, t) = Σ_{i=1}^{I} c_iR R_i + Σ_{i=I+1}^{m} c_iL R_i   (47)

and the claimed result is thus proved.

Corollary. The solution of the Riemann problem may be expressed as

  Q(x, t) = Q_L + Σ_{i=1}^{I} δ_i R_i = Q_R − Σ_{i=I+1}^{m} δ_i R_i,   (48)

where

  ΔC = [δ_1, ..., δ_i, ..., δ_m]^T,   R ΔC = ΔQ = Q_R − Q_L   and   Σ_{i=1}^{m} δ_i R_i = ΔQ.   (49)

This form is more convenient: we only need to solve one linear system. Proof: left as an exercise.
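The corollary (48) translates directly into a small solver: compute δ = R⁻¹(Q_R − Q_L) once, then add the jump δ_i R_i across every wave lying to the left of the ray x/t. The sketch below (Python with numpy assumed; the matrix and the data states are illustrative, not from the lecture) implements this.

```python
# Exact solution of the linear Riemann problem via the one-system form (48):
# delta = R^{-1}(Q_R - Q_L), then Q(x,t) = Q_L + sum over crossed waves.
import numpy as np

def linear_riemann(A, QL, QR, x, t):
    """Similarity solution Q(x, t) of the Riemann problem (37) for dQ/dt + A dQ/dx = 0."""
    lam, R = np.linalg.eig(A)
    order = np.argsort(lam)                 # lambda_1 <= ... <= lambda_m
    lam, R = lam[order], R[:, order]
    delta = np.linalg.solve(R, QR - QL)     # expansion coefficients of Q_R - Q_L
    Q = QL.astype(float)
    for i in range(len(lam)):
        if x - lam[i] * t > 0.0:            # wave i lies to the left of x/t
            Q = Q + delta[i] * R[:, i]
    return Q

A  = np.array([[0.0, 2.0], [1.0, 0.0]])     # illustrative hyperbolic matrix
QL = np.array([1.0, 0.0])
QR = np.array([0.5, 0.0])
print(linear_riemann(A, QL, QR, x=0.0, t=1.0))   # state in the star region
```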

Fig. 3. The solution of the Riemann problem at a point (x̂, t̂) depends on the associated index I = I(x̂, t̂); the point lies in the wedge bounded by the rays x/t = λ_I and x/t = λ_{I+1}.

Concluding Remarks

We have studied the basic mathematical aspects of linear systems of hyperbolic equations in one space dimension. This background on linear hyperbolic equations is useful for studying simplified models for practical problems. The linear theory is also useful background for studying non-linear hyperbolic equations.

Case study I: Linearised Shallow Water Equations

Linearization

Consider the one-dimensional non-linear shallow water equations in terms of the physical variables water depth h(x, t) and particle velocity u(x, t):

  ∂_t h + u ∂_x h + h ∂_x u = 0,
  ∂_t u + u ∂_x u + g ∂_x h = −g b'(x).   (50)

Fig. 4. Illustration of the shallow water equations: water of depth h(x, t) and velocity u(x, t) over a solid bottom of elevation b(x), with free surface H(x, t) = b(x) + h(x, t).

The source term involves the slope b'(x) of the bottom elevation and the acceleration due to gravity g. We are interested in a linearised version of (50), without the source term (homogeneous), b'(x) = 0.

The linear equations

Consider small perturbations η(x, t) and v(x, t) in surface elevation and in particle velocity as follows:

  h(x, t) = H + η(x, t),   u(x, t) = 0 + v(x, t).   (51)

H is the constant, unperturbed water depth, and 0 + v(x, t) means a small perturbation v(x, t) to stationary fluid. It is also assumed that derivatives of the perturbations are small. Then, by substituting h and u from equations (51) into (50) and neglecting second-order terms, we obtain the linearised shallow water equations

  ∂_t η + H ∂_x v = 0,
  ∂_t v + g ∂_x η = 0.   (52)

In matrix form the equations read

  ∂_t Q + A ∂_x Q = 0,   (53)

where Q(x, t) and A are respectively

  Q = [η, v]^T,   A = [[0, H], [g, 0]].   (54)

We note in passing that equations (52) reproduce the well-known linear second-order wave equation, for both η and v, namely

  ∂_t² η = g H ∂_x² η   (55)

and

  ∂_t² v = g H ∂_x² v.   (56)

The second-order linear wave equation, either (55) for η or (56) for v, is a very popular hyperbolic model for wave propagation phenomena.

The eigenstructure

The eigenvalues of the matrix A in (53) are obtained from the characteristic polynomial, yielding

  λ_1 = −a,   λ_2 = +a,   (57)

where

  a = √(gH)   (58)

is the celerity, the speed of propagation of vanishingly small-amplitude surface waves.

A left eigenvector L = [l_1, l_2] of A corresponding to an eigenvalue λ is found from

  [l_1, l_2] [[0, H], [g, 0]] = [λ l_1, λ l_2],   (59)

which leads to the algebraic equations

  l_2 g = λ l_1,   l_1 H = λ l_2.   (60)

These two equations are not independent (verify). Using the second equation we may write

  l_2 = (H/λ) l_1.   (61)

Now, setting l_1 = α_1 and λ = λ_1 = −a in (61) gives the two components of the left eigenvector L_1 corresponding to the eigenvalue λ_1 = −a, as

  l_1 = α_1,   l_2 = −(H/a) α_1,   (62)

where α_1 is a scaling parameter open to choice. To find the left eigenvector L_2 corresponding to λ = λ_2 = +a we set l_1 = α_2 and λ = λ_2 = +a in (61) to obtain

  l_1 = α_2,   l_2 = (H/a) α_2,   (63)

where α_2 is again a scaling parameter open to choice. Then the two left eigenvectors corresponding to the eigenvalues λ_1 = −a and λ_2 = +a are respectively given by

  L_1 = α_1 [1, −H/a],   L_2 = α_2 [1, H/a].   (64)

Analogously, a right eigenvector [r_1, r_2]^T corresponding to an eigenvalue λ satisfies

  [[0, H], [g, 0]] [r_1, r_2]^T = [λ r_1, λ r_2]^T,   (65)

from which we obtain the relation

  r_2 = (λ/H) r_1.   (66)

Setting r_1 = β_1 and λ = λ_1 = −a in (66) gives

  r_1 = β_1,   r_2 = −(a/H) β_1.   (67)

Setting r_1 = β_2 and λ = λ_2 = +a in (66) gives

  r_1 = β_2,   r_2 = (a/H) β_2.   (68)

Then the right eigenvectors corresponding to the eigenvalues λ_1 = −a and λ_2 = +a are

  R_1 = β_1 [1, −a/H]^T,   R_2 = β_2 [1, a/H]^T.   (69)

If we want the left and right eigenvectors to be orthonormal, then the scaling parameters must be chosen to satisfy

  α_1 β_1 = 1/2,   α_2 β_2 = 1/2.   (70)

Choosing α_1 = α_2 = 1 gives β_1 = β_2 = 1/2. Then the matrices L and R formed by the left and right eigenvectors are given by

  L = [[1, −H/a], [1, H/a]],   R = (1/2) [[1, 1], [−a/H, a/H]].   (71)

It is easy to show that

  R⁻¹ = L,   (72)

where the matrix R⁻¹ denotes the inverse matrix of R (verify).
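A quick numerical check of (70)-(72) can be done as follows (Python with numpy assumed; the values H = 10 m and g = 9.8 m/s² anticipate the numerical example given later).

```python
# Verify that, with alpha_1 = alpha_2 = 1 and beta_1 = beta_2 = 1/2,
# the matrices L and R of (71) satisfy L R = I, i.e. L = R^{-1}, eq. (72).
import numpy as np

g, H = 9.8, 10.0
a = np.sqrt(g * H)                       # celerity, equation (58)

L = np.array([[1.0, -H / a],
              [1.0,  H / a]])            # rows: left eigenvectors L_1, L_2
R = 0.5 * np.array([[1.0,     1.0],
                    [-a / H,  a / H]])   # columns: right eigenvectors R_1, R_2

print(L @ R)                              # identity matrix => L_i R_j = delta_ij
print(np.allclose(np.linalg.inv(R), L))   # True: R^{-1} = L
```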

Equations in characteristic variables

First we define the characteristic variables C = [c_1, c_2]^T as

  C = L Q.   (73)

Note that we could also use C = R⁻¹ Q, but (73) is more direct. For our system, written in full, we have

  [c_1, c_2]^T = [[1, −H/a], [1, H/a]] [η, v]^T,   (74)

which gives the characteristic variables as

  c_1 = η − (H/a) v,   c_2 = η + (H/a) v.   (75)

In terms of the characteristic variables C we have

  ∂_t Q = L⁻¹ ∂_t C,   ∂_x Q = L⁻¹ ∂_x C.   (76)

Then (53) becomes

  L⁻¹ ∂_t C + A L⁻¹ ∂_x C = 0.   (77)

Multiplying (77) from the left by L gives

  ∂_t C + (L A L⁻¹) ∂_x C = 0.   (78)

It is easily verified that

  L A L⁻¹ = Λ = [[λ_1, 0], [0, λ_2]] = [[−a, 0], [0, a]],   (79)

and thus the equations in characteristic variables become

  ∂_t C + Λ ∂_x C = 0.   (80)

The equations have become completely decoupled, namely

  ∂_t c_1 + λ_1 ∂_x c_1 = 0,   ∂_t c_2 + λ_2 ∂_x c_2 = 0,   (81)

or

  ∂_t c_1 − a ∂_x c_1 = 0,   ∂_t c_2 + a ∂_x c_2 = 0.   (82)

The general initial-value problem

  PDEs: ∂_t Q + A ∂_x Q = 0,
  ICs:  Q(x, 0) = Q^(0)(x).   (83)

Here, the initial condition Q^(0)(x) at time t = 0 is an arbitrary function of x alone.

We may now replace the IVP (83) by the equivalent IVP

  PDEs: ∂_t C + Λ ∂_x C = 0,
  ICs:  C(x, 0) = C^(0)(x) = L Q^(0)(x)   (84)

in terms of the characteristic variables C. From (75) the initial conditions are

  c_1^(0)(x) = η^(0)(x) − (H/a) v^(0)(x),
  c_2^(0)(x) = η^(0)(x) + (H/a) v^(0)(x).   (85)

Hence the solution of (84) is

  c_1(x, t) = c_1^(0)(x − λ_1 t) = η^(0)(x + at) − (H/a) v^(0)(x + at),
  c_2(x, t) = c_2^(0)(x − λ_2 t) = η^(0)(x − at) + (H/a) v^(0)(x − at).   (86)

In terms of the original variables Q = R C, the complete solution to the original IVP (83) is

  η(x, t) = (1/2) { η^(0)(x + at) + η^(0)(x − at) − (H/a) [ v^(0)(x + at) − v^(0)(x − at) ] },
  v(x, t) = (1/2) { v^(0)(x + at) + v^(0)(x − at) − (a/H) [ η^(0)(x + at) − η^(0)(x − at) ] }.   (87)

(Verify.)
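The closed-form solution (87) is straightforward to evaluate. The sketch below (Python with numpy assumed; the Gaussian free-surface pulse over still water is an invented initial condition) returns η(x, t) and v(x, t) for arbitrary x and t.

```python
# Evaluate the exact solution (87) of the linearised shallow water IVP.
import numpy as np

g, H = 9.8, 10.0
a = np.sqrt(g * H)

def eta0(x):                      # initial free-surface perturbation (example)
    return np.exp(-((x - 500.0) / 50.0) ** 2)

def v0(x):                        # initially still water
    return np.zeros_like(np.asarray(x, dtype=float))

def solution(x, t):
    """Return eta(x, t), v(x, t) from equation (87)."""
    ep, em = eta0(x + a * t), eta0(x - a * t)
    vp, vm = v0(x + a * t),  v0(x - a * t)
    eta = 0.5 * (ep + em - (H / a) * (vp - vm))
    v   = 0.5 * (vp + vm - (a / H) * (ep - em))
    return eta, v

x = np.linspace(0.0, 1000.0, 501)
eta, v = solution(x, 25.0)        # two counter-propagating half-pulses at t = 25 s
```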

The Riemann problem

  PDEs: ∂_t Q + A ∂_x Q = 0,
  ICs:  Q(x, 0) = Q^(0)(x) = Q_L if x < 0,  Q_R if x > 0.   (88)

The problem is to find Q*, the solution in region R_1: the Star Region.

Fig. 5. Structure of the solution of the Riemann problem for the linearised shallow water equations: the two waves x − λ_1 t = 0 and x − λ_2 t = 0 separate the regions R_0 (data Q_L), R_1 (the star region) and R_2 (data Q_R).

From (48) the solution at any point (x, t) can be written as

  Q(x, t) = Q_L + Σ_{i=1}^{I} δ_i R_i,   (89)

where the positive integer I = I(x, t) is such that

  λ_I < x/t < λ_{I+1}.   (90)

The coefficients δ_i are the solution of the m × m linear system

  Σ_{i=1}^{m} δ_i R_i = Q_R − Q_L = ΔQ.   (91)

Here m = 2 and the linear algebraic system is

  δ_1 (1/2) [1, −a/H]^T + δ_2 (1/2) [1, a/H]^T = [Δη, Δv]^T = [η_R − η_L, v_R − v_L]^T.   (92)

This gives

  δ_1 + δ_2 = 2 Δη,   −δ_1 + δ_2 = 2 (H/a) Δv.   (93)

The solution is

  δ_1 = Δη − (H/a) Δv,   δ_2 = Δη + (H/a) Δv.   (94)

Here we are interested in the solution Q* = [η*, v*]^T in the star region R_1, which is given by the points (x, t) such that

  λ_1 = −a < x/t < λ_2 = +a,   (95)

with I = 1 in (90). See also (46). Then the solution is given by

  Q* = Q_L + δ_1 R_1.   (96)

This gives

  η* = (1/2)(η_L + η_R) − (1/2)(H/a)(v_R − v_L),
  v* = (1/2)(v_L + v_R) − (1/2)(a/H)(η_R − η_L).   (97)

A numerical example. Initial conditions: η_L = 1.0 m, η_R = 0.1 m, v_L = 0.0 m/s, v_R = 0.0 m/s. The initial discontinuity at time t = 0 s is placed at x_0 = 500 m. As parameters, choose H = 10 m and g = 9.8 m/s², so that the celerity is a ≈ 9.9 m/s. The exact solution in the star region between the two waves is η* = 0.55 m and v* ≈ 0.4455 m/s. Solution profiles are shown in Fig. 6 at time t = 25 s.
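The example above, and the profiles of Fig. 6, can be reproduced with a few lines of code. The following sketch (Python with numpy assumed) evaluates the star state (97) and samples the piecewise-constant exact solution at t = 25 s; a plotting library such as matplotlib could be added to draw the figures.

```python
# Star state (97) and exact profiles for the shallow water Riemann example.
import numpy as np

g, H = 9.8, 10.0
a = np.sqrt(g * H)                                  # approx. 9.9 m/s
etaL, etaR, vL, vR = 1.0, 0.1, 0.0, 0.0
x0 = 500.0                                          # initial discontinuity position

eta_star = 0.5 * (etaL + etaR) - 0.5 * (H / a) * (vR - vL)   # 0.55 m
v_star   = 0.5 * (vL + vR) - 0.5 * (a / H) * (etaR - etaL)   # approx. 0.4455 m/s

def profile(x, t):
    """Exact solution: three constant states separated by x = x0 -/+ a t."""
    xi = np.asarray(x, dtype=float) - x0
    eta = np.where(xi < -a * t, etaL, np.where(xi > a * t, etaR, eta_star))
    v   = np.where(xi < -a * t, vL,   np.where(xi > a * t, vR,   v_star))
    return eta, v

x = np.linspace(0.0, 1000.0, 1001)
eta, v = profile(x, 25.0)
print(eta_star, v_star)
```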

Fig. 6. Solution of the Riemann problem for the linearised shallow water equations: free-surface perturbation and velocity perturbation versus distance along the channel at time t = 25 s.

Case study II: Linearised Blood Flow Equations

Equations: conservation of mass and momentum

Blood flow in medium-size to large arteries and veins can be represented by the non-linear system of hyperbolic equations

  ∂_t A + ∂_x (uA) = 0,
  ∂_t (uA) + ∂_x (A u²) + (A/ρ) ∂_x p = −R u.   (98)

An axially symmetric vessel configuration in 3D at time t is assumed; the cross-sectional area A(x, t) and the wall thickness h_0(x) are the geometric quantities illustrated in the accompanying figure.

Unknowns and a closure condition: tube law

A(x, t): cross-sectional area of the vessel at position x and time t. u(x, t): averaged velocity of blood at a cross section. p(x, t): pressure. The blood density ρ is constant and R > 0, the viscous resistance, is prescribed.

There are two PDEs (98) and three unknowns: A(x, t), u(x, t) and p(x, t). An extra relation is required to close the system: the tube law. This relates p(x, t) to the wall displacement via A(x, t), thus coupling elastic properties of the vessel to the fluid dynamics inside the vessel:

  p = p_e(x, t) + ψ(A; K),   (99)

where

  ψ(A; K) = p − p_e ≡ p_trans   (100)

is the transmural pressure, the difference between the pressure in the vessel (the internal pressure) and the external pressure.

More on the tube law

Here we adopt

  ψ(A; K) = K(x) (√A − √A_0),   (101)

with

  K(x) = √π E(x) h_0(x) / [ (1 − ν²) A_0(x) ].   (102)

A_0(x) is the cross-sectional area of the vessel at equilibrium, that is when u = 0; h_0(x) is the vessel wall thickness; E(x) is the Young's modulus of elasticity of the vessel and ν is the Poisson ratio, taken to be ν = 1/2.

The external pressure is assumed to be a known function of space and time and may be decomposed as follows:

  p_e(x, t) = p_atm + p_musc(x, t),   (103)

where p_atm is the atmospheric pressure, assumed constant here, and p_musc(x, t) is the pressure exerted by the surrounding tissue.

Simplified model in conservative form

Assume h_0 = constant, A_0 = constant, E = constant. Therefore K in (102) is constant. We also assume p_e = constant and R = 0. Then

  ∂_x p = (K / (2√A)) ∂_x A   (104)

and thus the term (A/ρ) ∂_x p in (98) is

  (A/ρ) ∂_x p = (K / (3ρ)) ∂_x (A^(3/2)).   (105)

Then (98) can be written in conservation-law form as

  ∂_t Q + ∂_x F(Q) = 0,   (106)

where

  Q = [q_1, q_2]^T = [A, Au]^T,   F(Q) = [f_1, f_2]^T = [Au, Au² + (K/(3ρ)) A^(3/2)]^T.   (107)

A linear model

A linearised version of (98) is obtained as follows. Consider a small perturbation a(x, t) of the equilibrium cross-sectional area A_0 and a small velocity perturbation v(x, t) of a stationary flow:

  A(x, t) = A_0 + a(x, t),   u(x, t) = 0 + v(x, t).   (108)

Assume a(x, t) is small compared to A_0, that v is small and that derivatives of the perturbations are also small, so that products of these small quantities can be neglected. Then, by substituting A(x, t) and u(x, t) from equations (108) into equations (98) and neglecting second-order terms, we obtain a system of linearised blood flow equations

  ∂_t a + A_0 ∂_x v = 0,
  ∂_t v + (c_0²/A_0) ∂_x a = 0,   (109)

where

  c_0 = √( K √A_0 / (2ρ) )   (110)

is the wave velocity, a constant.

Tasks

P0: Verify that the equations (109) in matrix form read

  ∂_t Q + M ∂_x Q = 0,   (111)

where the vector of unknowns Q(x, t) and the coefficient matrix M are respectively given by

  Q = [a, v]^T,   M = [[0, A_0], [c_0²/A_0, 0]].   (112)

P1: Justify the derivation of the linearised equations, explaining each step.

P2: Find the eigenvalues of the linear system.

P3: Find the corresponding left eigenvectors with general scaling coefficients α_1, α_2.

P4: Find the corresponding right eigenvectors with general scaling coefficients β_1, β_2.

P5: Find relations for the coefficients α_i, β_j so that the left and right eigenvectors are orthonormal.

P6: Verify that R⁻¹ = L, where L is the matrix of left eigenvectors and R is the matrix of right eigenvectors, suitably normalised.

P7: Find the characteristic variables.

P8: Solve analytically the general initial-value problem

  PDEs: ∂_t Q + M ∂_x Q = 0,
  ICs:  Q(x, 0) = Q^(0)(x).   (113)

Here, the initial condition Q^(0)(x) at time t = 0 is an arbitrary function of x alone.
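For tasks P2-P4 a computer algebra system can be used to check the hand calculations. The sketch below (Python with sympy assumed, neither of which is prescribed by the tasks) computes the eigenvalues and unscaled right and left eigenvectors of the matrix M in (112); the normalisation required by P5 still has to be imposed by hand.

```python
# Symbolic eigen-analysis of the linearised blood flow matrix M, eq. (112).
import sympy as sp

A0, c0 = sp.symbols('A_0 c_0', positive=True)
M = sp.Matrix([[0, A0],
               [c0**2 / A0, 0]])

print(M.eigenvals())                      # {-c_0: 1, c_0: 1}  (task P2)
for val, mult, vecs in M.eigenvects():    # unscaled right eigenvectors (task P4)
    print(val, vecs[0].T)
print(M.T.eigenvects())                   # left eigenvectors, obtained from M^T (task P3)
```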

P9: Consider the Riemann problem

  PDEs: ∂_t Q + M ∂_x Q = 0,
  ICs:  Q(x, 0) = Q^(0)(x) = Q_L if x < 0,  Q_R if x > 0,   (114)

where Q_L and Q_R are any two constant vectors. Solve the Riemann problem exactly, verifying that the solution in the star region is actually given by

  a* = (1/2)(a_L + a_R) − (1/2)(A_0/c_0)(v_R − v_L),
  v* = (1/2)(v_L + v_R) − (1/2)(c_0/A_0)(a_R − a_L).   (115)

P10: Invent an example for the Riemann problem (114) and plot the exact solution at a given time t̂ of your choice.
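A possible starting point for P10 is sketched below (Python with numpy assumed): all parameter values and the left/right data are invented, the star state is taken from (115), and the exact solution is sampled as three constant states separated by the waves at x = −c_0 t and x = +c_0 t.

```python
# Invented Riemann problem for the linearised blood flow equations (task P10).
import numpy as np

A0, c0 = 3.0e-4, 5.0        # equilibrium area [m^2] and wave speed [m/s] (assumed values)
aL, aR = 1.0e-5, -1.0e-5    # area perturbations left/right of x = 0 (assumed)
vL, vR = 0.0, 0.0

a_star = 0.5 * (aL + aR) - 0.5 * (A0 / c0) * (vR - vL)   # equation (115)
v_star = 0.5 * (vL + vR) - 0.5 * (c0 / A0) * (aR - aL)

def solution(x, t):
    """Piecewise-constant exact solution: waves at x = -c0*t and x = +c0*t."""
    x = np.asarray(x, dtype=float)
    a = np.where(x < -c0 * t, aL, np.where(x > c0 * t, aR, a_star))
    v = np.where(x < -c0 * t, vL, np.where(x > c0 * t, vR, v_star))
    return a, v

x = np.linspace(-1.0, 1.0, 401)
a, v = solution(x, 0.05)    # profiles at the chosen output time t_hat = 0.05 s
print(a_star, v_star)
```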

Exercises on Linear Hyperbolic Systems

Problem 1: linear system with source terms. Consider the abstract inhomogeneous linear system

  ∂_t Q + A ∂_x Q = S(Q),   (116)

where the vector of unknowns Q(x, t), the constant coefficient matrix A and the source term vector S(Q) are

  Q = [q_1, q_2]^T,   A = [[0, a], [a, 0]],   S = [s_1, s_2]^T,   a > 0.   (117)

1. Show that the eigenvalues of the system are

  λ_1 = −a,   λ_2 = +a.   (118)

2. Show that the corresponding right eigenvectors are

  R_1 = α_1 [1, −1]^T,   R_2 = α_2 [1, 1]^T.   (119)

3. Show that the corresponding left eigenvectors are

  L_1 = β_1 [1, −1],   L_2 = β_2 [1, 1].   (120)

4. Show that imposing orthonormality of left and right eigenvectors leads to the conditions

  α_1 β_1 = 1/2,   α_2 β_2 = 1/2.   (121)

5. Choose α_1 = α_2 = 1 and β_1 = β_2 = 1/2. Show that the inverse of the matrix R formed by the columns of right eigenvectors is

  R⁻¹ = (1/2) [[1, −1], [1, 1]].   (122)

Note that L = R⁻¹, where L is the matrix whose rows are the left eigenvectors of A.

Problem 2: characteristic variables.

1. Show that the vector of characteristic variables is

  C = [c_1, c_2]^T = [ (1/2)(q_1 − q_2), (1/2)(q_1 + q_2) ]^T.   (123)

2. Show that the decoupled equations with source terms in characteristic variables are

  ∂_t c_1 − a ∂_x c_1 = (1/2)(s_1 − s_2),
  ∂_t c_2 + a ∂_x c_2 = (1/2)(s_1 + s_2).   (124)

See equation (24).

3. Show that if the original source term in (116) is chosen as

  S = (1/2) [ (γ_1 + γ_2) q_1 + (γ_2 − γ_1) q_2,  (γ_2 − γ_1) q_1 + (γ_1 + γ_2) q_2 ]^T,   (125)

where γ_1 and γ_2 are any two arbitrary real numbers, then the

decoupled inhomogeneous system reads

  ∂_t c_1 − a ∂_x c_1 = γ_1 c_1,
  ∂_t c_2 + a ∂_x c_2 = γ_2 c_2.   (126)

Problem 3: solutions with source terms.

1. Show that the exact solution of the IVP above for the inhomogeneous system (126) is

  c_1(x, t) = c_1^(0)(x + at) e^(γ_1 t),
  c_2(x, t) = c_2^(0)(x − at) e^(γ_2 t),   (127)

where c_1^(0)(x) and c_2^(0)(x) are the initial conditions for the characteristic variables.

2. Find the final expression for the corresponding solution in terms of the original variables.
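The exact solution (127), combined with the transformation Q = R C, can be coded directly; this also provides the exact profiles requested in Problem 4 below. The following sketch (Python with numpy assumed) uses the Problem 4 data q_1(x, 0) = α e^(−βx²), q_2(x, 0) = 0 and leaves γ_1, γ_2 as parameters.

```python
# Exact solution (127) of the inhomogeneous system (126), mapped back to the
# original variables via q1 = c1 + c2, q2 = -c1 + c2 (alpha_1 = alpha_2 = 1).
import numpy as np

a, alpha, beta = 1.0, 1.0, 8.0
gamma1, gamma2 = 1.0, 1.0           # set to whatever the problem prescribes

def c10(x):   # c_1^(0) = (q1 - q2)/2, equation (123), with q2(x,0) = 0
    return 0.5 * alpha * np.exp(-beta * x**2)

def c20(x):   # c_2^(0) = (q1 + q2)/2
    return 0.5 * alpha * np.exp(-beta * x**2)

def q_exact(x, t):
    c1 = c10(x + a * t) * np.exp(gamma1 * t)     # equation (127)
    c2 = c20(x - a * t) * np.exp(gamma2 * t)
    return c1 + c2, -c1 + c2                     # (q1, q2)

x = np.linspace(-5.0, 5.0, 401)
q1, q2 = q_exact(x, 1.0)            # exact profiles at output time T = 1
```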

Problem 4: a computational problem. Choose a = 1, a spatial domain [−5, 5] and initial conditions q_1(x, 0) = α e^(−βx²), q_2(x, 0) = 0, with α = 1 and β = 8.

1. For γ_1 = γ_2 = 1, plot the exact solutions q_1(x, T) and q_2(x, T) at the output times T = 1, T = 2 and T = 3.

2. For γ_1 = 1 and γ_2 = 5, plot the exact solutions q_1(x, T) and q_2(x, T) at the output times T = 1, T = 2 and T = 3.

3. Solve the homogeneous problem numerically and plot the numerical (in symbols) and the exact (in full line) solutions q_1(x, T) and q_2(x, T) at the output times T = 1, T = 2 and T = 3.

Problem 5: electrical transmission line. Consider an electrical transmission line. The problem is to determine the current I(x, t) and the potential E(x, t) as functions of space and time. These quantities are solutions of the following system of linear hyperbolic equations with source terms:

  ∂_t I(x, t) + (1/L) ∂_x E(x, t) = −(R/L) I,
  ∂_t E(x, t) + (1/C) ∂_x I(x, t) = −(G/C) E,   (128)

where C is the capacitance to ground per unit length, G is the conductance to ground per unit length, R is the resistance per unit length and L is the inductance per unit length.

1. Find the eigenvalues and the left and right eigenvectors.

2. Apply the orthonormality condition to determine the choice of the scaling parameters in the left and right eigenvectors above.

3. Find the characteristic variables.

4. Write the equations in characteristic variables, keeping the appropriate source terms.

Problem 6: Consider the initial-value problem for a distortionless line, which is required to satisfy the condition RC = LG.

1. Verify that the governing equations are

  ∂_t I(x, t) + (1/L) ∂_x E(x, t) = −β I,
  ∂_t E(x, t) + (1/C) ∂_x I(x, t) = −β E,   (129)

with

  β = R/L.   (130)

2. Write system (129) in terms of characteristic variables.

3. Suppose that the initial distributions of the current, I(x, 0) = I^(0)(x), and of the potential, E(x, 0) = E^(0)(x), are known functions of the distance x. Using the canonical form of the equations (or equations in characteristic variables), show that the exact solution of the general initial-value problem is

  I(x, t) = (1/2) [ I^(0)(x − λ_1 t) + I^(0)(x − λ_2 t) ] e^(−βt) − (1/2) √(C/L) [ E^(0)(x − λ_1 t) − E^(0)(x − λ_2 t) ] e^(−βt)   (131)

and

  E(x, t) = (1/2) [ E^(0)(x − λ_1 t) + E^(0)(x − λ_2 t) ] e^(−βt) − (1/2) √(L/C) [ I^(0)(x − λ_1 t) − I^(0)(x − λ_2 t) ] e^(−βt).   (132)

Problem 7: the general Riemann problem. Consider the general Riemann problem for the homogeneous version (no source terms) of equations (128), in which the initial conditions are

  I(x, 0) = I^(0)(x) = I_L if x < 0,  I_R if x > 0,
  E(x, 0) = E^(0)(x) = E_L if x < 0,  E_R if x > 0.   (133)

1. Display the structure of the solution of the Riemann problem through a figure in the x-t plane, identifying the waves present and the unknown region of space-time.

2. Show that the solution in the star region is given as follows:

  I* = (1/2)(I_L + I_R) − (1/2)(E_R − E_L)/B,
  E* = (1/2)(E_L + E_R) − (1/2)(I_R − I_L) B,   (134)

where B = √(L/C).

3. Consider a spatial domain [−100, 100], with the following specific initial conditions:

  I(x, 0): I_L = 1 if x < 0,  I_R = 3/4 if x > 0,
  E(x, 0): E_L = 1/4 if x < 0,  E_R = 5/4 if x > 0.   (135)

Assume C = G = R = L = 1, for simplicity. Show that the solution in the star region is I* = 3/8 and E* = 7/8. Plot the solution profiles for I(x, T) and E(x, T) at the output time T = 12.5 and at a second, later output time.


PH.D. PRELIMINARY EXAMINATION MATHEMATICS

PH.D. PRELIMINARY EXAMINATION MATHEMATICS UNIVERSITY OF CALIFORNIA, BERKELEY SPRING SEMESTER 207 Dept. of Civil and Environmental Engineering Structural Engineering, Mechanics and Materials NAME PH.D. PRELIMINARY EXAMINATION MATHEMATICS Problem

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Differential equations

Differential equations Differential equations Math 7 Spring Practice problems for April Exam Problem Use the method of elimination to find the x-component of the general solution of x y = 6x 9x + y = x 6y 9y Soln: The system

More information

ENGI 9420 Lecture Notes 4 - Stability Analysis Page Stability Analysis for Non-linear Ordinary Differential Equations

ENGI 9420 Lecture Notes 4 - Stability Analysis Page Stability Analysis for Non-linear Ordinary Differential Equations ENGI 940 Lecture Notes 4 - Stability Analysis Page 4.01 4. Stability Analysis for Non-linear Ordinary Differential Equations A pair of simultaneous first order homogeneous linear ordinary differential

More information

1 Invariant subspaces

1 Invariant subspaces MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

More information

Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i

Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i Complex Numbers: Definition: A complex number is a number of the form: z = a + bi where a, b are real numbers and i is a symbol with the property: i 2 = 1 Sometimes we like to think of i = 1 We can treat

More information

Cranfield ^91. College of Aeronautics Report No.9007 March The Dry-Bed Problem in Shallow-Water Flows. E F Toro

Cranfield ^91. College of Aeronautics Report No.9007 March The Dry-Bed Problem in Shallow-Water Flows. E F Toro Cranfield ^91 College of Aeronautics Report No.9007 March 1990 The Dry-Bed Problem in Shallow-Water Flows E F Toro College of Aeronautics Cranfield Institute of Technology Cranfield. Bedford MK43 OAL.

More information

vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow,...

vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow,... 6 Eigenvalues Eigenvalues are a common part of our life: vibrations, light transmission, tuning guitar, design buildings and bridges, washing machine, Partial differential problems, water flow, The simplest

More information

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017

Final A. Problem Points Score Total 100. Math115A Nadja Hempel 03/23/2017 Final A Math115A Nadja Hempel 03/23/2017 nadja@math.ucla.edu Name: UID: Problem Points Score 1 10 2 20 3 5 4 5 5 9 6 5 7 7 8 13 9 16 10 10 Total 100 1 2 Exercise 1. (10pt) Let T : V V be a linear transformation.

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Modeling and Simulation with ODE for MSE

Modeling and Simulation with ODE for MSE Zentrum Mathematik Technische Universität München Prof. Dr. Massimo Fornasier WS 6/7 Dr. Markus Hansen Sheet 7 Modeling and Simulation with ODE for MSE The exercises can be handed in until Wed, 4..6,.

More information

Linear Differential Equations. Problems

Linear Differential Equations. Problems Chapter 1 Linear Differential Equations. Problems 1.1 Introduction 1.1.1 Show that the function ϕ : R R, given by the expression ϕ(t) = 2e 3t for all t R, is a solution of the Initial Value Problem x =

More information