8 A pseudo-spectral solution to the Stokes Problem
8.1 The Method

8.1.1 Generalities

We are interested in setting up a pseudo-spectral method for the following Stokes problem:

$$-\Delta u + \sigma u + \nabla p = f \ \text{ in } \Omega, \qquad \nabla\cdot u = 0 \ \text{ in } \Omega, \qquad u|_\Gamma = u_\Gamma \ \text{ on } \Gamma := \partial\Omega, \tag{S-I}$$

where $\sigma \ge 0$, $\Omega \subset \mathbb{R}^2$ is a bounded domain, $u : \Omega \to \mathbb{R}^2$ is the velocity field and $p : \Omega \to \mathbb{R}$ is the pressure. It is worth stressing that this particular instance of the Stokes problem arises when one deals with a Stokes evolution equation and integrates it numerically in time: a problem like (S-I) must then be solved at each time step.

First we observe that $\nabla\cdot\Delta u = \Delta(\nabla\cdot u) = \Delta(0) = 0$, so the first two equations in (S-I) imply $\Delta p = \nabla\cdot f$. We also introduce the Helmholtz operator $H_\sigma := -\Delta + \sigma\,\mathrm{id}$ of parameter $\sigma$, so that we can rewrite (S-I) in the form

$$H_\sigma u + \nabla p = f \ \text{ in } \Omega, \qquad \Delta p = \nabla\cdot f \ \text{ in } \Omega, \qquad u|_\Gamma = u_\Gamma \ \text{ on } \Gamma := \partial\Omega, \qquad \nabla\cdot u = 0 \ \text{ on } \Gamma. \tag{S-II}$$

It is rather important to note that (at least at this stage, i.e., before any discretization has been performed) the latter formulation implies the first one. Indeed (let us reason in terms of classical solutions for simplicity), set $Q := \nabla\cdot\hat u$, where $\hat u$ solves (S-II). Taking the divergence of the first equation of (S-II) and using the second, it follows that $Q$ solves

$$\Delta Q = \sigma Q \ \text{ in } \Omega, \qquad Q = 0 \ \text{ on } \Gamma,$$

thus $\nabla\cdot\hat u = Q \equiv 0$ in $\Omega$; that is, $\hat u$ is divergence-free and therefore solves (S-I) as well.

8.1.2 Rough Chebyshev-Chebyshev discretization

We focus on the very simple case $\Omega = \,]{-1},1[^2$. This leads us to use Chebyshev polynomials as basis functions. Let us denote by $\mathbb{P}_N^2$ the space of tensor-product polynomials of degree at most $N$ in each of the two variables separately.
We will use the Chebyshev-Lobatto quadrature nodes

$$x_i = \cos\Big(\frac{\pi i}{N}\Big), \quad i = 0, 1, \dots, N, \qquad y_j = \cos\Big(\frac{\pi j}{N}\Big), \quad j = 0, 1, \dots, N,$$

and define the following computational domains:

$$\Omega_N := \{(x_i, y_j) : (i,j) \in \{1, 2, \dots, N-1\}\times\{1, 2, \dots, N-1\}\},$$
$$\overline\Omega_N := \{(x_i, y_j) : (i,j) \in \{0, 1, \dots, N\}\times\{0, 1, \dots, N\}\},$$
$$\overline\Omega_N^{\,\mathrm{int}} := \{(x_i, y_j) \in \overline\Omega_N : (i,j) \notin \{0, N\}\times\{0, N\}\},$$
$$\Gamma_N := \{(x_i, y_j) : (i,j) \in \{0, N\}\times\{0, 1, \dots, N\} \,\cup\, \{0, 1, \dots, N\}\times\{0, N\}\},$$
$$\Gamma_N^{\,\mathrm{int}} := \{(x_i, y_j) \in \Gamma_N : (i,j) \notin \{0, N\}\times\{0, N\}\},$$

corresponding respectively to the interior nodes, all nodes, all nodes but the corners, the boundary nodes, and all boundary nodes but the corners.

The resulting discretized problem becomes: find $u_N \in \mathbb{P}_N^2\times\mathbb{P}_N^2$ and $p_N \in \mathbb{P}_N^2$ such that, denoting by $f_N \in \mathbb{P}_N^2\times\mathbb{P}_N^2$ the interpolating polynomial of $f$ on $\overline\Omega_N$, we have

$$H_\sigma u_N + \nabla p_N = f_N \ \text{ in } \Omega_N, \qquad \Delta p_N = \nabla\cdot f_N \ \text{ in } \Omega_N, \qquad u_N|_\Gamma = u_\Gamma \ \text{ on } \Gamma_N, \qquad \nabla\cdot u_N = 0 \ \text{ on } \Gamma_N. \tag{S-CC}$$

We stress that, as usual in pseudo-spectral methods, the derivatives are taken in the interpolation sense: one first interpolates, then differentiates.
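The node and grid-set definitions above translate directly into code. The following sketch (function names are mine, not from the text) builds the Chebyshev-Lobatto nodes and partitions the tensor grid indices into the sets $\Omega_N$, $\Gamma_N$ and $\Gamma_N^{\mathrm{int}}$:

```python
import numpy as np

def cgl_nodes(N):
    """Chebyshev-Lobatto nodes x_i = cos(pi*i/N), i = 0, ..., N."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def classify(N):
    """Split the (N+1)^2 grid indices into interior nodes, boundary nodes,
    and boundary nodes without the four corners (Gamma_N^int)."""
    corners = {(0, 0), (0, N), (N, 0), (N, N)}
    interior = [(i, j) for i in range(N + 1) for j in range(N + 1)
                if 0 < i < N and 0 < j < N]
    boundary = [(i, j) for i in range(N + 1) for j in range(N + 1)
                if i in (0, N) or j in (0, N)]
    gamma_int = [p for p in boundary if p not in corners]
    return interior, boundary, gamma_int
```

For instance, with $N = 4$ one finds $(N-1)^2 = 9$ interior nodes, $4N = 16$ boundary nodes, and $4N - 4 = 12$ corner-free boundary nodes.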
8.1.3 A variational crime

In this discretization procedure a remarkable phenomenon comes into play: even though any classical solution $u$ of (S-II) is a divergence-free vector field, its approximation $u_N$ need not fulfil the same property. This is a consequence of imposing the differential equations in the strong, point-wise sense. Remember that, when showing that a solution to (S-II) must be divergence-free, we used the fact that the Laplacian of an identically vanishing function is zero. Here the equations are satisfied only on a finite set of points, so we cannot perform the same reasoning: in general $\nabla\cdot u_N \neq 0$ on $\Omega_N$.

Recall that the divergence-free condition represents the conservation of mass, hence trying to preserve such a condition is very reasonable. More concretely, as often occurs, the lack of physical meaning of the discretized equation (S-CC) boils down to numerical instability in solving the steady-state problem (S-I), and to even worse behaviour of the final numerical solution if (S-I) arises as an intermediate problem in the time integration of an evolution equation. We need to overcome this problem.

We introduce a numerical flux $B_N \in \mathbb{P}_N^2\times\mathbb{P}_N^2$ such that its normal component at the boundary is precisely the normal component of the residual; more precisely, we replace problem (S-CC) with

$$H_\sigma u_N + \nabla p_N = f_N \ \text{ in } \Omega_N, \qquad \Delta p_N = \nabla\cdot f_N - \nabla\cdot B_N \ \text{ in } \Omega_N, \qquad B_N = 0 \ \text{ in } \Omega_N,$$
$$u_N|_\Gamma = u_\Gamma \ \text{ on } \Gamma_N, \qquad B_N\cdot\nu = (H_\sigma u_N + \nabla p_N - f_N)\cdot\nu \ \text{ on } \Gamma_N. \tag{S-CC$'$}$$

Here $\nu$ is the outer unit normal. At this stage it is not clear why this variational crime should enforce the divergence-free condition on $u_N$; this will surface in a while, when we present the Influence Matrix Method that we use to solve (S-CC$'$).
8.1.4 Influence Matrix

We split (S-CC$'$) into sub-problems. More precisely, we assume that

$$u_N = \tilde u_N + \bar u_N, \qquad p_N = \tilde p_N + \bar p_N, \qquad B_N = \tilde B_N + \bar B_N,$$

where the tilde and bar functions solve, respectively, the tilde-problems and the bar-problem below:

$$\Delta\tilde p_N = \nabla\cdot f_N \ \text{ in } \Omega_N, \qquad \tilde p_N = 0 \ \text{ on } \Gamma_N, \tag{tilde-problem I}$$

$$H_\sigma\tilde u_N = f_N - \nabla\tilde p_N \ \text{ in } \Omega_N, \qquad \tilde u_N = u_\Gamma \ \text{ on } \Gamma_N, \tag{tilde-problem II}$$

$$\tilde B_N = 0 \ \text{ in } \Omega_N, \qquad \tilde B_N\cdot\nu = g_N := (H_\sigma\tilde u_N + \nabla\tilde p_N - f_N)\cdot\nu \ \text{ on } \Gamma_N, \qquad \tilde B_N\cdot\tau = 0 \ \text{ on } \Gamma_N. \tag{tilde-problem III}$$

Here $\tau$ is the unit tangent.

$$\Delta\bar p_N = -\nabla\cdot(\tilde B_N + \bar B_N) \ \text{ in } \Omega_N, \qquad H_\sigma\bar u_N = -\nabla\bar p_N \ \text{ in } \Omega_N, \qquad \bar B_N = 0 \ \text{ in } \Omega_N,$$
$$\bar u_N = 0 \ \text{ on } \Gamma_N, \qquad \nabla\cdot\bar u_N = -\nabla\cdot\tilde u_N \ \text{ on } \Gamma_N, \qquad \bar B_N\cdot\nu = (H_\sigma\bar u_N + \nabla\bar p_N)\cdot\nu \ \text{ on } \Gamma_N. \tag{bar-problem}$$

First, notice that the tilde-problems I, II and III can be considered separately, in this order. We compute $\tilde p_N$ by solving (tilde-problem I); then we compute the gradient of $\tilde p_N$, plug it into (tilde-problem II) and compute $\tilde u_N$ by solving it; finally $\tilde B_N$ is computed and stored. Recall that the solution of our final problem consists in finding $u_N$ and $p_N$, while the numerical flux is an additional variable whose tangential or normal values we compute only when they are required in order to compute $u_N$ or $p_N$.

To understand that (tilde-problem III) leads to the computation of $\tilde B_N$ it is sufficient to rewrite the problem in a more explicit form by Lagrange interpolation. Since $\tilde B_N = (\tilde B_1, \tilde B_2) \in \mathbb{P}_N\times\mathbb{P}_N$ we have

$$\tilde B_1(x, y_j) = \sum_{h=0}^{N} l_h(x)\,\tilde B_1(x_h, y_j) = l_0(x)\,\tilde B_1(x_0, y_j) + l_N(x)\,\tilde B_1(x_N, y_j) = l_0(x)\,g_N(x_0, y_j) + l_N(x)\,g_N(x_N, y_j), \tag{8.1}$$

$$\tilde B_2(x_i, y) = \sum_{k=0}^{N} l_k(y)\,\tilde B_2(x_i, y_k) = l_0(y)\,\tilde B_2(x_i, y_0) + l_N(y)\,\tilde B_2(x_i, y_N) = l_0(y)\,g_N(x_i, y_0) + l_N(y)\,g_N(x_i, y_N), \tag{8.2}$$

where we absorb the sign of the relevant component of $\nu$ into $g_N$ on each face.
Here we used that at any point $(x_h, y_k) \in \Gamma_N^{\mathrm{int}}$ the condition $\tilde B_N\cdot\tau = 0$ makes the tangential component vanish (on the vertical faces $\tilde B_2 = 0$, on the horizontal ones $\tilde B_1 = 0$), while $\tilde B_N\cdot\nu = g_N$ prescribes the normal component, together with the fact that $\tilde B_N$ vanishes at every interior point. Using (8.1), (8.2) and the symmetries of $\overline\Omega_N$ we get

$$\nabla\cdot\tilde B_N(x_i, y_j) = l_0'(x_i)\,g_N(x_0, y_j) + l_N'(x_i)\,g_N(x_N, y_j) + l_0'(y_j)\,g_N(x_i, y_0) + l_N'(y_j)\,g_N(x_i, y_N). \tag{8.3}$$

Finally, let us notice that, for any $l \in \{1, 2, \dots, N-1\}$, we have

$$l_0'(x_l) = \frac{d}{dx}\left.\left(\prod_{j=1}^{N}\frac{x - x_j}{x_0 - x_j}\right)\right|_{x = x_l} = \frac{1}{x_0 - x_l}\prod_{\substack{j=1 \\ j\neq l}}^{N}\frac{x_l - x_j}{x_0 - x_j} = \frac{l^{\{l\}}_0(x_l)}{x_0 - x_l},$$

where we denoted by $l^{\{l\}}_0$ the fundamental Lagrange interpolating polynomial relative to the node $x_0$ built on the nodes $X^{\{l\}} := \{x_0, x_1, \dots, x_{l-1}, x_{l+1}, \dots, x_N\}$ (and similarly $l^{\{l\}}_N$ relative to $x_N$). We warn the reader that this new Lagrange basis no longer preserves the symmetries of the starting one, since $X^{\{l\}}$ is no longer symmetric; nevertheless one can prove that $l^{\{l\}}_N(x_l) = l^{\{N-l\}}_0(x_{N-l})$. Using the above formula for the derivatives of the Lagrange basis, equation (8.3) becomes

$$\nabla\cdot\tilde B_N(x_i, y_j) = \frac{l^{\{i\}}_0(x_i)}{x_0 - x_i}\,g_N(x_0, y_j) + \frac{l^{\{i\}}_N(x_i)}{x_N - x_i}\,g_N(x_N, y_j) + \frac{l^{\{j\}}_0(y_j)}{y_0 - y_j}\,g_N(x_i, y_0) + \frac{l^{\{j\}}_N(y_j)}{y_N - y_j}\,g_N(x_i, y_N)$$
$$=: c_i^0\,g_N(x_0, y_j) + c_i^N\,g_N(x_N, y_j) + c_j^0\,g_N(x_i, y_0) + c_j^N\,g_N(x_i, y_N). \tag{8.4}$$

Now we consider the bar-problem. We use a quite unusual method, termed the superposition method. In this technique each function $\varphi$ of the set $\{\bar p_N,\ \bar u_N,\ R_N := (H_\sigma\bar u_N + \nabla\bar p_N)\cdot\nu,\ \bar B_N\}$ is expanded in the form

$$\varphi = \sum_{l=1}^{2L}\xi_l\,\varphi_l, \qquad L := \operatorname{Card}\Gamma_N^{\mathrm{int}} = 2(2N-2), \tag{8.5}$$
but here the coefficients are fixed: what varies are the $\varphi_l$ themselves. More precisely, we consider two families of problems, each consisting of $L$ problems.

For $l = 1, 2, \dots, L$ solve

$$\Delta\bar p_l = 0 \ \text{ in } \Omega_N, \qquad \bar p_l(\eta_m) = \delta_{l,m} \quad \forall\,\eta_m \in \Gamma_N^{\mathrm{int}}, \tag{8.6}$$
$$H_\sigma\bar u_l = -\nabla\bar p_l \ \text{ in } \Omega_N, \qquad \bar u_l = 0 \ \text{ on } \Gamma_N, \tag{8.7}$$
$$\bar B_l = 0 \ \text{ in } \overline\Omega_N. \tag{8.8}$$

For $l = L+1, L+2, \dots, 2L$ solve

$$\bar B_l = 0 \ \text{ in } \Omega_N, \qquad \bar B_l\cdot\nu(\eta_m) = \delta_{L+m,l} \quad \forall\,\eta_m \in \Gamma_N^{\mathrm{int}}, \tag{8.9}$$
$$\Delta\bar p_l = -\nabla\cdot\bar B_l \ \text{ in } \Omega_N, \qquad \bar p_l = 0 \ \text{ on } \Gamma_N, \tag{8.10}$$
$$H_\sigma\bar u_l = -\nabla\bar p_l \ \text{ in } \Omega_N, \qquad \bar u_l = 0 \ \text{ on } \Gamma_N. \tag{8.11}$$

Again, one first solves (8.6) to determine $\bar p_l$, $l \in \{1,\dots,L\}$, computes the gradient of the pressure and solves (8.7) by plugging it into the equation. This yields $\bar u_l$, $l \in \{1,\dots,L\}$; then we compute its Laplacian and finally the values $R_l = (H_\sigma\bar u_l + \nabla\bar p_l)\cdot\nu$ at the nodes of $\Gamma_N^{\mathrm{int}}$.

Now consider the case $l > L$. It is worth stressing that (8.9) fully characterizes the values of $\bar B_l$ at the points of $\overline\Omega_N$; this is a consequence of the tensor-product construction of both the domain and the function space we took into account: notice that the normal components are polynomials of one variable only. To get convinced of this it is convenient to draw a picture of $\overline\Omega_3$ (the easiest example), pick any $l$, $m$, write down the values of $\bar B_l$ given by (8.9) and try to compute $\bar B_l(x_2, y_2)$ (the only node in $\Omega_3$). Indeed, in order to compute $\nabla\cdot\bar B_l$ starting from (8.9) it is sufficient to apply equation (8.4):

$$\nabla\cdot\bar B_l(x_i, y_j) = c_i^0\,\delta\{(x_0, y_j) = \eta_{l-L}\} + c_i^N\,\delta\{(x_N, y_j) = \eta_{l-L}\} + c_j^0\,\delta\{(x_i, y_0) = \eta_{l-L}\} + c_j^N\,\delta\{(x_i, y_N) = \eta_{l-L}\}. \tag{8.12}$$

After determining the values of $\bar B_l$ on $\overline\Omega_N$ by solving (8.9), we plug them into (8.10), solve it, and compute $\bar p_l$ for $l = L+1, L+2, \dots, 2L$. Finally we compute the gradient of $\bar p_l$, plug it into (8.11) and solve to determine $\bar u_l$, and hence the values $R_l(\eta_m)$, $m = 1, \dots, L$, for $l > L$.
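The structure of the superposition method — precompute unit-response solutions once, then fix the coefficients from the leftover conditions — can be illustrated on a deliberately tiny one-dimensional analogue. Everything below (the toy equation $u'' = 2$, the extra flux condition at $x = 1$) is illustrative and not the text's actual problem:

```python
def influence_demo(s):
    """Toy 1-D analogue of the superposition method for u'' = 2 on (-1, 1)
    with u(-1) = 0 and the extra condition u'(1) = s.  A particular solution
    with homogeneous boundary data is superposed with a precomputed
    unit-response solution; the coefficient xi is fixed by the leftover
    condition, i.e. by solving a 1x1 "influence matrix" system."""
    u_tilde = lambda x: x**2 - 1.0      # solves u'' = 2, u(-1) = u(1) = 0
    u_bar = lambda x: (x + 1.0) / 2.0   # solves u'' = 0, u(-1) = 0, u(1) = 1
    du_tilde, du_bar = 2.0, 0.5         # their derivatives at x = 1
    xi = (s - du_tilde) / du_bar        # impose u'(1) = s
    return lambda x: u_tilde(x) + xi * u_bar(x)
```

In the two-dimensional Stokes setting the same logic produces a $2L\times 2L$ system instead of a scalar equation, but the division of labour is identical: the unit-response solves depend only on the grid and can be reused at every time step.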
Now note that any set of functions $\{\bar p_N, \bar u_N, R_N, \bar B_N\}$ defined by a superposition of the form (8.5), that is for any $\xi \in \mathbb{R}^{2L}$, satisfies the first three equations of the bar-problem; we want to determine $\xi$ by imposing the remaining two:

$$\sum_{l=1}^{2L}\xi_l\,\nabla\cdot\bar u_l(\eta_m) = -\nabla\cdot\tilde u_N(\eta_m), \qquad \sum_{l=1}^{2L}\xi_l\,\big(\bar B_l\cdot\nu - R_l\big)(\eta_m) = -\big(\tilde B_N\cdot\nu\big)(\eta_m), \qquad \forall\,\eta_m \in \Gamma_N^{\mathrm{int}}. \tag{8.13}$$

This system may be written in the form $\mathcal{I}\,\xi = c$, where the matrix

$$\mathcal{I} := \begin{pmatrix}
\nabla\cdot\bar u_1(\eta_1) & \nabla\cdot\bar u_2(\eta_1) & \cdots & \nabla\cdot\bar u_{2L}(\eta_1) \\
\vdots & \vdots & & \vdots \\
\nabla\cdot\bar u_1(\eta_L) & \nabla\cdot\bar u_2(\eta_L) & \cdots & \nabla\cdot\bar u_{2L}(\eta_L) \\
(\bar B_1\cdot\nu - R_1)(\eta_1) & (\bar B_2\cdot\nu - R_2)(\eta_1) & \cdots & (\bar B_{2L}\cdot\nu - R_{2L})(\eta_1) \\
\vdots & \vdots & & \vdots \\
(\bar B_1\cdot\nu - R_1)(\eta_L) & (\bar B_2\cdot\nu - R_2)(\eta_L) & \cdots & (\bar B_{2L}\cdot\nu - R_{2L})(\eta_L)
\end{pmatrix}$$

is the Influence Matrix and

$$c = \big(-\nabla\cdot\tilde u_N(\eta_1),\ \dots,\ -\nabla\cdot\tilde u_N(\eta_L),\ -(\tilde B_N\cdot\nu)(\eta_1),\ \dots,\ -(\tilde B_N\cdot\nu)(\eta_L)\big)^T.$$

Unfortunately, a problem may arise when solving the influence-matrix system: in general the matrix $\mathcal{I}$ may fail to be invertible; experimentally, four zero singular values appear. Two situations may then occur: if $c$ lies in the image of $\mathcal{I}$ the system has solution(s), otherwise there exists no solution.

8.2 Implementation

The solution of the Stokes problem as proposed in the last subsection is quite complicated. In order to implement it as software, it is convenient to split it into four steps:

S1 Computation of nodes, Lagrange basis and differentiation matrices,
S2 Solution of the tilde-problems,
S3 Solution of the 2L problems arising from the bar-problem,
S4 Assembling the influence matrix and solving the final linear system.

We briefly describe each step below.

8.2.1 Step one

We aim to compute the vector $C := (c_h)_{h=1,\dots,N-1}$ of coefficients defined in (8.4); note that this is done working in only one variable, thanks to the structure of equation (8.4). To ensure a stable computation we use a Chebyshev-Vandermonde approach. Let

$$V := [T_j(x_i)]_{0\le j,i\le N}, \qquad \tilde V^l := [V_{i,j}]_{0\le j\le N-1,\ i\neq l}, \qquad v^l := (V_{l,j})_{0\le j\le N-1}.$$

We compute the coefficients of the Lagrange polynomial from the interpolation conditions and then evaluate it at the desired point, that is

$$l^{\{l\}}_N(x_l) = v^l\,(\tilde V^l)^{-1} e, \qquad e = (0, \dots, 0, 1)^T \in \mathbb{R}^N.$$

Obviously the solution $a^l = (\tilde V^l)^{-1} e$ of $\tilde V^l a = e$ should not be computed by inverting the matrix; a suitable linear solver must be chosen. Note that

$$v^l\,(\tilde V^l)^{-1} e = \big\langle (\tilde V^l)^{-T}(v^l)^T,\ e\big\rangle,$$

hence we only need to compute the last component of the solution $b^l$ of the linear system $(\tilde V^l)^T b^l = (v^l)^T$; note the transpose sign, due to the fact that $v^l$, by definition, is a row. We can use the QR algorithm: if we compute the factorization $(\tilde V^l)^T = QR$, then the last component $b^l(N)$ of $b^l$ is given by

$$b^l(N) = \frac{\big\langle Q_{:,N},\ (v^l)^T\big\rangle}{R_{N,N}}.$$

Here $R_{N,N}$ is the element of $R$ of indexes $N, N$ and $Q_{:,N}$ is the last column of $Q$. Finally,

$$c_h = \frac{l^{\{h\}}_N(x_h)}{1 - \cos\big(\frac{h\pi}{N}\big)} = \frac{l^{\{h\}}_N(x_h)}{2\sin^2\big(\frac{h\pi}{2N}\big)} = \frac{\big\langle Q_{:,N},\ (v^h)^T\big\rangle}{2\,R_{N,N}\,\sin^2\big(\frac{h\pi}{2N}\big)}.$$

All these coefficients, as well as the matrix $V$, need to be pre-computed and stored for future use.
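The Chebyshev-Vandermonde idea — recover a Lagrange polynomial's Chebyshev coefficients from its interpolation data, then differentiate or evaluate in coefficient space — can be sketched in a few lines. This version solves the system directly rather than through the QR route described above, and the function name is mine:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def lagrange_deriv_at(nodes, k, x):
    """Derivative at x of the Lagrange basis polynomial l_k on the given
    nodes: solve a Chebyshev-Vandermonde system V a = e_k for the Chebyshev
    coefficients of l_k, then differentiate in coefficient space."""
    n = len(nodes)
    V = C.chebvander(nodes, n - 1)      # V[i, j] = T_j(nodes[i])
    e_k = np.zeros(n)
    e_k[k] = 1.0                        # interpolation data l_k(x_i) = delta_ik
    a = np.linalg.solve(V, e_k)         # Chebyshev coefficients of l_k
    return C.chebval(x, C.chebder(a))
```

Because interpolation on $N+1$ nodes is exact on polynomials of degree at most $N$, these computed basis derivatives reproduce, e.g., the derivative of $x^3$ exactly up to rounding, which gives a cheap sanity check on the implementation.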
The partial differentiation operator can be represented in the space of frequencies by Chebyshev differentiation matrices. These matrices can be derived from the following identity:

$$\frac{d^p T_k(x)}{dx^p} = 2^p\,k \sum_{\substack{m=0 \\ k-p-m \ \mathrm{even}}}^{k-p}{}'\;\binom{\frac{k-m+p}{2}-1}{\frac{k-m-p}{2}}\,\frac{\big[\frac{k+m+p}{2}-1\big]!}{\big[\frac{k+m-p}{2}\big]!}\;T_m(x).$$

Here the prime symbol on the sum means that the $m = 0$ term, when it appears, must be halved. The formula simplifies considerably in the case $p = 1$:

$$\frac{dT_k(x)}{dx} = 2k \sum_{\substack{m=0 \\ k-1-m \ \mathrm{even}}}^{k-1}{}'\;T_m(x). \tag{8.14}$$

It follows from (8.14) that for any function $v \in \mathbb{P}_N\otimes\mathbb{P}_N$ that can be written as $v(x,y) = \sum_{0\le i,j\le N} c_{i,j}[v]\,T_i(x)\,T_j(y)$ we have

$$\partial_x v(x,y) = \sum_{0\le i,j\le N} c_{i,j}[v]\,\partial_x T_i(x)\,T_j(y) = \sum_{0\le m,j\le N} c_{m,j}[\partial_x v]\,T_m(x)\,T_j(y), \qquad c_{m,j}[\partial_x v] = \frac{1}{\bar c_m}\sum_{\substack{i=m+1 \\ i-m \ \mathrm{odd}}}^{N} 2i\,c_{i,j}[v],$$

$$\partial_y v(x,y) = \sum_{0\le i,j\le N} c_{i,j}[v]\,T_i(x)\,\partial_y T_j(y) = \sum_{0\le i,m\le N} c_{i,m}[\partial_y v]\,T_i(x)\,T_m(y), \qquad c_{i,m}[\partial_y v] = \frac{1}{\bar c_m}\sum_{\substack{j=m+1 \\ j-m \ \mathrm{odd}}}^{N} 2j\,c_{i,j}[v],$$

where $\bar c_0 = 2$ and $\bar c_m = 1$ for $m \ge 1$. We rewrite the above equations in matrix form, enclosing the conditions on the indexes in the entries $d^N_{i,h}$ of a matrix $D_N$:

$$c_{i,j}[\partial_x v] = \sum_{0\le h\le N} d^N_{i,h}\,c_{h,j}[v], \qquad c_{:,j}[\partial_x v] = D_N\,c_{:,j}[v], \tag{8.15}$$
$$c_{i,j}[\partial_y v] = \sum_{0\le k\le N} d^N_{j,k}\,c_{i,k}[v], \qquad c_{i,:}[\partial_y v] = \big(D_N\,c_{i,:}[v]^T\big)^T. \tag{8.16}$$

Finally, let us fix the standard lexicographical ordering for the basis, that is

$$\varphi_1(x,y) := T_0(x)T_0(y), \quad \varphi_2(x,y) := T_0(x)T_1(y), \ \dots, \ \varphi_{N+1}(x,y) = T_0(x)T_N(y), \ \varphi_{N+2}(x,y) = T_1(x)T_0(y), \ \dots, \ \varphi_{(N+1)^2}(x,y) := T_N(x)T_N(y).$$
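Formula (8.14) translates directly into the coefficient-space matrix $D_N$ of (8.15); a minimal sketch, with the same halving convention for the $m = 0$ row:

```python
import numpy as np

def cheb_coeff_diff_matrix(N):
    """Matrix D_N of (8.15): maps Chebyshev coefficients of v to those of
    its derivative, using T_k' = 2k * sum' T_m over m < k with k - m odd,
    where the prime halves the m = 0 contribution."""
    D = np.zeros((N + 1, N + 1))
    for k in range(1, N + 1):
        for m in range(k - 1, -1, -2):    # m = k-1, k-3, ... (k - m odd)
            D[m, k] = 2.0 * k
    D[0, :] *= 0.5                         # halve the m = 0 row
    return D
```

For instance, column 3 of the matrix encodes $T_3' = 6T_2 + 3T_0$, which is easy to verify by hand from $T_3 = 4x^3 - 3x$.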
We denote by $F[\,\cdot\,] : C([-1,1]^2) \to \mathbb{P}_N\otimes\mathbb{P}_N$, $F[v] = [c_{i,j}[v]]_{0\le i,j\le N}$, the analysis operator, and by $E$ its inverse, the synthesis. We also need the operators $I, J : \mathbb{R}^{(N+1)^2} \to M_{N+1}(\mathbb{R})$, where $I(v)$ is the matrix whose $(i+1, j+1)$ element is $v_{(N+1)i+j+1}$, while $J(v) = I(v)^T$; note that $I^{-1}$ corresponds to the Matlab (:) operator. For $v \in \mathbb{P}_N\otimes\mathbb{P}_N$ we get

$$\partial_x v = E\big(D_N\,F[v]\big), \tag{8.17}$$
$$\partial_y v = E\big((D_N\,F[v]^T)^T\big). \tag{8.18}$$

Analogous formulas for the higher-order derivatives can be derived simply by iterating these last equations. All these matrices are termed differentiation matrices: they need to be computed once and for all and stored to be used many times in the next steps.

Since we are imposing both the differential and the boundary equations in a strong sense, we need to consider the grid values of functions in place of the formal sums computed by $E$. Hence we fix a numbering of the interpolation nodes in which the first $L$ nodes are those of $\Gamma_N^{\mathrm{int}}$, we identify any function $v \in \mathbb{P}_N\otimes\mathbb{P}_N$ with $v_N := [v(\eta_m)]_{1\le m\le (N+1)^2}$, and we replace $E$ in equation (8.17) with the evaluation operator $V \in GL_{(N+1)^2}(\mathbb{R})$, which is simply the Vandermonde matrix of order $(N+1)^2$ computed at the points of $\overline\Omega_N$ with the above-mentioned ordering of the nodes:

$$V := [\varphi_j(\eta^1_i, \eta^2_i)]_{1\le i,j\le (N+1)^2}.$$

Moreover, the analysis can be represented as $F = I \circ V^{-1}$, while the evaluation operator acting on coefficients arranged in a matrix is $V \circ I^{-1}$. Thus equations (8.17) and (8.18) lead to the equations for differentiation in physical space:

$$(\partial_x v(\eta_m))_m = V\,I^{-1}\big(D_N\,I(V^{-1} v_N)\big), \qquad (\partial_y v(\eta_m))_m = V\,J^{-1}\big(D_N\,J(V^{-1} v_N)\big).$$

We can find matrices $D_I, D_J \in M_{(N+1)^2}(\mathbb{R})$ representing the linear operations $I^{-1}(D_N\,I(\cdot))$ and $J^{-1}(D_N\,J(\cdot))$ respectively, so that

$$(\partial_x v(\eta_m))_m = V\,D_I\,V^{-1}\,v_N, \tag{8.19}$$
$$(\partial_y v(\eta_m))_m = V\,D_J\,V^{-1}\,v_N. \tag{8.20}$$
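The conjugation trick of (8.19) — map grid values to coefficients, differentiate there, map back — is easiest to see in one dimension, where the analogue of (8.19) is $\hat D = V\,D_N\,V^{-1}$ acting on grid values. The sketch below builds that matrix (NumPy's `chebder` plays the role of $D_N$) and then uses it the way the implementation later uses (8.21)–(8.24): Helmholtz rows at the interior nodes, Dirichlet rows at the boundary. All names are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def nodal_diff_matrix(N):
    """1-D analogue of (8.19): differentiation of grid values at the
    Chebyshev-Lobatto nodes via V * D * V^{-1}, with V the Chebyshev
    evaluation (Vandermonde) matrix and D the coefficient-space
    differentiation matrix."""
    x = np.cos(np.pi * np.arange(N + 1) / N)    # x[0] = 1, x[N] = -1
    V = C.chebvander(x, N)                      # V[i, j] = T_j(x_i)
    D = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        e = np.zeros(N + 1)
        e[k] = 1.0
        D[:, k] = np.pad(C.chebder(e), (0, 1))  # coefficients of T_k', padded
    return V @ D @ np.linalg.inv(V), x

def solve_helmholtz_1d(N, sigma, f, g_plus, g_minus):
    """1-D sketch of a strong collocation solve: -v'' + sigma v = f at the
    interior nodes, Dirichlet rows v(1) = g_plus, v(-1) = g_minus."""
    Dhat, x = nodal_diff_matrix(N)
    A = -Dhat @ Dhat + sigma * np.eye(N + 1)
    rhs = f(x).astype(float)
    A[0, :] = 0.0; A[0, 0] = 1.0; rhs[0] = g_plus
    A[N, :] = 0.0; A[N, N] = 1.0; rhs[N] = g_minus
    return np.linalg.solve(A, rhs), x
```

For smooth data the error decays spectrally in $N$; with the manufactured solution $v(x) = \cos(\pi x/2)$, already $N = 16$ reproduces $v$ to near machine precision.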
Similar relations for higher-order partial derivatives are easily obtained:

$$(\partial^2_{xx} v(\eta_m))_m = V\,(D_I)^2\,V^{-1}\,v_N, \tag{8.21}$$
$$(\partial^2_{yy} v(\eta_m))_m = V\,(D_J)^2\,V^{-1}\,v_N. \tag{8.22}$$

We will also need to compute the divergence of scalar and vector fields; to this aim we use the operators

$$\operatorname{div} v_N := V\,(D_I + D_J)\,V^{-1}\,v_N \qquad\text{and}\qquad \operatorname{Div} v_N := V\,\big(D_I\,V^{-1}v_N^1 + D_J\,V^{-1}v_N^2\big).$$

8.2.2 Steps two and three

If we go back to Subsection 8.1.4, we can easily check that steps S2 and S3 consist of solving a large number of differential problems, each of the form

$$H_\sigma v = f \ \text{ in } \Omega_N, \qquad v = g \ \text{ on } \Gamma_N, \tag{8.23}$$

where $f$, $g$, $\sigma$ may vanish. Note that we have formulated this last equation in scalar form: each vector differential problem of Subsection 8.1.4 needs first to be rewritten componentwise in this form. Now recall that, using (8.21) and (8.22), we can represent the evaluation of the Helmholtz operator $H_\sigma$ on $\Omega_N$ by a composition of matrices. More precisely, denoting by $V_\Omega$ the matrix $V$ with the rows relative to the boundary nodes removed, we have

$$H_\sigma v = V_\Omega\big(\sigma I - (D_I)^2 - (D_J)^2\big)V^{-1}\,v_N.$$

Thus the above system is equivalent to

$$\begin{pmatrix} V_\Omega\big(\sigma I - (D_I)^2 - (D_J)^2\big)V^{-1} \\ V_\Gamma \end{pmatrix} \begin{pmatrix} v(\eta_1) \\ \vdots \\ v(\eta_{(N+1)^2}) \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}. \tag{8.24}$$

Here $V_\Gamma$ is the matrix obtained from $V$ by deleting all rows not corresponding to nodes of $\Gamma_N$. It is clear that the (rather large and) sparse matrix $\mathcal{L} := V_\Omega\big((D_I)^2 + (D_J)^2\big)V^{-1}$ must be computed once and for all and stored. The solution of the system may be carried out in several ways: the QR algorithm may be a good choice, but iterative methods should be considered as well, due to the sparsity of the system.

Let us recall that (tilde-problem I) and (tilde-problem II) lead to one scalar and one vector problem, while the $l$-problems consist of one scalar and one vector problem each. Hence
the total number of systems of type (8.24) is $6L + 3 = 24N - 21$; but we should also take into account the evaluations of the divergence of the numerical flux (which have an explicit solution, see (8.4) and (8.12)). The possibility of calculating the solutions by parallel computing should be considered.

8.2.3 Step four

To assemble the influence matrix we use the discrete divergence operator $\operatorname{Div}$ in order to represent the divergence acting on $\mathbb{P}_N\otimes\mathbb{P}_N$. The matrix representation of $\operatorname{Div}$ has already been computed and stored at the preprocessing stage, step S1. We already mentioned that the influence matrix may fail to be invertible. We present two methods to cope with this issue: the first one, due to ..., is a spectral regularizing technique that leads to recovering a true solution if one exists; the second is a mixed approach between exact solution and least-squares approximation.

Regularization method. We compute the singular value decomposition

$$\mathcal{I} = U^T\Sigma V, \qquad \Sigma = \operatorname{diag}(\sigma), \qquad \sigma = (\sigma_1, \dots, \sigma_{2L-4}, 0, 0, 0, 0)^T,$$

and we replace $\mathcal{I}$ by the regularized matrix

$$\mathcal{I}_\lambda := U^T\Sigma_\lambda V, \qquad \Sigma_\lambda = \operatorname{diag}(\sigma_\lambda), \qquad \sigma_\lambda = (\sigma_1, \dots, \sigma_{2L-4}, \lambda, \lambda, \lambda, \lambda)^T.$$

Then we solve the system $\mathcal{I}_\lambda\,\xi = c$. Note that if the original system admits a true solution, then the solution of the regularized system solves the original system as well.

Mixed approach. We divide the influence matrix $\mathcal{I} \in M_{2L\times 2L}(\mathbb{R})$ into two sub-matrices $A, B \in M_{L\times 2L}(\mathbb{R})$,

$$\mathcal{I} = \begin{pmatrix} A \\ B \end{pmatrix},$$

where $A$ collects the rows $[\nabla\cdot\bar u_l(\eta_m)]_{m=1,\dots,L}$ and $B$ the rows $[(\bar B_l\cdot\nu - R_l)(\eta_m)]_{m=1,\dots,L}$. We solve exactly the problem $A\xi = c^1 := (c_1, \dots, c_L)^T$. To do that we use the QR factorization

$$A^T = QR = Q\begin{pmatrix} R_1 \\ O \end{pmatrix}, \qquad\text{so that}\qquad A = R^T Q^T = \begin{pmatrix} R_1^T & O \end{pmatrix} Q^T.$$
We get

$$\xi^1 := (\xi_1, \dots, \xi_M)^T = Q_{:,1:M}\,R_1^{-T}\,c^1, \qquad M := \operatorname{Rank} A.$$

Then we split the matrix $B$ as $B = [B_1, B_2]$ with $B_1 \in M_{L\times M}(\mathbb{R})$, so that the condition $B\xi = c^2$ is equivalent to

$$B_1\xi^1 + B_2\xi^2 = c^2 \iff B_2\xi^2 = c^2 - B_1\xi^1.$$

This last equation is finally solved with respect to the variable $\xi^2$ in the least-squares sense.
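Both remedies can be sketched in a few lines. In the first function, `n_zero=4` mirrors the four experimentally observed zero singular values, but it is left as a parameter. The second function is one way to realize the mixed approach, under the assumption that $A$ has full rank $L$; the null-space parametrization used here is an implementation choice, whereas the text instead splits $\xi$ into leading and trailing blocks:

```python
import numpy as np

def regularized_solve(I_mat, c, lam, n_zero=4):
    """Spectral regularization: replace the n_zero smallest singular values
    by lam and solve the regularized system.  If c lies in the image of the
    original matrix, the result also solves the original system."""
    U, s, Vt = np.linalg.svd(I_mat)
    s_reg = s.copy()
    s_reg[-n_zero:] = lam                 # trailing singular values are smallest
    return np.linalg.solve(U @ np.diag(s_reg) @ Vt, c)

def mixed_solve(I_mat, c, L):
    """Mixed approach, sketched: enforce the top block A xi = c1 exactly
    (minimum-norm solution via the QR factorization A^T = Q R), then use
    the remaining freedom -- the null space of A -- to satisfy the bottom
    block B xi = c2 in the least-squares sense.  Assumes rank A = L."""
    A, B = I_mat[:L, :], I_mat[L:, :]
    c1, c2 = c[:L], c[L:]
    Q, R = np.linalg.qr(A.T, mode='complete')          # A^T = Q R, A = R^T Q^T
    xi0 = Q[:, :L] @ np.linalg.solve(R[:L, :].T, c1)   # exact on the top block
    Z = Q[:, L:]                                       # orthonormal basis of null(A)
    eta, *_ = np.linalg.lstsq(B @ Z, c2 - B @ xi0, rcond=None)
    return xi0 + Z @ eta
```

When the full system happens to be solvable, both routes recover an exact solution; when it is not, the regularized solve damps the four problematic directions while the mixed solve keeps the divergence conditions exact and compromises only on the flux conditions.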
More information. = V c = V [x]v (5.1) c 1. c k
Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,
More informationVector, Matrix, and Tensor Derivatives
Vector, Matrix, and Tensor Derivatives Erik Learned-Miller The purpose of this document is to help you learn to take derivatives of vectors, matrices, and higher order tensors (arrays with three dimensions
More informationMath background. Physics. Simulation. Related phenomena. Frontiers in graphics. Rigid fluids
Fluid dynamics Math background Physics Simulation Related phenomena Frontiers in graphics Rigid fluids Fields Domain Ω R2 Scalar field f :Ω R Vector field f : Ω R2 Types of derivatives Derivatives measure
More informationGeneralised Summation-by-Parts Operators and Variable Coefficients
Institute Computational Mathematics Generalised Summation-by-Parts Operators and Variable Coefficients arxiv:1705.10541v [math.na] 16 Feb 018 Hendrik Ranocha 14th November 017 High-order methods for conservation
More informationCaltech Ph106 Fall 2001
Caltech h106 Fall 2001 ath for physicists: differential forms Disclaimer: this is a first draft, so a few signs might be off. 1 Basic properties Differential forms come up in various parts of theoretical
More informationSymmetry Methods for Differential and Difference Equations. Peter Hydon
Lecture 2: How to find Lie symmetries Symmetry Methods for Differential and Difference Equations Peter Hydon University of Kent Outline 1 Reduction of order for ODEs and O Es 2 The infinitesimal generator
More information1 Exercise: Linear, incompressible Stokes flow with FE
Figure 1: Pressure and velocity solution for a sinking, fluid slab impinging on viscosity contrast problem. 1 Exercise: Linear, incompressible Stokes flow with FE Reading Hughes (2000), sec. 4.2-4.4 Dabrowski
More informationSpectral Element Methods (SEM) May 12, D Wave Equations (Strong and Weak form)
Spectral Element Methods (SEM) May 12, 2009 1 3-D Wave Equations (Strong and Weak form) Strong form (PDEs): ρ 2 t s = T + F in (1) Multiply a global test function W(x) to both sides and integrate over
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 34: Improving the Condition Number of the Interpolation Matrix Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu
More informationIntroduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim
Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its
More informationEfficient hp-finite elements
Efficient hp-finite elements Ammon Washburn July 31, 2015 Abstract Ways to make an hp-finite element method efficient are presented. Standard FEMs and hp-fems with their various strengths and weaknesses
More informationTensors - Lecture 4. cos(β) sin(β) sin(β) cos(β) 0
1 Introduction Tensors - Lecture 4 The concept of a tensor is derived from considering the properties of a function under a transformation of the corrdinate system. As previously discussed, such transformations
More informationNumerical Methods for Inverse Kinematics
Numerical Methods for Inverse Kinematics Niels Joubert, UC Berkeley, CS184 2008-11-25 Inverse Kinematics is used to pose models by specifying endpoints of segments rather than individual joint angles.
More informationPhysics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory
Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three
More informationA Hybrid Method for the Wave Equation. beilina
A Hybrid Method for the Wave Equation http://www.math.unibas.ch/ beilina 1 The mathematical model The model problem is the wave equation 2 u t 2 = (a 2 u) + f, x Ω R 3, t > 0, (1) u(x, 0) = 0, x Ω, (2)
More informationContravariant and Covariant as Transforms
Contravariant and Covariant as Transforms There is a lot more behind the concepts of contravariant and covariant tensors (of any rank) than the fact that their basis vectors are mutually orthogonal to
More informationWeek 1, solution to exercise 2
Week 1, solution to exercise 2 I. THE ACTION FOR CLASSICAL ELECTRODYNAMICS A. Maxwell s equations in relativistic form Maxwell s equations in vacuum and in natural units (c = 1) are, E=ρ, B t E=j (inhomogeneous),
More informationTopics in linear algebra
Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over
More informationLECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,
LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical
More informationA High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere
A High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere Michael Levy University of Colorado at Boulder Department of Applied Mathematics August 10, 2007 Outline 1 Background
More informationImplementation of the dg method
Implementation of the dg method Matteo Cicuttin CERMICS - École Nationale des Ponts et Chaussées Marne-la-Vallée March 6, 2017 Outline Model problem, fix notation Representing polynomials Computing integrals
More informationReview I: Interpolation
Review I: Interpolation Varun Shankar January, 206 Introduction In this document, we review interpolation by polynomials. Unlike many reviews, we will not stop there: we will discuss how to differentiate
More informationA note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations
A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations S. Hussain, F. Schieweck, S. Turek Abstract In this note, we extend our recent work for
More informationIntroduction and Vectors Lecture 1
1 Introduction Introduction and Vectors Lecture 1 This is a course on classical Electromagnetism. It is the foundation for more advanced courses in modern physics. All physics of the modern era, from quantum
More informationHomework 4 in 5C1212; Part A: Incompressible Navier- Stokes, Finite Volume Methods
Homework 4 in 5C11; Part A: Incompressible Navier- Stokes, Finite Volume Methods Consider the incompressible Navier Stokes in two dimensions u x + v y = 0 u t + (u ) x + (uv) y + p x = 1 Re u + f (1) v
More informationRobotics I. February 6, 2014
Robotics I February 6, 214 Exercise 1 A pan-tilt 1 camera sensor, such as the commercial webcams in Fig. 1, is mounted on the fixed base of a robot manipulator and is used for pointing at a (point-wise)
More information1 Matrices and Systems of Linear Equations. a 1n a 2n
March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationINTERPOLATION. and y i = cos x i, i = 0, 1, 2 This gives us the three points. Now find a quadratic polynomial. p(x) = a 0 + a 1 x + a 2 x 2.
INTERPOLATION Interpolation is a process of finding a formula (often a polynomial) whose graph will pass through a given set of points (x, y). As an example, consider defining and x 0 = 0, x 1 = π/4, x
More informationComputational Fluid Dynamics Prof. Dr. Suman Chakraborty Department of Mechanical Engineering Indian Institute of Technology, Kharagpur
Computational Fluid Dynamics Prof. Dr. Suman Chakraborty Department of Mechanical Engineering Indian Institute of Technology, Kharagpur Lecture No. #12 Fundamentals of Discretization: Finite Volume Method
More informationSpline Element Method for Partial Differential Equations
for Partial Differential Equations Department of Mathematical Sciences Northern Illinois University 2009 Multivariate Splines Summer School, Summer 2009 Outline 1 Why multivariate splines for PDEs? Motivation
More informationChapter 2 Algorithms for Periodic Functions
Chapter 2 Algorithms for Periodic Functions In this chapter we show how to compute the Discrete Fourier Transform using a Fast Fourier Transform (FFT) algorithm, including not-so special case situations
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More information256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.
56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =
More informationVectors. January 13, 2013
Vectors January 13, 2013 The simplest tensors are scalars, which are the measurable quantities of a theory, left invariant by symmetry transformations. By far the most common non-scalars are the vectors,
More informationTutorial on Principal Component Analysis
Tutorial on Principal Component Analysis Copyright c 1997, 2003 Javier R. Movellan. This is an open source document. Permission is granted to copy, distribute and/or modify this document under the terms
More informationMultistage Methods I: Runge-Kutta Methods
Multistage Methods I: Runge-Kutta Methods Varun Shankar January, 0 Introduction Previously, we saw that explicit multistep methods (AB methods) have shrinking stability regions as their orders are increased.
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 43: RBF-PS Methods in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 43 1 Outline
More informationQuantum Field Theory Notes. Ryan D. Reece
Quantum Field Theory Notes Ryan D. Reece November 27, 2007 Chapter 1 Preliminaries 1.1 Overview of Special Relativity 1.1.1 Lorentz Boosts Searches in the later part 19th century for the coordinate transformation
More informationKey words. PDE-constrained optimization, deferred correction, time-dependent PDE, coupled system
A RATIONAL DEFERRED CORRECTION APPROACH TO PDE-CONSTRAINED OPTIMIZATION STEFAN GÜTTEL AND JOHN W PEARSON Abstract The accurate and efficient solution of time-dependent PDE-constrained optimization problems
More informationChapter 3 Stress, Strain, Virtual Power and Conservation Principles
Chapter 3 Stress, Strain, irtual Power and Conservation Principles 1 Introduction Stress and strain are key concepts in the analytical characterization of the mechanical state of a solid body. While stress
More informationOpen boundary conditions in numerical simulations of unsteady incompressible flow
Open boundary conditions in numerical simulations of unsteady incompressible flow M. P. Kirkpatrick S. W. Armfield Abstract In numerical simulations of unsteady incompressible flow, mass conservation can
More informationMath 5311 Constrained Optimization Notes
ath 5311 Constrained Optimization otes February 5, 2009 1 Equality-constrained optimization Real-world optimization problems frequently have constraints on their variables. Constraints may be equality
More informationa(b + c) = ab + ac a, b, c k
Lecture 2. The Categories of Vector Spaces PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 2. We discuss the categories of vector spaces and linear maps. Since vector spaces are always
More information1 Fundamentals. 1.1 Overview. 1.2 Units: Physics 704 Spring 2018
Physics 704 Spring 2018 1 Fundamentals 1.1 Overview The objective of this course is: to determine and fields in various physical systems and the forces and/or torques resulting from them. The domain of
More informationAN ALGORITHM FOR COMPUTING FUNDAMENTAL SOLUTIONS OF DIFFERENCE OPERATORS
AN ALGORITHM FOR COMPUTING FUNDAMENTAL SOLUTIONS OF DIFFERENCE OPERATORS HENRIK BRANDÉN, AND PER SUNDQVIST Abstract We propose an FFT-based algorithm for computing fundamental solutions of difference operators
More information