Effective Filtering and Interpolation of 2D Discrete Velocity Fields with Navier-Stokes Equations

Louis-Philippe Saumier, Boualem Khouider and Martial Agueh
Department of Mathematics and Statistics, University of Victoria (lsaumier@uvic.ca, khouider@uvic.ca, agueh@uvic.ca)

August 2, 2016

Abstract. We introduce a new variational technique to interpolate and filter a two-dimensional velocity vector field which is discretely sampled in a region of $\mathbb{R}^2$ and sampled only once in time, on a small time interval $[0, \bar{t}\,]$. The main idea is to find a solution of the Navier-Stokes equations that is closest to a prescribed field in the sense that it minimizes the $\ell^2$ norm of the difference between this solution and the target field. The minimization is performed on the initial vorticity by expanding it into radial basis functions of Gaussian type, with a fixed size expressed by a parameter $\epsilon$. In addition, a penalty term with parameter $k_e$ is added to the minimizing functional in order to select a solution with a small kinetic energy. This additional term makes the minimizing functional strongly convex, and therefore ensures that the minimization problem is well-posed. The interplay between the parameters $k_e$ and $\epsilon$ effectively contributes to smoothing the discrete velocity field, as demonstrated by the numerical experiments on synthetic and real data.

1 Introduction

There are many instances where one needs to filter and interpolate a discretely sampled velocity vector field. For example, in the context of Particle Image Velocimetry (PIV), tracer particles are seeded in a fluid and are
illuminated with a pulsing laser [13, 21]. A velocity vector field is then created from a sequence of images capturing the light scattered back by the particles to approximate the fluid flow. The field obtained is typically defined on a non-uniform grid given either by the interrogation windows for cross-correlation algorithms or by the particle locations in particle tracking algorithms [15, 16, 22]. In addition, the approximation given is usually stationary on the time interval $[0, \bar{t}\,]$ between two successive images. It may also contain corrupted vectors which need to be corrected.

Many methods have been developed to interpolate discretely sampled velocity fields in the context of fluid flows. While direct approaches such as splines [18] or triangulations [17] may be used, velocity estimates can also be obtained from the solution of various inverse problems for parameters of the flow [6]. Given that in many cases the discrete field under study is likely to contain noisy data, one may formulate the inverse problem more specifically as a data assimilation problem. Data assimilation - the process by which observations of a system are incorporated into a model of the system - is typically employed in numerical weather forecasting, hydrology or geology. It can be viewed as a specific type of inverse problem [2, 5], and it has been used, for example, to reconstruct the circulation of the ocean from the positions of drifting floats [1, 11].

Out of the various data assimilation methods developed for fluid flows [2, 8], variational approaches have proven successful for velocity field extension and filtering using the Navier-Stokes equations [7, 12, 14, 20]. While some of these methods have been developed to handle fields sampled at multiple points in time (for example, this allows one to process entire sequences of PIV images [12, 14]), we focus here on fields which are given at only one point in time. The method we present could thus be used on its own, or, for example, to initialize more involved algorithms taking into account fields given at multiple points in time. In addition, as opposed to [12, 14], we do not assume that any other information is available (for example, additional information from the PIV images). We rely solely on the discrete vector field in order to make the technique applicable to a potentially wider range of problems. In [20], the authors considered a similar situation where a corrupted velocity field is available only at one point in time. However, they assumed the flow was quasi-stationary and thus neglected the time derivatives. Our goal in this paper is to design a method which allows non-stationary flows and which also performs well when only a few vectors are available, i.e.
when the non-uniform grid on which the discrete field is defined is sparse.

The new technique we propose finds the closest (in a certain sense) Navier-Stokes solution to a two-dimensional velocity field which is discretely sampled in a region $\Omega \subset \mathbb{R}^2$ and sampled only once in time, on a small time interval $[0, \bar{t}\,]$. More specifically, we use the vorticity form of the Navier-Stokes equations combined with a variational formulation of the error between the discretely sampled field and the target field. The functional corresponding to this error is minimized with respect to the initial vorticity $\omega_0$ in the Navier-Stokes equations. However, on its own, this functional is convex but not strictly convex with respect to $\omega_0$, and thus the minimization problem may not be well-posed. We therefore add a penalty term consisting of the kinetic energy of the solution, with associated penalty parameter $k_e$. This penalty term has two important features. First, it makes the functional strongly convex on a time interval $[0, \bar{t}\,]$ for $\bar{t}$ small enough. Second, it gives a bias towards a minimizer with a smaller kinetic energy, thus preventing unwanted small-scale vortices from forming when sparse grids are used. We use an approach similar to the vortex blob method [9], which consists in expanding the initial vorticity $\omega_0$ into radial basis functions. Given the discrete sampling of velocity at some time $t^* \in [0, \bar{t}\,]$, the inverse problem considered is thus to recover a good approximation of the initial vorticity in terms of this expansion. For these basis functions, we use Gaussian blobs of fixed width given by another parameter $\epsilon$, and solve the corresponding optimization problem with Fourier transforms and a quasi-Newton method. The interplay between the parameters $k_e$ and $\epsilon$ contributes to filtering the vector field from noisy measurements. Numerical experiments demonstrate that with the appropriate combination of parameters $k_e$ and $\epsilon$, one can obtain minimizers which are good approximations of the underlying target velocity field.

This paper is organized as follows. Section 2 introduces the optimization problem. In section 3, we show why the associated functional is strongly convex, and present a linearization procedure. Then, the numerical algorithm we employ to solve the problem is given in section 4, and the results of numerical experiments are displayed in section 5. Finally, we give concluding remarks in section 6.

2 The Optimization Problem

Let $v_D$ be a discrete velocity field defined on the set of discrete points $\{q_1, q_2, \ldots, q_{N_q}\} \subset \Omega$, where for simplicity we select $\Omega = [0, 1]^2$. Ideally, the resulting field $v_E(x, t)$ obtained by extending $v_D$ to the entire physical domain $\Omega$ and to the time interval $[0, \bar{t}\,]$ would be a solution of the incompressible Navier-Stokes equations on $\Omega$:
\[
\frac{Dv}{Dt} = -\nabla p + \nu \Delta v, \qquad \mathrm{div}(v) = 0, \qquad v(x, 0) = v_0(x), \qquad x \in \Omega,\ t \in [0, \bar{t}\,], \tag{1}
\]
where $D/Dt := \partial/\partial t + (v \cdot \nabla)$ is the material derivative, $p$ is the fluid's pressure, $\nu$ the viscosity and $v_0(x)$ is the initial velocity field, which is unknown. For the sake of simplicity, we will assume this system is equipped with periodic boundary conditions in our numerical experiments in sections 4 and 5. However, the theory presented in this section and the next one remains valid for any other boundary conditions that make the problem in (1) well-posed.

There might be many solutions of (1) that coincide (at some given time $t^*$ in $[0, \bar{t}\,]$) with the prescribed vectors $v_D(q_i)$, $i = 1, \ldots, N_q$. This is especially likely to be the case when the non-uniform grid given by these locations is sparse; the diameter of the associated set of Navier-Stokes solutions can be very large. One might therefore want to add a constraint to select a solution $v_E$ which does not add too many artificial features to the field. A natural choice is to pick a solution with a small kinetic energy
\[
\int_0^{\bar{t}} \int_\Omega |v(x, t)|^2 \, dx \, dt. \tag{2}
\]
Also, given that $v_D$ may possibly contain erroneous vectors, it might be too stringent a condition to perfectly match $v_E$ and $v_D$ at some time(s) $t^*$ on $[0, \bar{t}\,]$. We thus choose to minimize the total error
\[
\sum_{i=1}^{N_q} |v(q_i, t^*) - v_D(q_i)|^2. \tag{3}
\]
To summarize, we are looking for a spatial and temporal extension $v_E(x, t)$ of $v_D$ on $\Omega \times [0, \bar{t}\,]$ which solves (1) and minimizes the sum of (2) and (3).
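To make the two competing terms concrete, the following is a minimal NumPy sketch of discrete versions of the kinetic energy (2) (evaluated at a single time) and the misfit (3), assuming the candidate velocity field is available on a uniform grid over $\Omega = [0,1]^2$; the function names and array layouts are ours, not the paper's.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def kinetic_energy(v0, dx):
    """Discrete version of (2) at a single time: sum of |v_0|^2 over the grid
    times the cell area.  v0 has shape (2, N1, N2)."""
    return np.sum(v0[0]**2 + v0[1]**2) * dx**2

def data_misfit(v_tstar, grid_x, grid_y, q, vD):
    """Discrete version of (3): interpolate the candidate field v(., t*) at the
    scattered sample points q_i and sum the squared differences with v_D(q_i).
    v_tstar has shape (2, N1, N2); q and vD have shape (Nq, 2)."""
    interp_u = RegularGridInterpolator((grid_x, grid_y), v_tstar[0])
    interp_v = RegularGridInterpolator((grid_x, grid_y), v_tstar[1])
    du = interp_u(q) - vD[:, 0]
    dv = interp_v(q) - vD[:, 1]
    return np.sum(du**2 + dv**2)
```

In the actual method the candidate field at time $t^*$ is obtained by evolving the Navier-Stokes equations from a parametrized initial vorticity, as described next.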

Let us therefore build an optimization problem to find such a vector field $v_E$. First, we point out that it is sufficient to minimize the initial kinetic energy of $v_E$, since the kinetic energy (2) of the solution to the Navier-Stokes equations (1) is time-decreasing [10]. Also, one common and easier way to solve the Navier-Stokes equations in 2D is to consider the associated vorticity equation
\[
\frac{D\omega}{Dt} = \nu \Delta \omega, \qquad \omega(x, 0) = \omega_0(x), \qquad x \in \Omega,\ t \in [0, \bar{t}\,], \tag{4}
\]
where $D/Dt := \partial/\partial t + (v \cdot \nabla)$ is still the material derivative and $\omega = \mathrm{curl}\, v$ is the vorticity of the field [10]. Again, we do not yet specify boundary conditions for $\Omega$, but we assume that the conditions selected make (4) well-posed. The velocity $v$ is then determined from the vorticity, using the Biot-Savart law [10]
\[
v(x, t) = \int_\Omega K_2(x - y)\, \omega(y, t) \, dy,
\]
where $K_2$ is the 2D Biot-Savart kernel on $\Omega$, and the pressure is determined by the Poisson equation
\[
-\Delta p = \mathrm{tr}(\nabla v)^2 = \sum_{1 \le i, j \le 2} \frac{\partial v_i}{\partial x_j} \frac{\partial v_j}{\partial x_i}.
\]
If we let $\psi$ be the stream function for the fluid, then the Biot-Savart law is derived from the combination of the relationship between the velocity and the stream function
\[
v = \nabla^\perp \psi = \left( -\frac{\partial \psi}{\partial x_2},\; \frac{\partial \psi}{\partial x_1} \right)
\]
and the relationship between the vorticity and the stream function, $\omega = \Delta \psi$. For simplicity, we will therefore sometimes denote $v = \nabla^\perp \Delta^{-1} \omega$ in the next sections.

Let us now introduce the following optimization problem:

Minimization Problem:
\[
\inf \Big\{ F^{t^*}(\omega_0) := k_e \int_\Omega |v_0(x)|^2 \, dx + \sum_{i=1}^{N_q} |v(q_i, t^*) - v_D(q_i)|^2 \;:\; \omega_0 \in C^2(\Omega) \Big\}, \tag{5}
\]
where $v_0$ and $v$ depend on $\omega_0$ through $v = \nabla^\perp \Delta^{-1} \omega$ and (4), and $t^* > 0$, $k_e > 0$ are constants. Essentially, the goal here is to find, among all suitable initial vorticities on $\Omega$, the one that will give the velocity field $v_E$ which minimizes $F^{t^*}$. The space $C^2(\Omega)$ is selected here so that (4) is defined in the classical sense.

To obtain a numerical solution of (5), we use an expansion of the initial vorticity in radial basis functions
\[
\omega_0(x) = \sum_{i=1}^{N_b} \alpha_i \, \frac{e^{-|x - b_i|^2/\epsilon^2}}{\epsilon^2}, \tag{6}
\]
akin to the one used in the context of vortex blob methods [9]. Here, $b_i$ is the center of blob $i$ ($b_i \neq b_j$ for $i \neq j$), $N_b$ is the number of blobs and $\epsilon > 0$ is a parameter controlling the size of the blobs. With this expansion, (5) becomes a minimization problem in $\mathbb{R}^{N_b}$ for the weights $\alpha_i$, $i = 1, \ldots, N_b$. We will present a proof of the strong convexity of $F^{t^*}(\alpha) := F^{t^*}(\omega_0)$ with respect to the weights $\alpha = (\alpha_1, \ldots, \alpha_{N_b})$ for small $t^*$ and for $k_e > 0$ in the next section, but for now we assume that $F^{t^*}(\alpha)$ has a unique minimizer in $\mathbb{R}^{N_b}$.

Besides making the functional $F^{t^*}$ strongly convex for small $t^*$, the parameter $k_e$ allows flexibility in this solution by controlling the size of the kinetic energy with respect to the error associated with the prescribed field. Indeed, a solution corresponding to a small $k_e$ should be very faithful to $v_D$ but may have too much kinetic energy and thus may present too many turbulent eddies. On the other hand, a solution associated with a larger $k_e$ will have less kinetic energy, at the expense of allowing $v_E$ to be further from $v_D$ at the particle locations. If the time $t^* \in [0, \bar{t}\,]$ at which we want the vector fields to be close to each other is unknown, or if the discrete vector field $v_D$ is a stationary approximation of the real physical velocity on $[0, \bar{t}\,]$, we can select $t^* = \bar{t}/2$ for numerical experiments. One can thus think of (3) as a midpoint approximation of the time integral of this error term over the interval $[0, \bar{t}\,]$. Also, by considering the minimization of the total error (3), one allows the optimal solution not to pass through every single $v_D(q_i)$ and thus potentially smooth out some of the noise.
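As an illustration of the expansion (6) and of the operator $\nabla^\perp \Delta^{-1}$ used throughout, here is a small NumPy sketch that evaluates a Gaussian-blob vorticity on a uniform grid and recovers the corresponding velocity with an FFT-based periodic Biot-Savart law (this anticipates the periodic formulation (15) used in section 4). It is a sketch under assumptions: the helper names are ours, and the blobs are simply evaluated on the periodic box rather than embedded in the larger numerical domain the authors use.

```python
import numpy as np

def blob_vorticity(alpha, centers, eps, X, Y):
    """Initial vorticity (6): a weighted sum of Gaussian blobs of width eps centred
    at the points b_i, evaluated on the meshgrid arrays X, Y."""
    w0 = np.zeros_like(X)
    for a, (bx, by) in zip(alpha, centers):
        w0 += a * np.exp(-((X - bx)**2 + (Y - by)**2) / eps**2) / eps**2
    return w0

def velocity_from_vorticity(w, L=1.0):
    """Periodic Biot-Savart law: v = grad-perp of the inverse Laplacian of w,
    computed with FFTs on a periodic box of side L (the k = 0 mode is dropped)."""
    n1, n2 = w.shape
    kx = 2 * np.pi * np.fft.fftfreq(n1, d=L / n1)
    ky = 2 * np.pi * np.fft.fftfreq(n2, d=L / n2)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid division by zero; this mode is zeroed below
    psi_hat = -np.fft.fft2(w) / k2      # solve Laplacian(psi) = w
    psi_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(-1j * KY * psi_hat))   # v_1 = -d(psi)/dx_2
    v = np.real(np.fft.ifft2(1j * KX * psi_hat))    # v_2 =  d(psi)/dx_1
    return u, v

# Example: a single blob of weight 0.1 at the centre of the box.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
w0 = blob_vorticity([0.1], [(0.5, 0.5)], eps=0.05, X=X, Y=Y)
u0, v0 = velocity_from_vorticity(w0)
```

A single blob produces the familiar single-vortex velocity field; superposing several blobs with different weights gives the richer initial conditions over which the optimization is carried out.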

3 Convexity and Linearization

Let us now comment on the convexity of $F^{t^*}$ with respect to $\alpha$. To ease notation, we introduce the two functionals
\[
F_1(\omega_0) := \int_\Omega |v_0(x)|^2 \, dx = \int_\Omega |\nabla^\perp \Delta^{-1} \omega_0(x)|^2 \, dx,
\]
\[
F_2^{t^*}(\omega_0) := \sum_{i=1}^{N_q} |v(q_i, t^*) - v_D(q_i)|^2 = \sum_{i=1}^{N_q} |\nabla^\perp \Delta^{-1} \omega(q_i, t^*) - v_D(q_i)|^2,
\]
and we define $F_1(\alpha) := F_1(\omega_0)$ and $F_2^{t^*}(\alpha) := F_2^{t^*}(\omega_0)$ for $\omega_0$ given by (6). Recall the definition of a strongly convex function:

Definition 1. A function $F(z) : \mathbb{R}^n \to \mathbb{R}$ is said to be strongly convex if there is a constant $\lambda > 0$ such that the function $g(z) = F(z) - \lambda |z|^2$ is convex on $\mathbb{R}^n$. In that case we call $\lambda$ the modulus of strong convexity of $F$.

We have the following theorem for the strong convexity of $F^{t^*}$:

Theorem 2. For $k_e > 0$, and for a small enough $t^* > 0$, $F^{t^*}(\alpha)$ is a strongly convex function on $\mathbb{R}^{N_b}$. The modulus of strong convexity of $F^{t^*}$, denoted $\lambda_{F^{t^*}}$, is bounded above by $k_e \lambda_{F_1}$, where $\lambda_{F_1}$ is the modulus of strong convexity of $F_1$.

Before proving this theorem, we state and prove the following lemma.

Lemma 3. $F_1(\alpha)$ is a strongly convex function on $\mathbb{R}^{N_b}$. In addition, the modulus of strong convexity $\lambda_{F_1}$ of $F_1$ depends only on the blobs' initial positions $b_i$, $i = 1, \ldots, N_b$.

Proof. For clarity and to avoid tedious computations, we will present the proof only for $N_b = 2$, that is, for $\alpha = (\alpha_1, \alpha_2) \in \mathbb{R}^2$. We will discuss how to extend the result to the general case in a remark after the proof. Let
\[
\omega_0(x) = \alpha_1 \frac{e^{-|x - b_1|^2/\epsilon^2}}{\epsilon^2} + \alpha_2 \frac{e^{-|x - b_2|^2/\epsilon^2}}{\epsilon^2}
\]
for $b_1 \neq b_2$. We have
\[
v_0(x) = \int_\Omega K_2(x - y)\, \omega_0(y) \, dy
= \alpha_1 \int_\Omega K_2(x - y)\, \frac{e^{-|y - b_1|^2/\epsilon^2}}{\epsilon^2} \, dy
+ \alpha_2 \int_\Omega K_2(x - y)\, \frac{e^{-|y - b_2|^2/\epsilon^2}}{\epsilon^2} \, dy
:= \alpha_1 v_{01}(x) + \alpha_2 v_{02}(x), \tag{7}
\]
where in the last line we defined $v_{01}(x)$ and $v_{02}(x)$ for convenience. Using this in $F_1$ gives
\[
F_1(\alpha) = \int_\Omega |v_0(x)|^2 \, dx = \int_\Omega |\alpha_1 v_{01}(x) + \alpha_2 v_{02}(x)|^2 \, dx
= \alpha_1^2 \int_\Omega |v_{01}(x)|^2 \, dx + 2 \alpha_1 \alpha_2 \int_\Omega \langle v_{01}(x), v_{02}(x) \rangle \, dx + \alpha_2^2 \int_\Omega |v_{02}(x)|^2 \, dx,
\]
where $\langle \cdot, \cdot \rangle$ denotes the usual $\ell^2$ inner product in $\mathbb{R}^2$. Let us now take $g(\alpha) := F_1(\alpha) - \lambda |\alpha|^2$. We get
\[
\nabla g = \left[ 2 \alpha_1 \left( \int_\Omega |v_{01}|^2 \, dx - \lambda \right) + 2 \alpha_2 \int_\Omega \langle v_{01}, v_{02} \rangle \, dx, \;\;
2 \alpha_2 \left( \int_\Omega |v_{02}|^2 \, dx - \lambda \right) + 2 \alpha_1 \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right]
\]
and then
\[
D^2 g = \begin{pmatrix}
2 \left( \int_\Omega |v_{01}|^2 \, dx - \lambda \right) & 2 \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \\[4pt]
2 \int_\Omega \langle v_{01}, v_{02} \rangle \, dx & 2 \left( \int_\Omega |v_{02}|^2 \, dx - \lambda \right)
\end{pmatrix}.
\]
The first leading principal minor of the Hessian is $2 \left( \int_\Omega |v_{01}|^2 \, dx - \lambda \right)$, and it is positive for $\lambda < \int_\Omega |v_{01}|^2 \, dx$. The second leading principal minor is
\[
4 \left( \int_\Omega |v_{01}|^2 \, dx - \lambda \right) \left( \int_\Omega |v_{02}|^2 \, dx - \lambda \right) - 4 \left( \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right)^2
= 4 \left[ \int_\Omega |v_{01}|^2 \, dx \int_\Omega |v_{02}|^2 \, dx - \left( \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right)^2 \right]
- 4 \lambda \int_\Omega \left( |v_{01}|^2 + |v_{02}|^2 \right) dx + 4 \lambda^2. \tag{8}
\]

Using the Cauchy-Schwarz inequality, we have
\[
\left( \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right)^2 \le \left( \int_\Omega |v_{01}|\, |v_{02}| \, dx \right)^2 < \int_\Omega |v_{01}|^2 \, dx \int_\Omega |v_{02}|^2 \, dx,
\]
where we get a strict inequality since $v_{01}$ is not proportional to $v_{02}$ for $b_1 \neq b_2$. We thus set
\[
\int_\Omega |v_{01}|^2 \, dx \int_\Omega |v_{02}|^2 \, dx - \left( \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right)^2 = m,
\]
where $m > 0$ is a constant independent of $\lambda$. We can write (8) as
\[
4 \lambda^2 - 4 \lambda \int_\Omega \left( |v_{01}|^2 + |v_{02}|^2 \right) dx + 4 m, \tag{9}
\]
which is a quadratic expression in $\lambda$. Its discriminant is
\[
\Delta = 16 \left[ \left( \int_\Omega |v_{01}|^2 \, dx - \int_\Omega |v_{02}|^2 \, dx \right)^2 + 4 \left( \int_\Omega \langle v_{01}, v_{02} \rangle \, dx \right)^2 \right],
\]
and it is positive for $b_1 \neq b_2$. This implies that (9) has the two (positive) roots
\[
\lambda_1 = \frac{1}{8} \left[ 4 \int_\Omega \left( |v_{01}|^2 + |v_{02}|^2 \right) dx - \sqrt{\Delta} \right], \qquad
\lambda_2 = \frac{1}{8} \left[ 4 \int_\Omega \left( |v_{01}|^2 + |v_{02}|^2 \right) dx + \sqrt{\Delta} \right].
\]
Therefore, taking $\lambda < \lambda_1$ gives that (9) is positive, and we get that the second leading minor is positive. This means that the Hessian $D^2 g$ is positive definite for $\lambda < \min\{ \int_\Omega |v_{01}|^2 \, dx, \; \lambda_1 \}$. In addition, as the ordering of $v_{01}$ and $v_{02}$ is arbitrary, we have the same inequality for $v_{02}$ as for $v_{01}$. We thus get that $g$ is a convex function for
\[
\lambda < \min\left\{ \int_\Omega |v_{01}|^2 \, dx, \; \int_\Omega |v_{02}|^2 \, dx, \; \lambda_1 \right\}. \tag{10}
\]
We can conclude that $F_1(\alpha)$ is a strongly convex function with modulus depending only on $b_1$, $b_2$ and satisfying (10).

Remark 4. To extend this proof to higher dimensions, one can recognize the Hessian $D^2 F_1$ as the integral of a Gram matrix. Gram matrices are known to be positive definite when the corresponding set of vectors is linearly independent (which is the case here for distinct $b_i$, due to the specific basis functions used). The Hessian $D^2 F_1$ can thus be shown to be positive definite, which in turn implies that all of its leading principal minors are strictly positive. A minor of the Hessian $D^2 g$ can be written as the sum of a minor of $D^2 F_1$ plus an $O(\lambda)$ term, and is therefore positive for small $\lambda$. However, the explicit calculations of the minors become cumbersome in higher dimensions, and that is why we only presented the proof in $\mathbb{R}^2$.

We now prove Theorem 2.

Proof. First, we point out that $F_2^0$ is a convex function of $\alpha$. This can directly be seen by computing the Hessian matrix of $F_2^0$ in a similar way as in the proof of the previous lemma. Next, from the regularity theory of the 2D Navier-Stokes equations (4) [10, 19], we know that for $\omega_0$ given by (6), the Hessian $D^2 F_2^{t^*}(\alpha)$ is continuous with respect to $t^*$ at $t^* = 0$. Therefore, for $\gamma > 0$ (to be chosen later), there exists $\delta > 0$ such that if $t^* < \delta$, then for any $\xi \in \mathbb{R}^{N_b}$, $\xi \neq 0$,
\[
\left| \xi^T D^2 F_2^{t^*}(\alpha)\, \xi - \xi^T D^2 F_2^{0}(\alpha)\, \xi \right| < \gamma |\xi|^2.
\]
This gives
\[
\xi^T D^2 F_2^{t^*}(\alpha)\, \xi > \xi^T D^2 F_2^{0}(\alpha)\, \xi - \gamma |\xi|^2,
\]
which in turn yields
\[
\xi^T D^2 F^{t^*}(\alpha)\, \xi = k_e\, \xi^T D^2 F_1(\alpha)\, \xi + \xi^T D^2 F_2^{t^*}(\alpha)\, \xi
> k_e\, \xi^T D^2 F_1(\alpha)\, \xi + \xi^T D^2 F_2^{0}(\alpha)\, \xi - \gamma |\xi|^2
\ge k_e \lambda_{F_1} |\xi|^2 + \xi^T D^2 F_2^{0}(\alpha)\, \xi - \gamma |\xi|^2
\ge (k_e \lambda_{F_1} - \gamma) |\xi|^2,
\]
where in the last line we used the fact that $F_2^0$ is a convex function. We see that taking $\gamma < k_e \lambda_{F_1}$, where $\lambda_{F_1}$ is the modulus of strong convexity of $F_1$, is enough to guarantee the strong convexity of $F^{t^*}$. We also conclude that the modulus of strong convexity of $F^{t^*}$, that is, $\lambda_{F^{t^*}} = k_e \lambda_{F_1} - \gamma < k_e \lambda_{F_1}$, is bounded above by $k_e \lambda_{F_1}$, as desired.
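Remark 4's Gram-matrix observation is easy to check numerically. The sketch below (our construction, not the authors' code) assembles $M_{ij} = \int_\Omega \langle v_{0i}, v_{0j} \rangle \, dx$ for a few distinct blobs using the same FFT-based periodic Biot-Savart law as before, so that $F_1(\alpha) = \alpha^T M \alpha$ and $D^2 F_1 = 2M$; the eigenvalues of $M$ come out strictly positive, consistent with Lemma 3, and any $\lambda$ below the smallest eigenvalue of $M$ is an admissible modulus of strong convexity for $F_1$.

```python
import numpy as np

def velocity_of_single_blob(bx, by, eps, n=128, L=1.0):
    """Velocity induced by one unit-weight Gaussian blob, via the FFT-based
    periodic Biot-Savart law (same construction as the sketch after (6))."""
    x = np.linspace(0.0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    w = np.exp(-((X - bx)**2 + (Y - by)**2) / eps**2) / eps**2
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0
    psi_hat = -np.fft.fft2(w) / k2
    psi_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(-1j * KY * psi_hat))
    v = np.real(np.fft.ifft2(1j * KX * psi_hat))
    return u, v, (L / n)**2   # velocity components and the cell area

# Gram matrix M_ij = integral of <v_0i, v_0j>, so that F_1(alpha) = alpha^T M alpha.
# F_1(alpha) - lambda*|alpha|^2 is convex exactly when lambda does not exceed the
# smallest eigenvalue of M, which is positive for distinct blob centres.
centers = [(0.3, 0.3), (0.7, 0.4), (0.5, 0.7)]
fields = [velocity_of_single_blob(bx, by, eps=0.05) for bx, by in centers]
M = np.array([[np.sum(ui * uj + vi * vj) * dA for (uj, vj, dA) in fields]
              for (ui, vi, _) in fields])
print("eigenvalues of M:", np.linalg.eigvalsh(M))   # all strictly positive in this check
```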

Remark 5. At the end of the proof of Theorem 2, we saw that $\gamma < k_e \lambda_{F_1}$ was needed in order for $F^{t^*}$ to be strongly convex. If the value of $k_e$ selected is small, then a small value of $\delta$ (and thus of $t^*$) may be required to keep $\gamma$ smaller than $k_e \lambda_{F_1}$. Therefore, increasing the value of $k_e$ may allow for larger values of $t^*$ to be selected without losing the strong convexity of $F^{t^*}$ (more weight is being put on the strongly convex function $F_1$ in the sum $F^{t^*} = k_e F_1 + F_2^{t^*}$). The addition of $k_e F_1$ to $F_2^{t^*}$ can also be seen as adding a penalty term to the error functional $F_2^{t^*}$ in the context of penalty methods, ensuring solvability of the optimization problem at the expense of allowing less kinetic energy in the final solution.

Next, in order to obtain the numerical solution of this optimization problem within reasonable time, we will employ a steepest descent algorithm. We therefore need to linearize the functional $F^{t^*}$. We will compute the linearization with respect to a general $\omega_0$, and then we can use it for the problem in $\mathbb{R}^{N_b}$ by specializing it to (6). Consider a small variation $\omega_0 + \eta h$ of $\omega_0$. For $F_1$, we have
\[
F_1[\omega_0 + \eta h] = \int_\Omega |\nabla^\perp \Delta^{-1} (\omega_0 + \eta h)|^2 \, dx
= \int_\Omega |\nabla^\perp \Delta^{-1} \omega_0|^2 \, dx + 2 \eta \int_\Omega \langle \nabla^\perp \Delta^{-1} \omega_0,\, \nabla^\perp \Delta^{-1} h \rangle \, dx + O(\eta^2),
\]
where $\langle \cdot, \cdot \rangle$ denotes the $\ell^2$ inner product in $\mathbb{R}^2$. From this, we obtain the linearization of $F_1$ at $\omega_0$ in direction $h$:
\[
DF_1(\omega_0)(h) = 2 \int_\Omega \langle \nabla^\perp \Delta^{-1} \omega_0,\, \nabla^\perp \Delta^{-1} h \rangle \, dx. \tag{11}
\]
Let us denote the solution of (4) by $\omega(t) = S_t \omega_0$, where $S_t$ is the flow operator associated with the solution of (4). For $F_2^{t^*}$, we have
\[
F_2^{t^*}[\omega_0 + \eta h] = \sum_{i=1}^{N_q} \left| \nabla^\perp \Delta^{-1} S_{t^*}(\omega_0 + \eta h)(q_i) - v_D(q_i) \right|^2
\approx \sum_{i=1}^{N_q} \left| \nabla^\perp \Delta^{-1} \Big( \omega_0(q_i) + \eta h(q_i) + t^* \, \partial_t S_t(\omega_0 + \eta h)(q_i)\big|_{t=0} \Big) - v_D(q_i) \right|^2,
\]
where in the second line we used a first-order-in-time Taylor expansion to approximate $S_{t^*}$. Note that this approximation is reasonable since the $O({t^*}^2)$
term missing is small in our case due to the small $t^*$. Using (4) to replace the time derivative, we reach
\[
\sum_{i=1}^{N_q} \Big| \nabla^\perp \Delta^{-1} \Big( \omega_0 + \eta h + t^* \big( \nu \Delta (\omega_0 + \eta h) - \big( \nabla^\perp \Delta^{-1} (\omega_0 + \eta h) \cdot \nabla \big)(\omega_0 + \eta h) \big) \Big)(q_i) - v_D(q_i) \Big|^2
\]
\[
= \sum_{i=1}^{N_q} \Big| \nabla^\perp \Delta^{-1} \omega_0(q_i) + t^* \nu\, \nabla^\perp \Delta^{-1} \Delta \omega_0(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} \omega_0 \cdot \nabla ) \omega_0 \big)(q_i) - v_D(q_i)
+ \eta \Big( \nabla^\perp \Delta^{-1} h(q_i) + t^* \nu\, \nabla^\perp \Delta^{-1} \Delta h(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} h \cdot \nabla ) \omega_0 \big)(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} \omega_0 \cdot \nabla ) h \big)(q_i) \Big) + O(\eta^2) \Big|^2.
\]
Expanding the norm as an inner product and gathering the linear terms in $\eta$ yields the following formula:
\[
DF_2^{t^*}(\omega_0)(h) = \sum_{i=1}^{N_q} 2 \Big\langle \nabla^\perp \Delta^{-1} \omega_0(q_i) + t^* \nu\, \nabla^\perp \Delta^{-1} \Delta \omega_0(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} \omega_0 \cdot \nabla ) \omega_0 \big)(q_i) - v_D(q_i),\;
\nabla^\perp \Delta^{-1} h(q_i) + t^* \nu\, \nabla^\perp \Delta^{-1} \Delta h(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} h \cdot \nabla ) \omega_0 \big)(q_i) - t^* \nabla^\perp \Delta^{-1} \big( ( \nabla^\perp \Delta^{-1} \omega_0 \cdot \nabla ) h \big)(q_i) \Big\rangle. \tag{12}
\]
Thus, we obtain
\[
DF^{t^*}(\omega_0)(h) = k_e\, DF_1(\omega_0)(h) + DF_2^{t^*}(\omega_0)(h), \tag{13}
\]
where $DF_1(\omega_0)(h)$ and $DF_2^{t^*}(\omega_0)(h)$ are defined by (11) and (12), respectively.

4 Numerical Algorithm

As opposed to using a Lagrangian discretization of (4), which is typical of vortex methods, we instead impose periodic boundary conditions on $\Omega$ and
use Fourier transforms with a finite-difference scheme to solve the resulting first-order-in-time ODE in Fourier space. When the domain is the two-dimensional torus $\mathbb{T}^2$, the vorticity formulation of (1) becomes
\[
\frac{\partial \omega}{\partial t} + \big[ (\bar{v}_0 + \tilde{v}) \cdot \nabla \big] \omega = \nu \Delta \omega, \qquad \omega(x, 0) = \omega_0(x), \qquad x \in \mathbb{T}^2,\ t \in [0, \bar{t}\,], \tag{14}
\]
where $\omega = \mathrm{curl}\, \tilde{v}$, $\bar{v}_0 = \int_{\mathbb{T}^2} v_0 \, dx$ and $\tilde{v}$ is recovered from the periodic version of the Biot-Savart law
\[
\tilde{v}(x, t) = \sum_{k \neq 0} \frac{(-k_2, k_1)^t}{2 \pi i\, |k|^2} \, e^{2 \pi i\, x \cdot k} \, \hat{\omega}(k, t), \tag{15}
\]
where $\hat{\omega}$ is the Fourier transform of the vorticity $\omega$ [10].

Let us now give the specific details of the discretization employed. First, we embed $\Omega$ in a larger numerical domain $\Omega_n$ to avoid boundary effects, and we impose the periodic boundary conditions on $\Omega_n$ instead of $\Omega$. We lay a uniform grid of the centers $b_i$ for the Gaussian blobs in $\Omega$. We then consider (5) as an unconstrained minimization problem in $\mathbb{R}^{N_b}$ and use MATLAB's fminunc quasi-Newton algorithm to find the minimum of $F^{t^*}$ with respect to the weights $\alpha_i$ in (6). This algorithm uses the gradient information supplied by (13), as well as an approximation of the Hessian of $F^{t^*}$ computed with the BFGS method, to iterate in the direction of steepest descent [3]. At each step, given a set of weights $\{\alpha_i\}$, $\omega_0$ is reconstructed on a uniform numerical grid on $\Omega_n$ of size $N = N_1 \times N_2$ through (6). Then, $v_0$ is recovered with the periodic Biot-Savart law (15) using the FFT algorithm (all the computations of $\Delta^{-1}$ required in the derivative of $F^{t^*}$ are done this way). Once $v_0$ is obtained, the periodic Navier-Stokes equations in vorticity form (14) are solved in Fourier space with the FFT and the trapezoidal method for the resulting ODE, to get $\omega$ and $v$ on the time domain $[0, \bar{t}\,]$, which is itself discretized with $N_t$ points. In the last step, $F_2^{t^*}$ is computed at $t^* = \bar{t}/2$, all the integrals are approximated by a 2D Simpson's method, and $F^{t^*}$ as well as its derivative are computed with the same techniques. Finally, we use the stopping criterion provided by the fminunc package, which stops the iterations when either the size of the objective function $F^{t^*}$ or the size of a step becomes smaller than the specified tolerance $\mathrm{TOL}_{\mathrm{fminunc}}$. Also, as we are looking for an optimal solution with a minimum kinetic energy, it is natural to take $\omega_0 = 0$ as an initial guess in the quasi-Newton algorithm.
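To make the structure of this algorithm concrete, here is a condensed Python/NumPy sketch of the same pipeline, written by us as an illustration rather than a reproduction of the authors' MATLAB code. It makes several simplifying assumptions: SciPy's BFGS stands in for fminunc, gradients are approximated by finite differences instead of formula (13), a forward Euler step replaces the trapezoidal rule, the mean-flow term $\bar{v}_0$ of (14) is dropped, the misfit is evaluated at nearest grid points rather than interpolated at the $q_i$, the embedding domain $\Omega_n$ is omitted, and the sample data $v_D$ below are random placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Grid and spectral operators on the periodic box [0, 1)^2.
N = 64
L = 1.0
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_safe = K2.copy()
K2_safe[0, 0] = 1.0

def velocity(w):
    """Periodic Biot-Savart law (15): v = grad-perp of the inverse Laplacian of w."""
    psi_hat = -np.fft.fft2(w) / K2_safe
    psi_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(-1j * KY * psi_hat))
    v = np.real(np.fft.ifft2(1j * KX * psi_hat))
    return u, v

def blob_vorticity(alpha, centers, eps):
    """Initial vorticity (6) as a weighted sum of Gaussian blobs."""
    w = np.zeros_like(X)
    for a, (bx, by) in zip(alpha, centers):
        w += a * np.exp(-((X - bx)**2 + (Y - by)**2) / eps**2) / eps**2
    return w

def step(w, dt, nu):
    """One explicit time step of the vorticity equation (14), derivatives taken
    spectrally (the paper uses the trapezoidal rule; forward Euler keeps this short)."""
    u, v = velocity(w)
    w_hat = np.fft.fft2(w)
    wx = np.real(np.fft.ifft2(1j * KX * w_hat))
    wy = np.real(np.fft.ifft2(1j * KY * w_hat))
    lap = np.real(np.fft.ifft2(-K2 * w_hat))
    return w + dt * (nu * lap - (u * wx + v * wy))

def objective(alpha, centers, eps, q_idx, vD, k_e, nu, t_star, n_steps):
    """F^{t*}(alpha): penalized initial kinetic energy plus misfit with v_D at time t*,
    with the samples taken at the nearest grid points q_idx."""
    w = blob_vorticity(alpha, centers, eps)
    u0, v0 = velocity(w)
    energy = np.sum(u0**2 + v0**2) * (L / N)**2
    dt = t_star / n_steps
    for _ in range(n_steps):
        w = step(w, dt, nu)
    u, v = velocity(w)
    misfit = np.sum((u[q_idx] - vD[:, 0])**2 + (v[q_idx] - vD[:, 1])**2)
    return k_e * energy + misfit

# Placeholder data: 20 random sample locations (as grid indices) and random vectors v_D.
rng = np.random.default_rng(0)
q_idx = (rng.integers(0, N, 20), rng.integers(0, N, 20))
vD = rng.normal(scale=0.1, size=(20, 2))
centers = [(bx, by) for bx in np.linspace(0.125, 0.875, 4)
                    for by in np.linspace(0.125, 0.875, 4)]

# Quasi-Newton minimization over the blob weights alpha, starting from alpha = 0
# (scipy's BFGS stands in for MATLAB's fminunc; finite-difference gradients replace (13)).
res = minimize(objective, np.zeros(len(centers)),
               args=(centers, 0.1, q_idx, vD, 0.1, 1e-6, 0.005, 20),
               method="BFGS", options={"maxiter": 50})
print("optimal blob weights:", res.x)
```

In practice one would supply the analytic gradient (13) to the optimizer, as the authors do, since finite-difference gradients over $N_b$ weights multiply the cost of each iteration by roughly $N_b$.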

Figure 1: Fields for Test 1. (a) Target vector field. (b) Random selection of 100 vectors in (a).

5 Numerical Experiments

We now conduct several numerical experiments to analyze the behavior of the minimizer in (5). In all the following tests, we run the algorithm using a spatial grid of size $N = N_1 \times N_2$ and a temporal grid of size $N_t = 100$, using a fixed time-step of 0.1. The viscosity $\nu$ (in m$^2$/s) is set to roughly the kinematic viscosity of distilled water at 20 degrees Celsius. We will vary the parameters ($k_e$, $\epsilon$, and $N_b$) to study their effect on the resulting fields. The computations were all performed on a personal laptop computer with an Intel(R) Core(TM) i7-3537U 2.00 GHz processor, with a parallel implementation using all 4 cores to compute the components of the gradient of $F^{t^*}$ at every step. The computing time for a typical experiment presented in this section (when $N_b = 64$) is around 15 minutes. As most of the work goes into computing the gradient at every step, and each component of the gradient is independent of the others, using more than 4 cores would greatly improve the performance of the current implementation. Also, we point out that our code has not been professionally optimized to minimize the computing time. Finally, in what follows, the images displayed all correspond to the optimal vector fields $v_E$ at time $t^* = \bar{t}/2$, and the various norms are computed with the same fields.

Test 1

For the first numerical experiment, we consider the vector field, on the square $[0, 1]^2$, given by
\[
v(x_1, x_2) = \big( x_2 - 0.5,\; 0.5 - x_1 \big)\, \exp\!\big( -2 \big( (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \big) \big). \tag{16}
\]
This vector field cannot be recovered exactly with a single term in the expansion (6), since its curl is given by
\[
\nabla \times v = \big( 4 (x_1 - 0.5)^2 + 4 (x_2 - 0.5)^2 - 2 \big)\, \exp\!\big( -2 \big( (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \big) \big).
\]
In addition, even though it is divergence-free, this field is not a solution of (14) due to the small viscosity term. We randomly select 100 vectors in (16) in order to simulate a vector field on a non-uniform grid. The resulting field is displayed in Figure 1.

Figure 2: Iteration details for Test 1 with $k_e = 0.1$, $\epsilon = 0.05$ and $N_b = 64$ ($F^{t^*}$, $F_1$ and $F_2^{t^*}$ plotted against the number of iterations).

Figure 2 then displays the details of the iterations when the model parameters are set to $k_e = 0.1$, $\epsilon = 0.05$ and $N_b = 64$. We observe that for the first few iterations, both the kinetic energy $F_1$ and the error with the prescribed field $F_2^{t^*}$ are decreasing. After that, the algorithm slightly increases $F_1$ while decreasing $F_2^{t^*}$. This behavior is fairly typical for the algorithm, especially for small values of $k_e$.
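For readers who want to reproduce a comparable input, the snippet below (our own construction) evaluates the synthetic field (16) and draws 100 random sample points to mimic the scattered field $v_D$ of Figure 1 (b); the paper does not specify how its 100 locations were chosen, so uniform random positions in $[0, 1]^2$ are an assumption.

```python
import numpy as np

def test_field(x1, x2):
    """The synthetic field (16): a single smooth vortex centred at (0.5, 0.5)."""
    g = np.exp(-2.0 * ((x1 - 0.5)**2 + (x2 - 0.5)**2))
    return (x2 - 0.5) * g, (0.5 - x1) * g

# 100 random sample locations in [0, 1]^2 standing in for the scattered data v_D.
rng = np.random.default_rng(1)
q = rng.uniform(0.0, 1.0, size=(100, 2))
u, v = test_field(q[:, 0], q[:, 1])
vD = np.column_stack([u, v])
```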

Table 1: Results for Test 1 when $\epsilon = 0.05$ and $N_b = 64$ (rows: $F_1$, $F_2^{t^*}$, $\ell^2_{\mathrm{norm}}$ error, $e_{\mathrm{rel}}$ error, $\ell^\infty$ error, for the different values of $k_e$). The errors presented are computed for every grid point in $\Omega$.

Let us now analyze the effect of the parameter $k_e$. Figure 3 and Table 1 present the results of the algorithm when $\epsilon$ and $N_b$ are fixed to $\epsilon = 0.05$ and $N_b = 64$ but $k_e$ is varied. We see from Figure 3 that for small values of $k_e$, the velocity field obtained displays artificial features (at the bottom right of the target vortex) which disappear for $k_e = 10$. Indeed, the algorithm allows more kinetic energy for small $k_e$ in order to decrease $F_2^{t^*}$ as much as possible. We also observe that it is not desirable to select $k_e$ too large, since the optimal solution becomes very close to 0, which corresponds to a field with almost no kinetic energy. These claims are quantified with the norms given in Table 1. The first two rows of this table give the values of $F_1$ and $F_2^{t^*}$ obtained for different values of $k_e$. The last three rows give the error between the target field (16) and the field obtained by the algorithm at time $t^* = \bar{t}/2$ in the $\ell^2_{\mathrm{norm}}$ and $\ell^\infty$ norms. Note that $\ell^2_{\mathrm{norm}}$ is the usual normalized $\ell^2$ norm and $e_{\mathrm{rel}}$ is the relative error with respect to that norm. As $k_e$ varies from 1000 to 0, we see that the kinetic energy $F_1$ steadily increases while the error with the prescribed field $F_2^{t^*}$ steadily decreases. The $\ell^2_{\mathrm{norm}}$ error is best when $k_e = 10$, and significantly increases when $k_e$ is increased or reduced. A similar behavior is displayed for the $\ell^\infty$ error, which is best at $k_e = 1$. These results confirm the usefulness of $k_e$, not only to make the functional strongly convex, but also to prevent the formation of artificial features (i.e. small-scale vortices) for sparse grids. Note that some residual error is expected, since $v$ in (16) is not a solution of the Navier-Stokes equations (14). In addition, the solutions presented for $k_e = 0$ may not be unique, but initializing $\omega_0 = 0$ possibly helps to create a bias towards a field with a small kinetic energy.

Now that we have analyzed the effect of $k_e$, let us fix $k_e = 1$ and $N_b = 64$ and vary $\epsilon$. Figure 4 presents the different vector fields obtained for different values of $\epsilon$. The behavior observed is similar to the behavior of $k_e$: taking $\epsilon$ too small or too large decreases the quality of the approximation.

Figure 3: Results for Test 1 when $\epsilon = 0.05$ and $N_b = 64$. Panels: (a) $k_e = 1000$, (b) $k_e = 100$, (c) $k_e = 10$, (d) $k_e = 1$, (e) $k_e = 0.1$, (f) $k_e = 0$.

Figure 4: Results for Test 1 when $k_e = 1$ and $N_b = 64$. Panels: (a) $\epsilon = 1.0$, (b) $\epsilon = 0.5$, (c) $\epsilon = 0.2$, (d) $\epsilon = 0.1$, (e) $\epsilon = 0.05$, (f) $\epsilon = 0.02$.

Figure 5: Results for Test 1 when $k_e = 0.1$ and $\epsilon = 0.1$. Panels: (a) $N_b = 4$, (b) $N_b = 16$, (c) $N_b = 64$, (d) $N_b = 256$.

Table 2: Results for Test 1 when $k_e = 1$ and $N_b = 64$ (rows: $F_1$, $F_2^{t^*}$, $\ell^2_{\mathrm{norm}}$ error, $e_{\mathrm{rel}}$ error, $\ell^\infty$ error, for the different values of $\epsilon$). The errors presented are computed for every grid point in $\Omega$.

Indeed, by taking $\epsilon$ too large, the algorithm cannot fully recover the size of the target vortex, and by taking $\epsilon$ too small, the algorithm allows smaller variations in the field which are not physical in this case.

Table 3: Results for Test 1 when $k_e = 0.1$ and $\epsilon = 0.1$ (rows: $F_1$, $F_2^{t^*}$, $\ell^2_{\mathrm{norm}}$ error, $e_{\mathrm{rel}}$ error, $\ell^\infty$ error, for the different values of $N_b$). The errors presented are computed for every grid point in $\Omega$.

Table 4: Results for Test 1 with noise for $k_e = 1$, $\epsilon = 0.2$ and $N_b = 64$ ($\ell^2_{\mathrm{norm}}$, $e_{\mathrm{rel}}$ and $\ell^\infty$ errors for the fields of Figure 6 (b), (d) and (f)).

This observation from the images is confirmed by the numbers in Table 2: the best field obtained is the one with $\epsilon = 0.2$ for the $\ell^2_{\mathrm{norm}}$ error and $\epsilon = 0.1$ for the $\ell^\infty$ error. Both the $\ell^2$ and $\ell^\infty$ errors increase when $\epsilon$ is increased or decreased. In addition, $F_1$ and $F_2^{t^*}$ also become larger as $\epsilon$ increases or decreases away from its best values. This behavior is inherent to the vortex blob method; an optimal $\epsilon$ is expected to exist for any given grid resolution.

Let us now fix $k_e = 0.1$, $\epsilon = 0.1$ and vary $N_b$. The results for grids $N_b = 2 \times 2$, $N_b = 4 \times 4$, $N_b = 8 \times 8$ and $N_b = 16 \times 16$ are presented in Figure 5 and Table 3. We see on the images that the quality of the approximation increases as $N_b$ gets larger, which is to be expected. The size of $F_2^{t^*}$ roughly decreases by a factor of 10 for each increase in $N_b$. The results are best when $N_b = 256$, but this comes at the expense of a much longer computational time required to obtain the approximate field. Indeed, this time jumps from about 15 minutes for $N_b = 64$ to about 6.5 hours for $N_b = 256$ (still on the personal laptop with the specifications described earlier).

We also investigate the effect of noise added to the field. More specifically, we add one erroneous velocity vector to the field of Figure 1 (b) and investigate the resulting error in the recovered field. This situation is common when using particle tracking methods for PIV: a mismatch between particles may give a vector which is clearly going against the flow.

Figure 6: Results of Test 1 with noise for $k_e = 1$, $\epsilon = 0.2$ and $N_b = 64$. Panels: (a) same vector field as in Figure 1 (b); (b) result of the algorithm applied to (a); (c) one erroneous vector added to (a) at position $x_1 = 0.68$, $x_2 =$ ; (d) result of the algorithm applied to (c); (e) the same erroneous vector added to (a), but at position $x_1 = 0.77$, $x_2 =$ ; (f) result of the algorithm applied to (e).

First, we consider the case where the erroneous vector is placed close to a group of (correct) vectors. Figure 6 (c) gives the input field with the erroneous vector added, while (d) gives the result of the algorithm applied to (c). In addition, Table 4 shows the different errors between the result in Figure 6 (d) and the target in Figure 1 (a). We see that the $\ell^2$ error is almost unchanged (a 1% increase) and that the $\ell^\infty$ error decreases significantly. If we instead change the location of the erroneous vector so that it is further from the other vectors, as in Figure 6 (e), we obtain the field in Figure 6 (f) with the algorithm. Table 4 also shows the errors associated with this field. We see that the $\ell^2$ error now increases by about 7%, but the $\ell^\infty$ error still remains smaller than the one for the field without noise. The algorithm can therefore smooth out some of the added noise, unless the erroneous vectors lie in a region more sparsely filled with trustworthy vectors, in which case it is not as effective.

We now briefly discuss the importance of the time derivative of the recovered solution. As previously mentioned in the introduction, our method is designed to produce non-stationary velocity fields on the interval $[0, \bar{t}\,]$. Even though all the figures presented thus far in Section 5 display the recovered fields at time $t^* = \bar{t}/2$, the velocity fields we obtain visibly vary from $t = 0$ to $t = \bar{t}$. In fact, for the examples of Test 1, we typically observe that the $\ell^\infty$ norm of $\partial v / \partial t$ is about 0.25 m/s (assuming units in meters per second to match the selected viscosity) and that this is achieved close to the center of the vortex. For a velocity vector of length 0.15 at the position where the maximum time derivative is achieved, this corresponds to a change of about 2.5% in each of the components of that vector on the time interval $[0, 0.01]$ selected for the current experiment. This small variation might not seem like much, but it is enough to see the vortex move on $[0, 0.01]$ just by looking at a plot of $v$. In addition, if the goal of the experiment is to obtain a very precise approximation of the physical velocity field (as is the case in PIV), then a 2.5% error may not be deemed negligible.

As a final comment on the selection of 100 vectors for Test 1, we mention that if the number of vectors is increased from 100 to 500, the results improve significantly. For example, in the case where the parameters are $\epsilon = 0.1$, $k_e = 0.1$ and $N_b = 64$, the $\ell^2_{\mathrm{norm}}$ error, $e_{\mathrm{rel}}$ and the $\ell^\infty$ error all drop. The solution recovered is thus closer to the target when more information is available, which is of course to be expected.

Figure 7: Results of Test 2. Panels: (a) field obtained with the Optimal Transport method for PIV given in [16]; (b) result of Test 2 for $N_b = 256$, $k_e = 0.1$ and $\epsilon = 3$; (c) result of Test 2 for $N_b = 256$, $k_e = 0.1$ and $\epsilon = 0.8$; (d) result of Test 2 for $N_b = 256$, $k_e = 0.1$ and $\epsilon = 0.3$.

Test 2

In this second test, we apply the procedure to a real example. We select a discrete field obtained in [16] by using the Optimal Transport algorithm for Particle Image Velocimetry on the first two images in a dataset given by [23]. This dataset consists of PIV images showing a slightly turbulent air flow seeded with small water droplets. The results are presented in Figure 7. We display the fields associated with three different values of $\epsilon$ to show an interesting feature of the algorithm. For $\epsilon = 3$, we see that the general direction of the flow is recovered, but none of the finer structures are present.

For $\epsilon = 0.8$, the flow recovered is fairly close to the one given in (b) and (c), and the finer variations are displayed. However, the flow given by $\epsilon = 0.3$ displays features which are too small and do not seem to be accurate with respect to the target flow. The field obtained for $\epsilon = 0.8$ also effectively smooths the prescribed field and corrects some apparently erroneous vectors. Keep in mind that we do not have access to the exact solution, as this dataset is taken from a real experiment, but we can nonetheless observe that varying $\epsilon$ may help to visualize different regimes of the target velocity field and provide different levels of smoothing. Also, even though a large value of $N_b$ was selected for Test 2, good results can also be obtained with a smaller $N_b$.

6 Conclusions

We introduced a variational method to find the vector field, defined on the whole spatial domain and time interval, that is the closest to a discretely sampled vector field given at one point in time. The closest field is selected to be a solution of the Navier-Stokes equations with minimal kinetic energy. To obtain it, we assumed that the initial vorticity was given by a sum of Gaussian blobs of fixed width determined by a parameter $\epsilon$. The minimal kinetic energy constraint was enforced as a penalty term on the error with the prescribed field, with corresponding penalty parameter $k_e$. The interplay between the parameters $k_e$ and $\epsilon$ contributed to filtering the flow from erroneous approximated vectors. We found that larger values of $\epsilon$ effectively smoothed the prescribed field by limiting the presence of small-scale features. On the other hand, larger values of $k_e$ made the functional strongly convex with respect to the weights in the expansion of the initial vorticity and helped reduce the presence of artificial features for sparse grids when smaller values of $\epsilon$ are employed. The method presented only uses information from the discrete vector field and thus could be applied to a variety of situations. It would be interesting in the future to extend it in order to interpolate and filter three-dimensional vector fields, as they are becoming more common in some applications, for example in 3D-PIV [4].

Acknowledgement

The authors are supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a fellowship from the University of Victoria.

References

[1] J. Blum, F.-X. Le Dimet, and I. M. Navon. Data assimilation for geophysical fluids. Handbook of Numerical Analysis, 14.

[2] S. L. Cotter, M. Dashti, J. C. Robinson, and A. M. Stuart. Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Problems, 25(11):115008.

[3] J. E. Dennis Jr. and J. J. Moré. Quasi-Newton methods, motivation and theory. SIAM Review, 19(1):46-89.

[4] G. E. Elsinga, F. Scarano, B. Wieneke, and B. W. van Oudheusden. Tomographic particle image velocimetry. Experiments in Fluids, 41.

[5] M. Freitag and R. Potthast. Synergy of inverse problems and data assimilation techniques. Large Scale Inverse Problems: Computational Methods and Applications in the Earth Sciences, 13.

[6] J. Gregson. Applications of inverse problems in fluids and imaging.

[7] D. Heitz, E. Mémin, and C. Schnörr. Variational fluid flow measurements from image sequences: synopsis and perspectives. Experiments in Fluids, 48(3).

[8] E. Kalnay. Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press.

[9] A. Leonard. Vortex methods for flow simulation. Journal of Computational Physics, 37(3).

[10] A. J. Majda and A. L. Bertozzi. Vorticity and Incompressible Flow, volume 27. Cambridge University Press.

[11] M. Nodet. Variational assimilation of Lagrangian data in oceanography. Inverse Problems, 22(1):245.

[12] N. Papadakis and É. Mémin. Variational assimilation of fluid motion from image sequence. SIAM Journal on Imaging Sciences, 1(4).

[13] M. Raffel, C. E. Willert, S. T. Wereley, and J. Kompenhans. Particle Image Velocimetry: A Practical Guide. Springer Berlin Heidelberg.

[14] P. Ruhnau, A. Stahl, and C. Schnörr. Variational estimation of experimental fluid flows with physics-based spatio-temporal regularization. Measurement Science and Technology, 18(3):755.

[15] L.-P. Saumier, B. Khouider, and M. Agueh. Optimal transport for particle image velocimetry. Communications in Mathematical Sciences, 13(1).

[16] L.-P. Saumier, B. Khouider, and M. Agueh. Optimal transport for particle image velocimetry: Real data and postprocessing algorithms. SIAM Journal on Applied Mathematics, 75(6).

[17] W. E. Schaap and R. van de Weygaert. Continuous fields and discrete samples: reconstruction through Delaunay tessellations. Astronomy & Astrophysics, 363(3):L29-L32.

[18] G. R. Spedding and E. J. M. Rignot. Performance analysis and application of grid interpolation techniques for fluid flows. Experiments in Fluids, 15(6).

[19] R. Temam. Navier-Stokes Equations and Nonlinear Functional Analysis, volume 66. SIAM.

[20] A. Vlasenko and C. Schnörr. Physically consistent and efficient variational denoising of image fluid flow estimates. IEEE Transactions on Image Processing, 19(3).

[21] J. Westerweel. Digital Particle Image Velocimetry: Theory and Application. Ph.D. dissertation, Delft University.

[22] J. Westerweel, D. Dabiri, and M. Gharib. The effect of a discrete window offset on the accuracy of cross-correlation analysis of digital PIV recordings. Experiments in Fluids, 23(1):20-28.

[23] B. Wieneke, LaVision. Sequence of experimental time-resolved PIV images showing a slightly turbulent air flow with small water droplets with a diameter of about 5 micrometers. htm.


More information

Week 2 Notes, Math 865, Tanveer

Week 2 Notes, Math 865, Tanveer Week 2 Notes, Math 865, Tanveer 1. Incompressible constant density equations in different forms Recall we derived the Navier-Stokes equation for incompressible constant density, i.e. homogeneous flows:

More information

Numerical Methods I Solving Nonlinear Equations

Numerical Methods I Solving Nonlinear Equations Numerical Methods I Solving Nonlinear Equations Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 16th, 2014 A. Donev (Courant Institute)

More information

1 Introduction. J.-L. GUERMOND and L. QUARTAPELLE 1 On incremental projection methods

1 Introduction. J.-L. GUERMOND and L. QUARTAPELLE 1 On incremental projection methods J.-L. GUERMOND and L. QUARTAPELLE 1 On incremental projection methods 1 Introduction Achieving high order time-accuracy in the approximation of the incompressible Navier Stokes equations by means of fractional-step

More information

Daniel J. Jacob, Models of Atmospheric Transport and Chemistry, 2007.

Daniel J. Jacob, Models of Atmospheric Transport and Chemistry, 2007. 1 0. CHEMICAL TRACER MODELS: AN INTRODUCTION Concentrations of chemicals in the atmosphere are affected by four general types of processes: transport, chemistry, emissions, and deposition. 3-D numerical

More information

Higher-Order Methods

Higher-Order Methods Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth

More information

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1. A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion

More information

Notes on Regularization and Robust Estimation Psych 267/CS 348D/EE 365 Prof. David J. Heeger September 15, 1998

Notes on Regularization and Robust Estimation Psych 267/CS 348D/EE 365 Prof. David J. Heeger September 15, 1998 Notes on Regularization and Robust Estimation Psych 67/CS 348D/EE 365 Prof. David J. Heeger September 5, 998 Regularization. Regularization is a class of techniques that have been widely used to solve

More information

THE POINCARÉ RECURRENCE PROBLEM OF INVISCID INCOMPRESSIBLE FLUIDS

THE POINCARÉ RECURRENCE PROBLEM OF INVISCID INCOMPRESSIBLE FLUIDS ASIAN J. MATH. c 2009 International Press Vol. 13, No. 1, pp. 007 014, March 2009 002 THE POINCARÉ RECURRENCE PROBLEM OF INVISCID INCOMPRESSIBLE FLUIDS Y. CHARLES LI Abstract. Nadirashvili presented a

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 34: Improving the Condition Number of the Interpolation Matrix Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu

More information

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions

More information

A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations

A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations S. Hussain, F. Schieweck, S. Turek Abstract In this note, we extend our recent work for

More information

Filtering the Navier-Stokes Equation

Filtering the Navier-Stokes Equation Filtering the Navier-Stokes Equation Andrew M Stuart1 1 Mathematics Institute and Centre for Scientific Computing University of Warwick Geometric Methods Brown, November 4th 11 Collaboration with C. Brett,

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Adaptive C1 Macroelements for Fourth Order and Divergence-Free Problems

Adaptive C1 Macroelements for Fourth Order and Divergence-Free Problems Adaptive C1 Macroelements for Fourth Order and Divergence-Free Problems Roy Stogner Computational Fluid Dynamics Lab Institute for Computational Engineering and Sciences University of Texas at Austin March

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

Optimal control problems with PDE constraints

Optimal control problems with PDE constraints Optimal control problems with PDE constraints Maya Neytcheva CIM, October 2017 General framework Unconstrained optimization problems min f (q) q x R n (real vector) and f : R n R is a smooth function.

More information

The Euler Equation of Gas-Dynamics

The Euler Equation of Gas-Dynamics The Euler Equation of Gas-Dynamics A. Mignone October 24, 217 In this lecture we study some properties of the Euler equations of gasdynamics, + (u) = ( ) u + u u + p = a p + u p + γp u = where, p and u

More information

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006 Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian

More information

Remarks on the blow-up criterion of the 3D Euler equations

Remarks on the blow-up criterion of the 3D Euler equations Remarks on the blow-up criterion of the 3D Euler equations Dongho Chae Department of Mathematics Sungkyunkwan University Suwon 44-746, Korea e-mail : chae@skku.edu Abstract In this note we prove that the

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A

More information

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that

1. Method 1: bisection. The bisection methods starts from two points a 0 and b 0 such that Chapter 4 Nonlinear equations 4.1 Root finding Consider the problem of solving any nonlinear relation g(x) = h(x) in the real variable x. We rephrase this problem as one of finding the zero (root) of a

More information

The Shallow Water Equations

The Shallow Water Equations If you have not already done so, you are strongly encouraged to read the companion file on the non-divergent barotropic vorticity equation, before proceeding to this shallow water case. We do not repeat

More information

FREE BOUNDARY PROBLEMS IN FLUID MECHANICS

FREE BOUNDARY PROBLEMS IN FLUID MECHANICS FREE BOUNDARY PROBLEMS IN FLUID MECHANICS ANA MARIA SOANE AND ROUBEN ROSTAMIAN We consider a class of free boundary problems governed by the incompressible Navier-Stokes equations. Our objective is to

More information

Kinematic and dynamic pair collision statistics of sedimenting inertial particles relevant to warm rain initiation

Kinematic and dynamic pair collision statistics of sedimenting inertial particles relevant to warm rain initiation Kinematic and dynamic pair collision statistics of sedimenting inertial particles relevant to warm rain initiation Bogdan Rosa 1, Hossein Parishani 2, Orlando Ayala 2, Lian-Ping Wang 2 & Wojciech W. Grabowski

More information

A RECURRENCE THEOREM ON THE SOLUTIONS TO THE 2D EULER EQUATION

A RECURRENCE THEOREM ON THE SOLUTIONS TO THE 2D EULER EQUATION ASIAN J. MATH. c 2009 International Press Vol. 13, No. 1, pp. 001 006, March 2009 001 A RECURRENCE THEOREM ON THE SOLUTIONS TO THE 2D EULER EQUATION Y. CHARLES LI Abstract. In this article, I will prove

More information

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7

2.29 Numerical Fluid Mechanics Fall 2011 Lecture 7 Numerical Fluid Mechanics Fall 2011 Lecture 7 REVIEW of Lecture 6 Material covered in class: Differential forms of conservation laws Material Derivative (substantial/total derivative) Conservation of Mass

More information

NUMERICAL SIMULATION OF THE FLOW AROUND A SQUARE CYLINDER USING THE VORTEX METHOD

NUMERICAL SIMULATION OF THE FLOW AROUND A SQUARE CYLINDER USING THE VORTEX METHOD NUMERICAL SIMULATION OF THE FLOW AROUND A SQUARE CYLINDER USING THE VORTEX METHOD V. G. Guedes a, G. C. R. Bodstein b, and M. H. Hirata c a Centro de Pesquisas de Energia Elétrica Departamento de Tecnologias

More information

Chapter 8 Gradient Methods

Chapter 8 Gradient Methods Chapter 8 Gradient Methods An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Introduction Recall that a level set of a function is the set of points satisfying for some constant. Thus, a point

More information

Turbulent drag reduction by streamwise traveling waves

Turbulent drag reduction by streamwise traveling waves 51st IEEE Conference on Decision and Control December 10-13, 2012. Maui, Hawaii, USA Turbulent drag reduction by streamwise traveling waves Armin Zare, Binh K. Lieu, and Mihailo R. Jovanović Abstract For

More information

Spline Element Method for Partial Differential Equations

Spline Element Method for Partial Differential Equations for Partial Differential Equations Department of Mathematical Sciences Northern Illinois University 2009 Multivariate Splines Summer School, Summer 2009 Outline 1 Why multivariate splines for PDEs? Motivation

More information

Computational Fluid Dynamics 2

Computational Fluid Dynamics 2 Seite 1 Introduction Computational Fluid Dynamics 11.07.2016 Computational Fluid Dynamics 2 Turbulence effects and Particle transport Martin Pietsch Computational Biomechanics Summer Term 2016 Seite 2

More information

A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws

A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws A high order adaptive finite element method for solving nonlinear hyperbolic conservation laws Zhengfu Xu, Jinchao Xu and Chi-Wang Shu 0th April 010 Abstract In this note, we apply the h-adaptive streamline

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A

More information

Support Vector Machine (SVM) and Kernel Methods

Support Vector Machine (SVM) and Kernel Methods Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2016 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector

More information

Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012

Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012 Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Linear classifier Which classifier? x 2 x 1 2 Linear classifier Margin concept x 2

More information

2 Nonlinear least squares algorithms

2 Nonlinear least squares algorithms 1 Introduction Notes for 2017-05-01 We briefly discussed nonlinear least squares problems in a previous lecture, when we described the historical path leading to trust region methods starting from the

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information