Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms


Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms

Stephen G. Nash*
April 28, 2010

*Systems Engineering and Operations Research Dept., George Mason University, Fairfax, VA 22030, USA. snash@gmu.edu. The material in the paper was supported by the Department of Energy under Award DE-SC

Abstract

I present a multilevel optimization approach (termed MG/Opt) for the solution of constrained optimization problems. The approach assumes that one has a hierarchy of models, ordered from fine to coarse, of an underlying optimization problem, and that one is interested in finding solutions at the finest level of detail. In this hierarchy of models, calculations on coarser levels are less expensive, but also of less fidelity, than calculations on finer levels. The intent of MG/Opt is to use calculations on coarser levels to accelerate the progress of the optimization on the finest level. Global convergence (i.e., convergence to a Karush-Kuhn-Tucker point from an arbitrary starting point) is ensured by requiring a single step of a convergent method on the finest level, plus a line search for incorporating the coarse-level corrections. The convergence results apply to a broad class of algorithms with minimal assumptions about the properties of the coarse models. I also analyze the descent properties of the algorithm, i.e., whether the coarse-level correction is guaranteed to result in improvement of the fine-level solution. Although additional assumptions are required to guarantee improvement, the assumptions required are likely to be satisfied by a broad range of optimization problems.

1 Introduction

I present MG/Opt, a multilevel optimization approach originally developed for unconstrained optimization and here extended to constrained optimization problems. It assumes that one has a hierarchy of models, ordered from fine to coarse, of an underlying optimization problem, and that one is interested in finding solutions at the finest level of detail. MG/Opt and related multilevel algorithms

have been successfully used to solve a variety of unconstrained problems

    min_{x_h} f_h(x_h)     (1)

where the subscript h refers to the level in the hierarchy of models (see, e.g., [4, 11, 13]). When applied to appropriate problems, MG/Opt is capable of achieving the excellent computational performance of multigrid algorithms applied to elliptic PDEs. MG/Opt has also been applied to optimization models with constraints in the case where the constraints are used to solve for some variables in terms of the others, resulting in a reduced problem that is effectively unconstrained [8, 14]. Here I extend MG/Opt to constrained optimization problems, allowing both equality and inequality constraints:

    min_{x_h} f_h(x_h)  subject to  a_h(x_h) = 0,  c_h(x_h) ≤ 0     (2)

I also present convergence theorems for the resulting algorithms, along with theorems showing that the search directions produced by MG/Opt are descent directions when appropriate assumptions are satisfied. The MG/Opt algorithm for constrained problems is related to the algorithm in [7]. The convergence theorem in the unconstrained case is more general than earlier results for multilevel algorithms for unconstrained optimization [11, 14]; see also [4, 15] for convergence theorems for related multilevel algorithms. The theorems for the constrained case are new.

MG/Opt is based on the principles underlying the full approximation multilevel scheme for solving nonlinear PDEs [9]. The results here provide a framework for applying multilevel approaches to a broad range of optimization models. The MG/Opt framework is general, in the sense that it does not specify the underlying optimization algorithm, providing great flexibility in how it is implemented. In particular it would be possible to choose an underlying optimization algorithm and implementation adapted to a particular optimization problem or computer architecture.

The results are developed in three stages. Unconstrained problems are considered first.
These results are of independent interest, and they also illustrate the algorithm and theorems in their simplest form. Then I consider problems with equality constraints, followed by inequality constraints. The latter results are derived using the corresponding theorems for equality-constrained problems. Finally I summarize the overall algorithm for a problem with a mix of equality and inequality constraints, in a form better suited for software implementation.

2 Unconstrained Problems

In the unconstrained case the optimization problem is (1). To define and analyze the algorithm, it is only necessary to refer to two levels of models, with h

referring to the current finer level and H referring to the coarser level. The algorithm requires that the user provide a downdate operator I_h^H and an update operator I_H^h to transform vectors from one level to the other. To specify the MG/Opt algorithm I make the following assumption:

Assumption A1: f_h(x_h) is defined for all values of x_h.

Although additional assumptions will be needed to prove convergence of the algorithm, and to prove that the algorithm produces descent directions, this is the only assumption needed to define and run the algorithm.

Here is the MG/Opt algorithm for an unconstrained problem. Given an initial estimate of the solution x_h^0, and integers k_1, k_2 ≥ 0 satisfying k_1 + k_2 > 0, for j = 0, 1, ... until converged:

Pre-smoothing: Apply k_1 iterations of a convergent optimization algorithm to (1) to obtain x̄_h (with x_h^j used as the initial guess).

Recursion: Compute x̄_H = I_h^H x̄_h and v_H = ∇f_H(x̄_H) - I_h^H ∇f_h(x̄_h). Minimize (perhaps approximately) the surrogate model

    f_s(x_H) ≡ f_H(x_H) - v_H^T x_H

to obtain x_H^+ (with x̄_H used as the initial guess). The minimization could be performed recursively by calling MG/Opt. Compute the search directions e_H = x_H^+ - x̄_H and e_h = I_H^h e_H. Use a line search to determine x_h^+ = x̄_h + α e_h satisfying f_h(x_h^+) ≤ f_h(x̄_h).

Post-smoothing: Apply k_2 iterations of the same convergent optimization algorithm to (1) to obtain x_h^{j+1} (with x_h^+ used as the initial guess).

This description of MG/Opt is useful for understanding and analyzing the algorithm. The version of the algorithm in Section 5 is better suited for implementation purposes. The vector v_H is chosen so that ∇f_s(x̄_H) = I_h^H ∇f_h(x̄_h); that is, the surrogate model matches the downdated fine model to first order. (It would be trivial to add a constant to the function f_s so that f_s(x̄_H) = f_h(x̄_h), ensuring that the surrogate model matched both function and gradient values. This would have little effect on the optimization algorithms.)
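The two-level cycle above can be sketched in code. This is a minimal sketch under assumed interfaces (the names `smooth`, `coarse`, and the dictionary layout are mine, not the paper's):

```python
import numpy as np

def mgopt(f, grad, x, I_hH, I_Hh, smooth, coarse, k1=1, k2=1):
    """One iteration (j -> j+1) of two-level MG/Opt for min f_h(x_h).

    f, grad      -- fine-level objective and gradient
    I_hH, I_Hh   -- downdate (fine->coarse) and update (coarse->fine) matrices
    smooth(x, k) -- k iterations of a convergent optimizer on the fine level
    coarse       -- dict with "f", "grad", and "minimize"(f_s, grad_s, x0)
    """
    # Pre-smoothing: k1 iterations of the convergent method.
    xb = smooth(x, k1)

    # Recursion: form the surrogate f_s(x_H) = f_H(x_H) - v_H^T x_H,
    # with v_H = grad f_H(xb_H) - I_hH grad f_h(xb), and minimize it.
    xb_H = I_hH @ xb
    v_H = coarse["grad"](xb_H) - I_hH @ grad(xb)
    x_H_plus = coarse["minimize"](lambda xH: coarse["f"](xH) - v_H @ xH,
                                  lambda xH: coarse["grad"](xH) - v_H,
                                  xb_H)

    # Prolong the coarse correction to get the search direction e_h.
    e = I_Hh @ (x_H_plus - xb_H)

    # Line search: only require that the objective does not increase.
    alpha, x_plus = 1.0, xb
    while alpha > 1e-12:
        if f(xb + alpha * e) <= f(xb):
            x_plus = xb + alpha * e
            break
        alpha *= 0.5

    # Post-smoothing: k2 iterations of the same method.
    return smooth(x_plus, k2)
```

The backtracking loop enforces the non-increase requirement of the Recursion step; if no step length helps, the coarse correction is simply discarded and convergence still rests on the smoothing iterations.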
To prove convergence for MG/Opt, I make the following additional assumptions:

Assumption A2: The level set S = {x_h : f_h(x_h) ≤ f_h(x_h^0)} is compact, where x_h^0 is the initial guess of the solution of (1).

Assumption A3: ∇²f_h(x_h) is continuous for all choices of x_h in S.

These are standard assumptions for proving convergence of algorithms for unconstrained optimization, both for line search and trust region algorithms (see, e.g., [6]). It would be possible to prove analogous convergence results with a weaker version of assumption A3, namely that the gradient ∇f_h(x_h) is Lipschitz continuous for all choices of x_h in S.

The MG/Opt algorithm is flexible about the choice of the convergent optimization algorithm used in the pre-smoothing and post-smoothing steps. I will examine both line search and trust region algorithms as possibilities.

Let Opt_LS be a line search algorithm, i.e., it computes a new estimate of the solution of the form x_h^{j+1} = x_h^j + α_j p_j, where p_j is a search direction and α_j is a step length. I will assume that α_j is chosen to satisfy one of the Wolfe, strong Wolfe, or Goldstein conditions (see [12] for a definition of these conditions). In addition I will assume that algorithm Opt_LS chooses the search direction p_j so that it satisfies the condition

    p_j^T ∇f_h(x_h^j) / (‖p_j‖ ‖∇f_h(x_h^j)‖) ≤ -ε  for some fixed ε > 0.

This condition is satisfied by many algorithms. Here is a convergence theorem for Opt_LS.

Theorem 1 Assume that A1-A3 are satisfied. Suppose that optimization algorithm Opt_LS is used to solve (1). Then

    lim_{j→∞} ∇f_h(x_h^j) = 0.

Proof. See [12].

It would also be possible to use a variety of trust-region methods. Suppose that Opt_TR is the trust-region method for unconstrained optimization defined in [6]. Then we have the following theorem.

Theorem 2 Assume that A1-A3 are satisfied. Suppose that optimization algorithm Opt_TR is used to solve (1). Then

    lim_{j→∞} ∇f_h(x_h^j) = 0.

Proof. See [6].

I immediately obtain the following convergence result for MG/Opt. Note that the line search used in the recursion step of the algorithm only requires that the function value not increase.
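The direction condition on Opt_LS is a bound on the cosine of the angle between p_j and the gradient. A minimal sketch (the helper name is mine, not the paper's):

```python
import numpy as np

def satisfies_angle_condition(p, g, eps=1e-3):
    """Check the uniform-descent (angle) condition assumed of Opt_LS:
        p^T g / (||p|| ||g||) <= -eps  for some fixed eps > 0.
    p is the search direction, g the gradient at the current iterate."""
    cosine = p @ g / (np.linalg.norm(p) * np.linalg.norm(g))
    return cosine <= -eps
```

For example, the steepest-descent direction p = -g gives cosine -1 and always passes; a direction orthogonal to the gradient fails.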

Theorem 3 Assume that A1, A2, and A3 are satisfied, and that either Opt_LS or Opt_TR is the convergent optimization algorithm used in the pre- and post-smoothing steps of MG/Opt. Then MG/Opt is guaranteed to converge in the sense that

    lim_{j→∞} ∇f_h(x_h^j) = 0.

Proof. Since k_1 + k_2 > 0, each iteration of MG/Opt includes at least one iteration of the convergent optimization algorithm applied to (1). The recursion step at worst results in no improvement to the value of the objective function. To prove convergence of MG/Opt, it is straightforward to repeat the proof of convergence for either Opt_LS [12] or Opt_TR [6], taking into account that at some iterations the new estimate of the solution has a lower function value than that obtained by the underlying optimization algorithm.

The convergence theorem applies to a more general algorithm of the following form:

Pre-smoothing: Apply k_1 iterations of Opt_LS or Opt_TR to (1) to obtain x̄_h (with x_h^j used as the initial guess).

Recursion: Find a point x_h^+ satisfying f_h(x_h^+) ≤ f_h(x̄_h).

Post-smoothing: Apply k_2 iterations of the same optimization algorithm to (1) to obtain x_h^{j+1} (with x_h^+ used as the initial guess).

Thus convergence is guaranteed by the structure of the MG/Opt algorithm, and does not depend on the surrogate model used in the recursion step. The performance of the algorithm, however, is strongly dependent on the choices of the surrogate model and the update and downdate operators I_h^H and I_H^h.

The MG/Opt algorithm above requires that the objective function not increase in the Recursion step. This requirement could be relaxed in the context of an optimization algorithm based on a non-monotone line search; see, for example, [5].

My next goal is to determine under what conditions the search direction e_h from the recursion step of MG/Opt is guaranteed to be a descent direction for f_h at x̄_h:

    f_h(x̄_h + ε e_h) < f_h(x̄_h) for all sufficiently small ε > 0;

or, alternatively: ∇f_h(x̄_h)^T e_h < 0.
For this purpose I make the following additional assumption:

Assumption A4: (I_H^h)^T = C_I I_h^H for some constant C_I > 0.
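For intuition, a standard one-dimensional multigrid transfer pair satisfies Assumption A4: the linear-interpolation update and full-weighting downdate obey (I_H^h)^T = 2 I_h^H, i.e., C_I = 2. A sketch (these particular operators are my example; the paper does not prescribe them):

```python
import numpy as np

def prolongation(n_c):
    """Linear interpolation from n_c coarse points to 2*n_c + 1 fine
    interior points on a uniform 1-D grid (update operator I_H^h)."""
    n_f = 2 * n_c + 1
    P = np.zeros((n_f, n_c))
    for j in range(n_c):
        P[2 * j, j] += 0.5       # left fine neighbor
        P[2 * j + 1, j] = 1.0    # coincident fine point
        P[2 * j + 2, j] += 0.5   # right fine neighbor
    return P

P = prolongation(3)   # update operator I_H^h
R = 0.5 * P.T         # full-weighting downdate operator I_h^H
```

Each row of R is the stencil (1/4, 1/2, 1/4), and P^T = 2R, so Assumption A4 holds with C_I = 2.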

If the surrogate model is minimized exactly then

    ∇f_H(x_H^+) = v_H = ∇f_H(x̄_H) - I_h^H ∇f_h(x̄_h).

If the surrogate model is only minimized approximately then

    ∇f_H(x_H^+) = ∇f_H(x̄_H) - I_h^H ∇f_h(x̄_h) + z

for some z. We can write this final equation as

    ∇f_s(x_H^+) = z.

I obtain the following theorem.

Theorem 4 Assume that A1-A4 are satisfied, and that ∇f_h(x̄_h) ≠ 0. Then the search direction e_h from the recursion step of MG/Opt will be a descent direction for f_h at x̄_h if (a) ‖∇f_s(x_H^+)‖ is sufficiently small, and (b) if

    e_H^T ∇²f_H(x̄_H + η e_H) e_H > 0  for 0 ≤ η ≤ 1.

Proof. We test for a descent direction as follows:

    ∇f_h(x̄_h)^T e_h = ∇f_h(x̄_h)^T [I_H^h (x_H^+ - x̄_H)]
                    = C_I [I_h^H ∇f_h(x̄_h)]^T (x_H^+ - x̄_H)
                    = C_I [∇f_H(x̄_H) - ∇f_H(x_H^+) + z]^T (x_H^+ - x̄_H)
                    = C_I [∇f_H(x̄_H) - ∇f_H(x_H^+)]^T e_H + C_I z^T e_H.

To analyze the first term in the last formula I use the mean-value theorem. If I define the real-valued function F(y) by

    F(y) ≡ [∇f_H(x̄_H) - ∇f_H(y)]^T e_H

then

    F(x̄_H + e_H) = F(x̄_H) + ∇F(ξ)^T e_H = -e_H^T ∇²f_H(ξ) e_H

where ξ = x̄_H + η e_H for some 0 ≤ η ≤ 1. Thus

    ∇f_h(x̄_h)^T e_h = C_I [∇f_H(x̄_H) - ∇f_H(x_H^+)]^T e_H + C_I z^T e_H
                    = -C_I e_H^T ∇²f_H(ξ) e_H + C_I z^T e_H.

The theorem follows from this last formula.

Both of the additional assumptions in the theorem are necessary. If we do not minimize the surrogate model accurately enough then the point x_H^+ could be almost arbitrary, so there would be no guarantee that e_h would be a descent direction. The assumption that e_H^T ∇²f_H(x̄_H + η e_H) e_H > 0 is also needed; in particular, it implies that e_H ≠ 0. One also needs that ∇²f_H is positive definite along the line segment

connecting x̄_H and x_H^+. For example, consider the one-dimensional example with v_H = 0:

    f_s(x_H) = f_H(x_H) = x_H³ - x_H

with x̄_H = -1 and x_H^+ = 1/√3, a local minimizer of f_s. Then e_H = 1 + 1/√3 > 0 and

    f_s'(x̄_H) = 3 x̄_H² - 1 = 2 > 0,

so e_H is an ascent direction at x̄_H. Hence both assumptions in the theorem are necessary.

One can guarantee descent in a different way by using a variant of MG/Opt where the recursion step is modified to: Obtain x_H^+ by solving

    min_{x_H}  f_s(x_H) ≡ f_H(x_H) - v_H^T x_H
    subject to ‖x_H - x̄_H‖ ≤ Δ     (3)

for some value of Δ > 0. The following theorem is obtained.

Theorem 5 Assume that A1-A4 are satisfied, and that the recursion step in MG/Opt includes the constraint ‖x_H - x̄_H‖ ≤ Δ. If I_h^H ∇f_h(x̄_h) ≠ 0 and Δ is sufficiently small, then e_h is a descent direction for f_h at x̄_h.

Proof. If Δ → 0 then, in the limit, e_H is proportional to the steepest-descent direction

    p = -∇f_s(x̄_H) = -I_h^H ∇f_h(x̄_h),

where the final formula follows from the definition of v_H. If Δ is sufficiently small then

    e_h^T ∇f_h(x̄_h) = (I_H^h e_H)^T ∇f_h(x̄_h) = C_I e_H^T I_h^H ∇f_h(x̄_h) ≈ -γ C_I ‖I_h^H ∇f_h(x̄_h)‖₂² < 0

for some positive scalar γ.

The version of MG/Opt used in [8] includes a constraint of the form ‖x_H - x̄_H‖_∞ ≤ Δ. This is equivalent to adding bound constraints on the variables in the surrogate model.

In the descent theorems, I made the assumption that the update and downdate operators satisfy (I_H^h)^T = C_I I_h^H. This assumption is used to guarantee that the recursion step in MG/Opt produces a descent direction. Suppose instead that (I_H^h)^T = M I_h^H where M is a positive definite matrix. Then repeating the proof of Theorem 4 gives

    ∇f_h(x̄_h)^T e_h = -e_H^T [∇²f_H(ξ) M] e_H + z^T M e_H.

Even if ∇²f_H(ξ) is positive definite, the product B ≡ [∇²f_H(ξ) M] will be positive definite if and only if B is a normal matrix [10]. For general choices of M this is not guaranteed to be true.
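The one-dimensional example in this section (f_s(x_H) = x_H³ - x_H with x̄_H = -1 and x_H^+ = 1/√3) can be verified numerically; a minimal check:

```python
import math

# f_s(x_H) = x_H^3 - x_H; xbar_H = -1; x_H_plus = 1/sqrt(3) is the
# local minimizer of f_s, but the step toward it is uphill at xbar_H.
f = lambda x: x**3 - x
df = lambda x: 3 * x**2 - 1

xbar = -1.0
xplus = 1.0 / math.sqrt(3.0)
e = xplus - xbar                # e_H = 1 + 1/sqrt(3) > 0
ascent = df(xbar) * e > 0      # directional derivative f_s'(xbar) * e_H
```

Here f_s'(x̄_H) = 2 and e_H > 0, so the directional derivative is positive: e_H is an ascent direction even though x_H^+ minimizes f_s locally, because ∇²f_H is not positive along the segment between the two points.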

3 Equality Constraints

I now consider an optimization problem with equality constraints:

    min_{x_h} f_h(x_h)  subject to  a_h(x_h) = 0     (4)

where the subscript h refers to the level of the model. Also provided are a downdate operator I_h^H and an update operator I_H^h for the variables, as well as a downdate operator J_h^H and an update operator J_H^h for the constraints. In the case where the number of constraints remains the same on the fine and coarse levels, J_h^H = J_H^h = I. To specify the MG/Opt algorithm for this case I make the following assumption:

Assumption B1: f_h(x_h) and a_h(x_h) are defined for all choices of x_h.

I define the Lagrangian function as

    L_h(x_h, λ_h) = f_h(x_h) + a_h(x_h)^T λ_h

where λ_h are Lagrange multipliers for the constraints. As before, the MG/Opt algorithm is defined in terms of a convergent optimization algorithm that can be applied to (4). Here I assume that this algorithm is based on a merit function M_h(x_h). Merit functions are usually chosen in such a way that local solutions to (4) correspond to local minimizers of the merit function [12]; in some cases it is possible to prove that local minimizers of the merit function correspond to local solutions to (4) (see, e.g., [1]).

Here is the MG/Opt algorithm for an equality-constrained problem. Given an initial estimate of the solution (x_h^0, λ_h^0), and integers k_1, k_2 ≥ 0 satisfying k_1 + k_2 > 0, for j = 0, 1, ... until converged:

Pre-smoothing: Apply k_1 iterations of a convergent optimization algorithm to (4) to obtain (x̄_h, λ̄_h) (with (x_h^j, λ_h^j) used as the initial guess), where the convergent optimization algorithm is based on a merit function M_h(x_h).

Recursion: Compute x̄_H = I_h^H x̄_h, λ̄_H = J_h^H λ̄_h,

    v_H = ∇_x L_H(x̄_H, λ̄_H) - I_h^H ∇_x L_h(x̄_h, λ̄_h),

and s = a_H(x̄_H) - J_h^H a_h(x̄_h). Minimize (perhaps approximately) the surrogate model

    f_s(x_H) ≡ f_H(x_H) - v_H^T x_H

subject to the surrogate constraints

    a_s(x_H) ≡ a_H(x_H) - s = 0

to obtain (x_H^+, λ_H^+) (with (x̄_H, λ̄_H) used as the initial guess). The minimization could be performed recursively by calling MG/Opt. Compute the search directions e_H = x_H^+ - x̄_H and e_h = I_H^h e_H. Use a line search to determine x_h^+ = x̄_h + α e_h satisfying M_h(x_h^+) ≤ M_h(x̄_h). Compute Lagrange multiplier estimates λ_h^+.

Post-smoothing: Apply k_2 iterations of the same convergent optimization algorithm to (4) to obtain (x_h^{j+1}, λ_h^{j+1}) (with (x_h^+, λ_h^+) used as the initial guess).

Corresponding to the surrogate model and constraints in the recursion step, I define the surrogate Lagrangian as

    L_s(x_H, λ_H) ≡ f_s(x_H) + a_s(x_H)^T λ_H.

It is easy to check that

    ∇L_s(x̄_H, λ̄_H) = [ I_h^H  0 ; 0  J_h^H ] ∇L_h(x̄_h, λ̄_h)

where the gradient is taken with respect to both x and λ, and the operator on the right is block diagonal. In this sense the surrogate model is a first-order approximation to the downdated fine-level model.

Notice that the surrogate model has the same form as the original model. The objective is shifted by a linear term v_H^T x_H and the constraints are shifted by a constant vector s. Thus if the original model (4) has linear constraints then so does the surrogate model. If the original objective is a quadratic function then so is the objective of the surrogate model. And so forth. Thus the same optimization algorithm can be applied to solve the surrogate model as is used in the pre- and post-smoothing steps. This is also true for the other versions of MG/Opt that I discuss.

It is possible to prove convergence for MG/Opt much as in the unconstrained case. A common approach to proving convergence for a constrained optimization algorithm is to show that

    lim_{j→∞} ∇M_h(x_h^j) = 0.

That is, the algorithm guarantees convergence to a stationary point of the merit function. If, for example, a typical line search algorithm is used as the underlying optimization algorithm in MG/Opt, then it would be straightforward to modify the proof of convergence for that algorithm to incorporate the possibility of the recursion step in MG/Opt.
For that reason I will focus on whether the search direction e_h from the recursion step of MG/Opt is guaranteed to be a descent direction for the merit function M_h at x̄_h. I make the following assumptions.

Assumption B2: All of the iterates on level h lie in a compact set S.

Assumption B3: f is twice continuously differentiable on S on all levels.

Assumption B4: a is continuously differentiable on S on all levels.

Assumption B5: The smallest singular value of ∇a is uniformly bounded away from zero on S on all levels.

Assumption B6: At the end of the Pre-smoothing step in MG/Opt, the multipliers λ̄_h satisfy λ̄_h = μ(x̄_h), where μ(x_h) is the least-squares multiplier estimate at x_h (see below).

Assumption B7: The update and downdate operators satisfy

    (I_H^h)^T = C_I I_h^H  and  (J_H^h)^T = C_J J_h^H

for constants C_I, C_J > 0.

Assumptions B2, B3, B4, and B7 are routine. Assumption B5 is a constraint qualification used to guarantee that the Lagrange multiplier estimates are bounded. Assumption B6 is easy to guarantee by computing λ̄_h = μ(x̄_h) if this is not done already by the optimization algorithm used in the pre-smoothing step.

If the surrogate model is minimized exactly then

    ∇_x L_H(x_H^+, λ_H^+) = v_H = ∇_x L_H(x̄_H, λ̄_H) - I_h^H ∇_x L_h(x̄_h, λ̄_h).

Hence

    I_h^H ∇_x L_h(x̄_h, λ̄_h) = [∇_x L_H(x̄_H, λ̄_H) - ∇_x L_H(x_H^+, λ̄_H)] + ∇a_H(x_H^+)(λ̄_H - λ_H^+).

If the surrogate model is only minimized approximately then

    ∇_x L_H(x_H^+, λ_H^+) = v_H + z_1

for some z_1. This condition can be written as

    ∇_x L_s(x_H^+, λ_H^+) = z_1     (5)

where L_s is the Lagrangian for the surrogate model and constraints. If the surrogate constraints are exactly satisfied, then

    a_H(x_H^+) = s = a_H(x̄_H) - J_h^H a_h(x̄_h),

so that

    J_h^H a_h(x̄_h) = a_H(x̄_H) - a_H(x_H^+).

If the constraints are not exactly satisfied then

    J_h^H a_h(x̄_h) = a_H(x̄_H) - a_H(x_H^+) + z_2

for some vector z_2. This condition can be written as

    a_s(x_H^+) = z_2.     (6)

I will consider an augmented-Lagrangian merit function

    M_h(x_h) ≡ f_h(x_h) + a_h(x_h)^T μ(x_h) + (ρ/2) a_h(x_h)^T a_h(x_h),

where μ(x_h) is the least-squares estimate of the Lagrange multipliers at x_h:

    μ(x_h) ≡ -[∇a_h(x_h)^T ∇a_h(x_h)]^{-1} ∇a_h(x_h)^T ∇f_h(x_h).

I obtain the following theorem.

Theorem 6 Assume that B1-B7 are satisfied. The search direction e_h from the recursion step of MG/Opt will be a descent direction with respect to the augmented Lagrangian function M_h if (a) the penalty parameter ρ is sufficiently large, (b) ‖[∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)‖ is sufficiently small, (c) ‖∇a_H(x̄_H) - ∇a_H(x̄_H + α e_H)‖ is sufficiently small for 0 ≤ α ≤ 1, (d) ‖∇_x L_s(x_H^+, λ_H^+)‖ is sufficiently small, (e) e_H^T P ∇²_xx L_H(ξ, λ̄_H) P e_H > 0 for all ξ on the line segment connecting x̄_H and x_H^+, where P is a projection onto the null space of the Jacobian of the constraints at x̄_H, and (f) ‖a_s(x_H^+)‖ is sufficiently small.

Proof. First note that λ̄_h = μ(x̄_h) because of Assumption B6. I test for descent by analyzing

    e_h^T ∇M_h(x̄_h) = [I_H^h e_H]^T ∇M_h(x̄_h)
                    = e_H^T [C_I I_h^H ∇_x L_h(x̄_h, λ̄_h) + ρ (I_H^h)^T ∇a_h(x̄_h) a_h(x̄_h)] + e_h^T ∇μ(x̄_h) a_h(x̄_h)
                    ≡ C_I T_1 + ρ T_2 + T_3,

where

    T_1 = e_H^T I_h^H ∇_x L_h(x̄_h, λ̄_h),
    T_2 = e_H^T (I_H^h)^T ∇a_h(x̄_h) a_h(x̄_h),
    T_3 = e_h^T ∇μ(x̄_h) a_h(x̄_h).

I now analyze the terms T_1, T_2, and T_3. First for T_1:

    T_1 = e_H^T I_h^H ∇_x L_h(x̄_h, λ̄_h)
        = e_H^T [∇_x L_H(x̄_H, λ̄_H) - ∇_x L_H(x_H^+, λ̄_H) + z_1] + e_H^T ∇a_H(x_H^+)(λ̄_H - λ_H^+)
        = -e_H^T ∇²_xx L_H(ξ, λ̄_H) e_H + e_H^T z_1 + e_H^T ∇a_H(x_H^+)(λ̄_H - λ_H^+)
        ≡ T_1a + T_1b + T_1c.

The vector z_1 comes from (5). In the analysis, I have used the mean-value theorem; the point ξ is on the line segment connecting x̄_H and x_H^+. I will discuss the first term T_1a in connection with term T_2a below. The second term T_1b will be small if ‖∇_x L_s(x_H^+, λ_H^+)‖ is small, i.e., if the coarse-level optimization problem is solved accurately enough. The third term T_1c will be bounded because of Assumptions B2, B4, and B5, and assumption (c) of the theorem.

Now I analyze T_2:

    T_2 = e_H^T (I_H^h)^T ∇a_h(x̄_h) a_h(x̄_h)
        = e_H^T [∇a_h(x̄_h)^T I_H^h]^T a_h(x̄_h)
        = e_H^T [J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h) + e_H^T [∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)
        = C_J e_H^T ∇a_H(x̄_H) J_h^H a_h(x̄_h) + e_H^T [∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)
        = C_J e_H^T ∇a_H(x̄_H)[a_H(x̄_H) - a_H(x_H^+)] + C_J e_H^T ∇a_H(x̄_H) z_2
          + e_H^T [∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)
        = -C_J e_H^T ∇a_H(x̄_H) ∇a_H(η)^T e_H + C_J e_H^T ∇a_H(x̄_H) z_2
          + e_H^T [∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)
        = -C_J ‖∇a_H(x̄_H)^T e_H‖₂² + C_J e_H^T ∇a_H(x̄_H)[∇a_H(x̄_H) - ∇a_H(η)]^T e_H
          + C_J e_H^T ∇a_H(x̄_H) z_2 + e_H^T [∇a_h(x̄_h)^T I_H^h - J_H^h ∇a_H(x̄_H)^T]^T a_h(x̄_h)
        ≡ T_2a + T_2b + T_2c + T_2d.

The vector z_2 comes from (6). I have again used the mean-value theorem; the point η is on the line segment connecting x̄_H and x_H^+.

We can examine the terms T_1a and T_2a together:

    C_I T_1a + ρ T_2a = -C_I e_H^T ∇²_xx L_H(ξ, λ̄_H) e_H - ρ C_J ‖∇a_H(x̄_H)^T e_H‖₂² ≡ -e_H^T W e_H

where

    W = C_I ∇²_xx L_H(ξ, λ̄_H) + ρ C_J ∇a_H(x̄_H) ∇a_H(x̄_H)^T.

The matrix W is similar in structure to the Hessian of an augmented-Lagrangian function, and hence is positive definite for ρ sufficiently large if Assumption B5 and assumption (e) above are satisfied (see, e.g., [6]). Hence C_I T_1a + ρ T_2a is negative for ρ sufficiently large. The second term T_2b will be small if ∇a_H(x̄_H) ≈ ∇a_H(η); if the constraints are linear this term will be zero. The third term T_2c will be (nearly) zero if the coarse-level constraints are (nearly) satisfied.
The fourth term T_2d will be small if ∇a_h(x̄_h)^T I_H^h ≈ J_H^h ∇a_H(x̄_H)^T (this is a measure of how well the coarse-level constraints approximate the fine-level constraints), or if ‖a_h(x̄_h)‖ is small.

The term T_3 will be bounded because of the assumptions made at the beginning of this section [2]. More can be said about this term. If a_h(x̄_h) = 0 then T_3 = 0; otherwise this term is dominated by e_H^T W e_H if ρ is sufficiently large. The theorem follows from these statements.

Let me comment on the reasonableness of the additional assumptions in the theorem. Assumption (a) can be dealt with through an appropriate implementation of the algorithm, and is a common assumption in the context of constrained optimization. Assumption (b) states that either the constraints are nearly satisfied, or that the coarse and fine level constraints are good approximations to each other in the sense that ∇a_h(x̄_h)^T I_H^h ≈ J_H^h ∇a_H(x̄_H)^T. Assumption (c) limits the nonlinearity of the constraints, and would restrict how large α could be. Assumption (d) states that the coarse-level model is solved to sufficient accuracy. Assumption (e) is analogous to assumption (b) in Theorem 4; see the discussion in Section 2. Assumption (f) states that the constraints are nearly satisfied.

If the constraints are linear, then the Jacobian of the constraints will be constant on every level, and assumption (c) in the theorem is unnecessary. Also, many classes of algorithms are able to ensure that linear constraints are satisfied at every iteration, and in that case assumptions (b) and (f) would also be unnecessary.

In the case of linear constraints, it is common to insist that the constraints remain satisfied at every iteration. As a consequence A_H e_H = 0 and A_h e_h = 0, where A_h and A_H denote the (constant) constraint Jacobians. Standard optimization techniques can be used to guarantee that A_H e_H = 0. If in addition A_h I_H^h = J_H^h A_H then

    A_h e_h = A_h I_H^h e_H = J_H^h A_H e_H = 0

as well. Further, if the constraints are always satisfied, then the merit function simplifies to M_h(x_h) = f_h(x_h), and proving descent is analogous to the unconstrained case.

If the number of constraints is the same on all levels, i.e., J_h^H = J_H^h = I, then there is a slight simplification in the result.
The second assumption becomes: (b) ‖[∇a_h(x̄_h)^T I_H^h - ∇a_H(x̄_H)^T]^T a_h(x̄_h)‖ is sufficiently small.

3.1 The l_1 Merit Function

Another commonly used merit function is the l_1 merit function:

    M_h(x_h) = f_h(x_h) + ρ ‖a_h(x_h)‖₁.

In the context of sequential quadratic programming methods, it is possible to prove descent with respect to the l_1 merit function [3], and to obtain convergence results analogous to those for the augmented-Lagrangian merit function.

However, the search direction from MG/Opt is not guaranteed to be a descent direction for the l_1 merit function, as the following example demonstrates. The example has a quadratic objective and linear constraints:

    min_{u,f}  ∫₀¹ (u - u*)² dx + ∫₀¹ (f - f*)² dx
    subject to -u''(x) = f(x) + b(x),  0 < x < 1,

with u(0) = u(1) = 0. The functions u*(x), f*(x), and b(x) are specified below.

To obtain the finite-dimensional models, I use a uniform discretization on the interval [0, 1]. I choose evenly spaced points x_0 = 0 < x_1 < ... < x_n < x_{n+1} = 1, where x_i - x_{i-1} = h. Then u_i ≈ u(x_i) and f_i ≈ f(x_i) for 1 ≤ i ≤ n. If I set u_0 = u_{n+1} = 0 then for 1 ≤ i ≤ n:

    (-u_{i-1} + 2u_i - u_{i+1}) / h² = f_i + b_i.

I use the trapezoid rule to approximate the integrals in the objective function, since it has the same order of accuracy as the solution to the differential equation constraint. The fine-level model uses the discretization h = 1/16, and the coarse-level model uses H = 2h. The functions u*, f*, and b are

    u*(x) = 1 + x²
    f*(x) = cos(x)
    b(x) = 20x(x - 1)(x - 0.1)(x - 0.7).

The penalty parameter in the merit function is ρ = 100.

The goal of these tests is to study the descent properties of the search direction from the Recursion step of MG/Opt. For that reason, the tests specify the value of x̄_h, solve the coarse-level subproblem exactly, compute the search direction e_h, and then plot the values of M_h(x̄_h + α e_h) for 0 ≤ α ≤ 1. I choose x̄_h = x_h* - w, where x_h* is the solution to the fine-level problem and w is a random vector obtained using the Matlab commands:

    randn('state', 4)     (7)
    w = randn(n, 1)       (8)

Here n is the number of variables on the fine level.

Figure 1 shows the results for the l_1 merit function, where the search direction from MG/Opt is an ascent direction. Figures 2 and 3 plot the values of the objective function and the penalty term, respectively. Although the objective function is decreasing, the penalty term is increasing, so it is not possible to get descent for the l_1 merit function by increasing the penalty parameter.
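The fine-level discretization just described can be assembled as follows. This is a sketch under stated assumptions: I read the differential constraint as -u'' = f + b to match the difference formula above, and I drop the constant boundary terms of the trapezoid rule, since they do not affect the minimizer.

```python
import numpy as np

# Fine-level discretization of the l_1 merit-function example: h = 1/16,
# interior points x_1..x_n, u_0 = u_{n+1} = 0.
h = 1.0 / 16
n = int(1.0 / h) - 1
x = np.linspace(h, 1 - h, n)

u_star = 1 + x**2
f_star = np.cos(x)
b_star = 20 * x * (x - 1) * (x - 0.1) * (x - 0.7)

# Difference operator for (-u_{i-1} + 2u_i - u_{i+1}) / h^2, with the
# zero boundary values already folded in.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def constraint(u, f):
    """Residual a(u, f) of the discretized differential equation."""
    return A @ u - (f + b_star)

def objective(u, f):
    """Trapezoid-rule approximation of the two integrals (weight h at each
    interior node; constant boundary terms omitted)."""
    return h * np.sum((u - u_star) ** 2) + h * np.sum((f - f_star) ** 2)

def l1_merit(u, f, rho=100.0):
    """l_1 merit function M(u, f) = objective + rho * ||a(u, f)||_1."""
    return objective(u, f) + rho * np.sum(np.abs(constraint(u, f)))
```

At (u*, f*) the objective vanishes but the constraint residual does not, so the merit function is dominated by the penalty term there; the paper's tests instead evaluate the merit function along the MG/Opt search direction from a perturbed point.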

Figure 1: l_1 Merit Function (near solution)

4 Inequality Constraints

I now consider an optimization problem with inequality constraints:

    min_{y_h} g_h(y_h)  subject to  c_h(y_h) ≤ 0     (9)

where the subscript h refers to the level of the model. This problem uses different notation than before, because I will transform it to an equality-constrained problem, and the transformed problem will use the notation used earlier. As before, also provided are a downdate operator I_h^H and an update operator I_H^h for the variables, as well as a downdate operator J_h^H and an update operator J_H^h for the constraints. To specify the MG/Opt algorithm I make the following assumption:

Assumption C1: g_h(y_h) and c_h(y_h) are defined for all choices of y_h.

I define the Lagrangian function as

    L̂_h(y_h, λ_h) = g_h(y_h) + c_h(y_h)^T λ_h.

As in the equality-constrained case, the algorithm is defined in terms of a convergent optimization algorithm that can be applied to (9), and that algorithm is based on a merit function M̂_h(y_h).

Here is the MG/Opt algorithm for an inequality-constrained problem. Given an initial estimate of the solution (y_h^0, λ_h^0), and integers k_1, k_2 ≥ 0 satisfying k_1 + k_2 > 0, for j = 0, 1, ... until converged:

Figure 2: Objective Function

Pre-smoothing: Apply k_1 iterations of a convergent optimization algorithm to (9) to obtain (ȳ_h, λ̄_h) (with (y_h^j, λ_h^j) used as the initial guess), where the convergent optimization algorithm is based on a merit function M̂_h(y_h).

Recursion: Compute ȳ_H = I_h^H ȳ_h, λ̄_H = J_h^H λ̄_h,

    v̂_H = ∇_y L̂_H(ȳ_H, λ̄_H) - I_h^H ∇_y L̂_h(ȳ_h, λ̄_h),

and ŝ = c_H(ȳ_H) - J_h^H c_h(ȳ_h). Minimize (perhaps approximately) the surrogate model

    g_s(y_H) ≡ g_H(y_H) - v̂_H^T y_H

subject to the surrogate constraints

    c_s(y_H) ≡ c_H(y_H) - ŝ ≤ 0

to obtain (y_H^+, λ_H^+) (with (ȳ_H, λ̄_H) used as the initial guess). The minimization could be performed recursively by calling MG/Opt. Compute the search directions e_H = y_H^+ - ȳ_H and e_h = I_H^h e_H. Use a line search to determine y_h^+ = ȳ_h + α e_h satisfying M̂_h(y_h^+) ≤ M̂_h(ȳ_h). Compute Lagrange multiplier estimates λ_h^+.

Post-smoothing: Apply k_2 iterations of the same convergent optimization algorithm to (9) to obtain (y_h^{j+1}, λ_h^{j+1}) (with (y_h^+, λ_h^+) used as the initial guess).

Corresponding to the surrogate model and constraints in the recursion step, I define the surrogate Lagrangian as

    L̂_s(y_H, λ_H) ≡ g_s(y_H) + c_s(y_H)^T λ_H.

Figure 3: l_1 Penalty Term

The surrogate optimization model is chosen so that it is a first-order approximation to the downdated fine-level model in the sense that

    ∇L̂_s(ȳ_H, λ̄_H) = [ I_h^H  0 ; 0  J_h^H ] ∇L̂_h(ȳ_h, λ̄_h).

Here the gradient is with respect to both the variables y_H and the multipliers λ_H.

It will be useful in the later discussion to derive the above algorithm in another way. In the case where J_h^H = J_H^h = I, the algorithm above can be obtained by considering the following equality-constrained problem:

    min_{x_h} f_h(x_h)  subject to  a_h(x_h) = 0

where

    x_h = ( y_h ; z_h ),
    f_h(x_h) = f_h(y_h, z_h) = g_h(y_h),
    a_h(x_h)_i = c_h(y_h)_i + (z_h)_i²,

i.e., I have used squared slack variables to convert the inequalities to equations, an approach that is also used in [1]. The results for equality-constrained problems can be applied to the transformed problem.

In the following, I use Z_h to represent the diagonal matrix with diagonal entries equal to z_h, and similarly for Z̄_h, etc. With this notation the constraints for the transformed problem can be written as

    a_h(x_h) = c_h(y_h) + Z_h z_h = 0.

To derive the surrogate model, look at the Lagrangian for the transformed problem:

    L_h(x_h, λ_h) = f_h(x_h) + a_h(x_h)^T λ_h = g_h(y_h) + [c_h(y_h) + Z_h z_h]^T λ_h.

Then

    ∇_y L_h(x̄_h, λ̄_h) = ∇g_h(ȳ_h) + ∇c_h(ȳ_h) λ̄_h

and

    ∇_z L_h(x̄_h, λ̄_h) = 2 Z̄_h λ̄_h.

If the complementary slackness conditions are satisfied, then

    ∇_z L_h(x̄_h, λ̄_h) = 0.

Similarly,

    ∇_y L_H(x̄_H, λ̄_H) = ∇g_H(ȳ_H) + ∇c_H(ȳ_H) λ̄_H

and

    ∇_z L_H(x̄_H, λ̄_H) = 2 Z̄_H λ̄_H = 2 Z̄_h λ̄_h = 0,

since in this special case J_h^H = J_H^h = I. Thus, in the notation of the equality-constrained version of MG/Opt:

    v_H = ∇_x L_H(x̄_H, λ̄_H) - I_h^H ∇_x L_h(x̄_h, λ̄_h)
        = ( ∇_y L̂_H(ȳ_H, λ̄_H) - I_h^H ∇_y L̂_h(ȳ_h, λ̄_h) ; 0 )
        = ( v̂_H ; 0 ),

since both z-gradients vanish. Hence the objective function for the coarse-level problem is

    f_H(x_H) - v_H^T x_H = g_H(y_H) - v̂_H^T y_H,

as stated above. The coarse-level constraints for the transformed problem are

    0 = a_s(x_H) = a_H(x_H) - s
      = a_H(x_H) - [a_H(x̄_H) - a_h(x̄_h)]
      = c_H(y_H) + Z_H z_H - [(c_H(ȳ_H) + Z̄_H z̄_H) - (c_h(ȳ_h) + Z̄_h z̄_h)]
      = c_H(y_H) + Z_H z_H - [(c_H(ȳ_H) + Z̄_H z̄_H) - (c_h(ȳ_h) + Z̄_H z̄_H)]
      = c_H(y_H) - [c_H(ȳ_H) - c_h(ȳ_h)] + Z_H z_H
      = c_H(y_H) - ŝ + Z_H z_H.

Hence we obtain

    c_H(y_H) - ŝ ≤ 0,

which are the constraints stated in the MG/Opt algorithm above. Note that, because z̄_h = z̄_H, we have that s = ŝ.

I will use this equality-constrained formulation again below. But let me emphasize that the squared slack variables z_h are only used for the purpose

of deriving the MG/Opt algorithm, and for analyzing its behavior. It is not assumed that the optimization algorithms use squared slack variables.

Convergence theorems for MG/Opt can be obtained as in the equality-constrained case. Hence my main focus is to determine under what conditions the search direction $e_h$ from the recursion step of MG/Opt is guaranteed to be a descent direction for the merit function $\hat{M}_h$ at $\bar{y}_h$. I will make the following assumptions (these are similar to the assumptions made in the equality-constrained case):

Assumption C2: All of the iterates on level $h$ lie in a compact set $S$.

Assumption C3: $g_h$ is twice continuously differentiable on $S$ on all levels.

Assumption C4: $c_h$ is continuously differentiable on $S$ on all levels.

Assumption C5: The smallest singular value of the Jacobian of the set of active and violated constraints is uniformly bounded away from zero on $S$ on all levels.

Assumption C6: At the end of the Pre-smoothing step in MG/Opt, the multipliers $\bar{\lambda}_h$ satisfy $\bar{\lambda}_h = \hat{\mu}(\bar{y}_h)$, where $\hat{\mu}(y_h)$ is the least-squares multiplier estimate at $y_h$ (see below).

Assumption C7: The update and downdate operators satisfy $(I_h^H)^T = C_I I_H^h$ and $(J_h^H)^T = C_J J_H^h$ for constants $C_I, C_J > 0$.

The multipliers are computed using the least-squares formula from the last section, based on the current set of active constraints. Multipliers for inactive constraints are zero.

In the following I will refer both to the original optimization problem (9) as well as the corresponding equality-constrained problem involving squared slack variables. I will also define the set of constraint violations
\[
\hat{c}_h(y_h) = \max\{c_h(y_h), 0\}.
\]
We can write the constraints in three different ways:
\[
c_h(y_h) \le 0, \qquad
a_h(x_h) = c_h(y_h) + Z_h z_h = 0, \qquad
\hat{c}_h(y_h) = 0.
\]
There will be analogous definitions for the coarse-level model. The discussion below does not assume that $J_h^H = J_H^h = I$.

If the surrogate model is minimized exactly then $\nabla_y \hat{L}_s(y_H^+, \lambda_H^+) = 0$, where $\hat{L}_s$ is the Lagrangian for the surrogate model, but in general
\[
\nabla_y \hat{L}_s(y_H^+, \lambda_H^+) = z_1 \qquad (10)
\]
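The least-squares multiplier estimate referenced in Assumption C6 can be sketched as follows. The data are hypothetical: given the objective gradient and the Jacobian of the active constraints, the estimate solves $\min_\mu \|\nabla g + A^T \mu\|_2$ and assigns zero multipliers to inactive constraints, as stated in the text.

```python
import numpy as np

# Least-squares multiplier estimate (sketch): solve min_mu ||g + A^T mu||_2
# over the active constraints; inactive constraints get zero multipliers.
def ls_multipliers(grad_g, A_active, n_constraints, active_idx):
    mu_active, *_ = np.linalg.lstsq(A_active.T, -grad_g, rcond=None)
    mu = np.zeros(n_constraints)
    mu[active_idx] = mu_active
    return mu

grad_g = np.array([1.0, 2.0, 0.0])           # hypothetical objective gradient
A_active = np.array([[1.0, 0.0, 0.0],        # Jacobian rows of the two
                     [0.0, 1.0, 0.0]])       # active constraints
mu = ls_multipliers(grad_g, A_active, n_constraints=3, active_idx=[0, 1])
print(mu)   # multipliers for the active constraints; zero for the inactive one
```

Here the residual $\nabla g + A^T \mu$ vanishes exactly, so the estimate reproduces the multipliers of a stationary point.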

for some $z_1$. Similarly, if the surrogate constraints are exactly satisfied then $a_s(x_H^+) = 0$, but more generally
\[
a_s(x_H^+) = z_2 \qquad (11)
\]
for some $z_2$.

I will consider an augmented-Lagrangian merit function
\[
\hat{M}_h(y_h) = g_h(y_h) + \hat{c}_h(y_h)^T \hat{\mu}(y_h) + \frac{\rho}{2}\, \hat{c}_h(y_h)^T \hat{c}_h(y_h),
\]
where $\hat{\mu}(y_h)$ is the least-squares estimate of the Lagrange multipliers at $y_h$. In terms of the transformed model we have that
\[
\hat{c}_h(y_h) = c_h(y_h) + Z_h z_h = a_h(x_h),
\]
assuming that the slack variables are defined appropriately. Thus, if we define $\mu(x_h) = \hat{\mu}(y_h)$ then we can define the merit function for the transformed problem as
\[
M_h(x_h) \equiv f_h(x_h) + a_h(x_h)^T \mu(x_h) + \frac{\rho}{2}\, a_h(x_h)^T a_h(x_h).
\]
Before analyzing the descent properties of MG/Opt, I derive formulas for the gradient of the merit function at $\bar{x}_h = (\bar{y}_h, \bar{z}_h)$:
\[
\nabla_z M_h(\bar{y}_h, \bar{z}_h)
= 2 \bar{Z}_h \mu(\bar{y}_h, \bar{z}_h) + 2 \rho \bar{Z}_h [c_h(\bar{y}_h) + \bar{Z}_h \bar{z}_h]
= 2 \bar{Z}_h \bar{\lambda}_h + 2 \rho \bar{Z}_h [c_h(\bar{y}_h) + \bar{Z}_h \bar{z}_h].
\]
As discussed earlier, $\bar{Z}_h \bar{\lambda}_h = 0$ because of complementary slackness. The other term is also zero because if $(\bar{z}_h)_i \ne 0$ then $c_h(\bar{y}_h)_i + (\bar{z}_h)_i^2 = 0$. Hence
\[
\nabla_z M_h(\bar{y}_h, \bar{z}_h) = 0.
\]
In addition,
\[
\begin{aligned}
\nabla_y M_h(\bar{y}_h, \bar{z}_h)
&= \nabla g_h(\bar{y}_h) + \rho \nabla c_h(\bar{y}_h)^T [c_h(\bar{y}_h) + \bar{Z}_h \bar{z}_h]
 + \nabla c_h(\bar{y}_h)^T \bar{\lambda}_h
 + \nabla \mu(\bar{y}_h, \bar{z}_h)^T [c_h(\bar{y}_h) + \bar{Z}_h \bar{z}_h] \\
&= \nabla_y f_h(\bar{y}_h, \bar{z}_h) + \nabla_y a_h(\bar{y}_h, \bar{z}_h)^T \bar{\lambda}_h
 + \rho \nabla_y a_h(\bar{y}_h, \bar{z}_h)^T a_h(\bar{y}_h, \bar{z}_h)
 + \nabla_y \mu(\bar{y}_h, \bar{z}_h)^T a_h(\bar{y}_h, \bar{z}_h) \\
&= \nabla_y L_h(\bar{x}_h, \bar{\lambda}_h) + \rho \nabla_y a_h(\bar{x}_h)^T a_h(\bar{x}_h)
 + \nabla_y \mu(\bar{x}_h)^T a_h(\bar{x}_h).
\end{aligned}
\]
If we test for descent for the transformed problem then the search direction on the fine level is
\[
p_h = \begin{pmatrix} e_h \\ z_h^+ - \bar{z}_h \end{pmatrix}.
\]
However, since $\nabla_z M_h(\bar{y}_h, \bar{z}_h) = 0$, we have that
\[
p_h^T \nabla M_h(\bar{y}_h, \bar{z}_h) = e_h^T \nabla_y M_h(\bar{y}_h, \bar{z}_h) = e_h^T \nabla \hat{M}_h(\bar{y}_h).
\]
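The descent condition $e^T \nabla \hat{M} < 0$ can be checked numerically. A minimal sketch with hypothetical data (the multiplier term of the merit function is omitted for simplicity, leaving a quadratic-penalty variant):

```python
import numpy as np

# Numerical descent check: e is a descent direction for the merit function
# M_hat if the directional derivative e^T grad M_hat is negative.
def m_hat(y, rho=10.0):
    g = y @ y                                   # objective g(y)
    c = np.array([1.0 - y[0] - y[1]])           # constraint c(y) <= 0
    viol = np.maximum(c, 0.0)                   # c_hat(y) = max{c(y), 0}
    return g + 0.5 * rho * viol @ viol          # multiplier term omitted

y_bar = np.array([0.2, 0.2])                    # infeasible: c(y_bar) = 0.6
e = np.array([1.0, 1.0])                        # candidate correction
t = 1e-6                                        # forward-difference step
deriv = (m_hat(y_bar + t * e) - m_hat(y_bar)) / t
print(deriv < 0)                                # True: e decreases M_hat
```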

As a result, we can take advantage of the analysis from the equality-constrained case. The theorem for equality constraints applies immediately, but its assumptions involve derivatives with respect to all of the variables for the transformed problem. However, as the above analysis indicates, the only non-zero terms are associated with the derivatives with respect to the variables $y_h$ and $y_H$, and not the derivatives with respect to the slack variables $z_h$ and $z_H$. Thus we obtain the theorem below.

Theorem 7 Assume that C1–C7 are satisfied. The search direction $e_h$ from the recursion step of MG/Opt will be a descent direction with respect to the augmented Lagrangian function $\hat{M}_h$ if

(a) the penalty parameter $\rho$ is sufficiently large;

(b) $[\nabla c_h(\bar{y}_h) I_H^h - \nabla c_H(\bar{y}_H)]^T \hat{c}_h(\bar{y}_h)$ is sufficiently small;

(c) $\nabla c_H(\bar{y}_H) - \nabla c_H(\bar{y}_H + \alpha e_H)$ is sufficiently small for $0 \le \alpha \le 1$;

(d) $\nabla_y \hat{L}_s(y_H^+, \lambda_H^+)$ is sufficiently small;

(e) $e_H^T P^T \nabla_{xx}^2 L_H(\xi, \bar{\lambda}_H) P\, e_H > 0$ for $\bar{y}_H \le \xi \le y_H^+$, where $P$ is a projection onto the null space of the Jacobian of the active constraints at $\bar{y}_H$; and

(f) $\hat{c}_s(y_H^+)$ is sufficiently small, where $\hat{c}_s$ corresponds to the constraint violations in the surrogate model.

If the constraints are linear, then the Jacobian of the constraints will be constant on every level, and assumption (c) is unnecessary. Also, many classes of algorithms are able to ensure that linear constraints are satisfied at every iteration, and in that case assumptions (b) and (f) would also be unnecessary. If the constraints are always satisfied, then the merit function simplifies to $M_h(x_h) = f_h(x_h)$, and proving descent is analogous to the unconstrained case. This will be true for some algorithms in the case of linear constraints. It will also be true if interior-point methods are used and the iterates are feasible.

5 Summary of MG/Opt Algorithm

The earlier description isolates the essentials of the algorithm, in a form suitable for analyzing convergence properties. The following description is more useful for purposes of implementation.
It applies to the general optimization problem (2), and assumes the availability of appropriate update and downdate operators: $I_H^h$ and $I_h^H$ for the variables $x$, $J_H^h$ and $J_h^H$ for the equality constraints $a$, and $K_H^h$ and $K_h^H$ for the inequality constraints $c$. I will use $\lambda$ to refer to the

multipliers for the equality constraints and $\mu$ to refer to the multipliers for the inequality constraints. The Lagrangian for (2) is
\[
L_h(x_h, \lambda_h, \mu_h) = f_h(x_h) + a_h(x_h)^T \lambda_h + c_h(x_h)^T \mu_h.
\]
The algorithm also assumes the availability of a convergent optimization algorithm Opt, defined as a function of the form
\[
(x^+, \lambda^+, \mu^+) \leftarrow \mathrm{Opt}(f(\cdot), v, a(\cdot), s_a, c(\cdot), s_c, \bar{x}, \bar{\lambda}, \bar{\mu}, k)
\]
which applies $k$ iterations of a convergent optimization algorithm to the problem
\[
\min_x \; f(x) - v^T x \quad \text{subject to} \quad a(x) - s_a = 0, \quad c(x) - s_c \le 0
\]
with initial guess $(\bar{x}, \bar{\lambda}, \bar{\mu})$ to obtain $(x^+, \lambda^+, \mu^+)$. If the parameter $k$ is omitted, the optimization algorithm continues to run until its termination criteria are satisfied. The algorithm Opt is assumed to be based on a merit function $M_h(x_h)$.

The algorithm MG/Opt has non-negative integer parameters $k_1$ and $k_2$ satisfying $k_1 + k_2 > 0$. It is straightforward to modify this algorithm to apply to an unconstrained problem or a problem with only equality constraints. In those cases the optimization algorithm Opt and its calling sequence would be simplified. In the unconstrained case the merit function would just be the objective function.

There is considerable flexibility in how the algorithm is implemented. The convergence of MG/Opt only depends on the convergence of the underlying algorithm used for optimization on the finest level. Hence it would be possible to change the values of $k_1$ and $k_2$ from iteration to iteration. This might be appropriate if the initial guess were poor, and it was desirable to use a lower-cost method at points far from the solution. It would also be possible to adjust the characteristics of the underlying optimization method, as long as the convergence guarantees were maintained.

Here then is the algorithm: Given an initial estimate of the solution $(x_h^0, \lambda_h^0, \mu_h^0)$, set $v_h = 0$, $s_{a,h} = 0$, and $s_{c,h} = 0$.
Then for $j = 0, 1, \ldots$, set
\[
(x_h^{j+1}, \lambda_h^{j+1}, \mu_h^{j+1}) \leftarrow \mathrm{MG/Opt}(f_h(\cdot), v_h, a_h(\cdot), s_{a,h}, c_h(\cdot), s_{c,h}, x_h^j, \lambda_h^j, \mu_h^j)
\]
where the function MG/Opt is defined as follows:

Coarse-level solve: If on the coarsest level,
\[
(x_h^{j+1}, \lambda_h^{j+1}, \mu_h^{j+1}) \leftarrow \mathrm{Opt}(f_h(\cdot), v_h, a_h(\cdot), s_{a,h}, c_h(\cdot), s_{c,h}, x_h^j, \lambda_h^j, \mu_h^j).
\]
Otherwise,

Pre-smoothing:
\[
(\bar{x}_h, \bar{\lambda}_h, \bar{\mu}_h) \leftarrow \mathrm{Opt}(f_h(\cdot), v_h, a_h(\cdot), s_{a,h}, c_h(\cdot), s_{c,h}, x_h^j, \lambda_h^j, \mu_h^j, k_1)
\]

Recursion: Compute
\[
\begin{aligned}
\bar{x}_H &= I_h^H \bar{x}_h \\
\bar{\lambda}_H &= J_h^H \bar{\lambda}_h \\
\bar{\mu}_H &= K_h^H \bar{\mu}_h \\
v_H &= I_h^H v_h + \nabla L_H(\bar{x}_H, \bar{\lambda}_H, \bar{\mu}_H) - I_h^H \nabla L_h(\bar{x}_h, \bar{\lambda}_h, \bar{\mu}_h) \\
s_{a,H} &= J_h^H s_{a,h} + a_H(\bar{x}_H) - J_h^H a_h(\bar{x}_h) \\
s_{c,H} &= K_h^H s_{c,h} + c_H(\bar{x}_H) - K_h^H c_h(\bar{x}_h)
\end{aligned}
\]
Apply MG/Opt recursively to the surrogate model:
\[
(x_H^+, \lambda_H^+, \mu_H^+) \leftarrow \mathrm{MG/Opt}(f_H(\cdot), v_H, a_H(\cdot), s_{a,H}, c_H(\cdot), s_{c,H}, \bar{x}_H, \bar{\lambda}_H, \bar{\mu}_H)
\]
Compute the search directions $e_H = x_H^+ - \bar{x}_H$ and $e_h = I_H^h e_H$. Use a line search to determine $x_h^+ = \bar{x}_h + \alpha e_h$ satisfying $M_h(x_h^+) \le M_h(\bar{x}_h)$. Compute the new multipliers $\lambda_h^+$ and $\mu_h^+$.

Post-smoothing:
\[
(x_h^{j+1}, \lambda_h^{j+1}, \mu_h^{j+1}) \leftarrow \mathrm{Opt}(f_h(\cdot), v_h, a_h(\cdot), s_{a,h}, c_h(\cdot), s_{c,h}, x_h^+, \lambda_h^+, \mu_h^+, k_2)
\]

6 Acknowledgements

I would like to thank Paul Boggs, David Gay, and Michael Lewis for their many helpful comments, and in particular to thank Michael Lewis for suggesting the example used in Section 3.1.

References

[1] P. T. Boggs, A. J. Kearsley, and J. W. Tolle, A global convergence analysis of an algorithm for large-scale nonlinear optimization problems, SIAM Journal on Optimization, 9 (1999), pp.

[2] P. T. Boggs and J. W. Tolle, Sequential quadratic programming, Acta Numerica, 4 (1995), pp.

[3] R. H. Byrd and J. Nocedal, An analysis of reduced Hessian methods for constrained optimization, Mathematical Programming, 49 (1991), pp.

[4] S. Gratton, A. Sartenaer, and P. L. Toint, Recursive trust-region methods for multilevel nonlinear optimization, SIAM Journal on Optimization, 19 (2008), pp.

[5] L. Grippo, F. Lampariello, and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM Journal on Numerical Analysis, 23 (1986), pp.

[6] I. Griva, S. G. Nash, and A. Sofer, Linear and Nonlinear Optimization, SIAM, Philadelphia.

[7] N. Kydes, A Multigrid Solution of the Continuous Dynamic Disequilibrium Network Design Problem, PhD thesis, School of Information Technology and Engineering, George Mason University, Fairfax, Virginia.

[8] R. M. Lewis and S. G. Nash, Model problems for the multigrid optimization of systems governed by differential equations, SIAM Journal on Scientific Computing, 26 (2005), pp.

[9] S. F. McCormick, Multilevel Projection Methods for Partial Differential Equations, Society for Industrial and Applied Mathematics.

[10] A. Meenakshi and C. Rajian, On a product of positive semidefinite matrices, Linear Algebra and its Applications, 295 (1999), pp.

[11] S. G. Nash, A multigrid approach to discretized optimization problems, Journal of Computational and Applied Mathematics, 14 (2000), pp.

[12] J. Nocedal and S. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York.

[13] M. P. Rumpfkeil and D. J. Mavriplis, Optimization-based multigrid applied to aerodynamic shape design, tech. report, Department of Mechanical Engineering, University of Wyoming, Laramie.

[14] M. Vallejos and A. Borzì, Multigrid optimization methods for linear and bilinear elliptic optimal control problems, Computing, 82 (2008), pp.

[15] Z. Wen and D. Goldfarb, A line search multigrid method for large-scale convex optimization, tech. report, Department of IEOR, Columbia University.
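As a concluding illustration, the two-level recursion of the MG/Opt algorithm summarized in Section 5 can be sketched in code. The sketch below is the unconstrained specialization (so the merit function is just the objective), with hypothetical quadratic models, simple averaging transfer operators, and gradient descent playing the role of Opt; every name here is illustrative, not part of the paper.

```python
import numpy as np

# Two-level unconstrained MG/Opt sketch (illustrative; the paper's general
# version also carries constraints a(.), c(.) and shifts s_a, s_c).
A_h = np.diag([1.0, 2.0, 3.0, 4.0]); b_h = np.ones(4)   # fine model
A_H = np.diag([1.0, 3.0]);           b_H = np.ones(2)   # coarse model

I_hH = np.array([[1., 1., 0., 0.],
                 [0., 0., 1., 1.]]) / 2.0               # downdate operator
I_Hh = I_hH.T                                           # update operator

def grad(A, b, v, x):               # gradient of f(x) - v^T x
    return A @ x - b - v

def opt(A, b, v, x, k, lr=0.1):     # "Opt": k gradient-descent iterations
    for _ in range(k):
        x = x - lr * grad(A, b, v, x)
    return x

def mgopt(x, k1=2, k2=2):
    v = np.zeros(4)                                     # finest level: v = 0
    x_bar = opt(A_h, b_h, v, x, k1)                     # pre-smoothing
    x_bar_H = I_hH @ x_bar                              # recursion step
    v_H = grad(A_H, b_H, 0.0, x_bar_H) - I_hH @ grad(A_h, b_h, v, x_bar)
    x_H_plus = opt(A_H, b_H, v_H, x_bar_H, 50)          # coarse-level solve
    e_h = I_Hh @ (x_H_plus - x_bar_H)                   # search direction
    f = lambda y: 0.5 * y @ A_h @ y - (b_h + v) @ y     # merit = objective
    alpha = 1.0                                         # line search
    while f(x_bar + alpha * e_h) > f(x_bar) and alpha > 1e-8:
        alpha *= 0.5
    if f(x_bar + alpha * e_h) > f(x_bar):
        alpha = 0.0                                     # reject the correction
    return opt(A_h, b_h, v, x_bar + alpha * e_h, k2)    # post-smoothing

x = np.zeros(4)
for _ in range(30):
    x = mgopt(x)
print(np.allclose(x, np.linalg.solve(A_h, b_h), atol=1e-4))   # True
```

The coarse correction is accepted only when it does not increase the merit function, mirroring the line-search safeguard that underpins the global convergence guarantee.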


More information

Continuity. Example 1

Continuity. Example 1 Continuity MATH 1003 Calculus and Linear Algebra (Lecture 13.5) Maoseng Xiong Department of Matematics, HKUST A function f : (a, b) R is continuous at a point c (a, b) if 1. x c f (x) exists, 2. f (c)

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

The total error in numerical differentiation

The total error in numerical differentiation AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and

More information

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds.

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds. Numerical solvers for large systems of ordinary differential equations based on te stocastic direct simulation metod improved by te and Runge Kutta principles Flavius Guiaş Abstract We present a numerical

More information

Differential Calculus (The basics) Prepared by Mr. C. Hull

Differential Calculus (The basics) Prepared by Mr. C. Hull Differential Calculus Te basics) A : Limits In tis work on limits, we will deal only wit functions i.e. tose relationsips in wic an input variable ) defines a unique output variable y). Wen we work wit

More information

University Mathematics 2

University Mathematics 2 University Matematics 2 1 Differentiability In tis section, we discuss te differentiability of functions. Definition 1.1 Differentiable function). Let f) be a function. We say tat f is differentiable at

More information

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c)

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c) Paper 1: Pure Matematics 1 Mark Sceme 1(a) (i) (ii) d d y 3 1x 4x x M1 A1 d y dx 1.1b 1.1b 36x 48x A1ft 1.1b Substitutes x = into teir dx (3) 3 1 4 Sows d y 0 and states ''ence tere is a stationary point''

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Travelling waves for a thin liquid film with surfactant on an inclined plane

Travelling waves for a thin liquid film with surfactant on an inclined plane IOP PUBLISHING Nonlinearity (009) 85 NONLINEARITY doi:0.088/095-775///006 Travelling waves for a tin liquid film wit surfactant on an inclined plane Vaagn Manukian and Stepen Scecter Matematics Department,

More information

The cluster problem in constrained global optimization

The cluster problem in constrained global optimization Te cluster problem in constrained global optimization Te MIT Faculty as made tis article openly available. Please sare ow tis access benefits you. Your story matters. Citation As Publised Publiser Kannan,

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

arxiv: v1 [math.oc] 18 May 2018

arxiv: v1 [math.oc] 18 May 2018 Derivative-Free Optimization Algoritms based on Non-Commutative Maps * Jan Feiling,, Amelie Zeller, and Cristian Ebenbauer arxiv:805.0748v [mat.oc] 8 May 08 Institute for Systems Teory and Automatic Control,

More information

MANY scientific and engineering problems can be

MANY scientific and engineering problems can be A Domain Decomposition Metod using Elliptical Arc Artificial Boundary for Exterior Problems Yajun Cen, and Qikui Du Abstract In tis paper, a Diriclet-Neumann alternating metod using elliptical arc artificial

More information

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y

More information

Generic maximum nullity of a graph

Generic maximum nullity of a graph Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n

More information

1. Introduction. We consider the model problem: seeking an unknown function u satisfying

1. Introduction. We consider the model problem: seeking an unknown function u satisfying A DISCONTINUOUS LEAST-SQUARES FINITE ELEMENT METHOD FOR SECOND ORDER ELLIPTIC EQUATIONS XIU YE AND SHANGYOU ZHANG Abstract In tis paper, a discontinuous least-squares (DLS) finite element metod is introduced

More information

arxiv: v1 [math.na] 3 Nov 2011

arxiv: v1 [math.na] 3 Nov 2011 arxiv:.983v [mat.na] 3 Nov 2 A Finite Difference Gost-cell Multigrid approac for Poisson Equation wit mixed Boundary Conditions in Arbitrary Domain Armando Coco, Giovanni Russo November 7, 2 Abstract In

More information

EXTENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS WITH APPLICATION TO AN AEROACOUSTIC PROBLEM

EXTENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS WITH APPLICATION TO AN AEROACOUSTIC PROBLEM SIAM J. SCI. COMPUT. Vol. 26, No. 3, pp. 821 843 c 2005 Society for Industrial and Applied Matematics ETENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

arxiv: v1 [math.na] 17 Jul 2014

arxiv: v1 [math.na] 17 Jul 2014 Div First-Order System LL* FOSLL* for Second-Order Elliptic Partial Differential Equations Ziqiang Cai Rob Falgout Sun Zang arxiv:1407.4558v1 [mat.na] 17 Jul 2014 February 13, 2018 Abstract. Te first-order

More information

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Statistica Sinica 24 2014, 395-414 doi:ttp://dx.doi.org/10.5705/ss.2012.064 EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Jun Sao 1,2 and Seng Wang 3 1 East Cina Normal University,

More information

Computing eigenvalues and eigenfunctions of Schrödinger equations using a model reduction approach

Computing eigenvalues and eigenfunctions of Schrödinger equations using a model reduction approach Computing eigenvalues and eigenfunctions of Scrödinger equations using a model reduction approac Suangping Li 1, Ziwen Zang 2 1 Program in Applied and Computational Matematics, Princeton University, New

More information

Department of Mathematical Sciences University of South Carolina Aiken Aiken, SC 29801

Department of Mathematical Sciences University of South Carolina Aiken Aiken, SC 29801 RESEARCH SUMMARY AND PERSPECTIVES KOFFI B. FADIMBA Department of Matematical Sciences University of Sout Carolina Aiken Aiken, SC 29801 Email: KoffiF@usca.edu 1. Introduction My researc program as focused

More information

A = h w (1) Error Analysis Physics 141

A = h w (1) Error Analysis Physics 141 Introduction In all brances of pysical science and engineering one deals constantly wit numbers wic results more or less directly from experimental observations. Experimental observations always ave inaccuracies.

More information

The Krewe of Caesar Problem. David Gurney. Southeastern Louisiana University. SLU 10541, 500 Western Avenue. Hammond, LA

The Krewe of Caesar Problem. David Gurney. Southeastern Louisiana University. SLU 10541, 500 Western Avenue. Hammond, LA Te Krewe of Caesar Problem David Gurney Souteastern Louisiana University SLU 10541, 500 Western Avenue Hammond, LA 7040 June 19, 00 Krewe of Caesar 1 ABSTRACT Tis paper provides an alternative to te usual

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

Multigrid Methods for Discretized PDE Problems

Multigrid Methods for Discretized PDE Problems Towards Metods for Discretized PDE Problems Institute for Applied Matematics University of Heidelberg Feb 1-5, 2010 Towards Outline A model problem Solution of very large linear systems Iterative Metods

More information

Chapter 1. Density Estimation

Chapter 1. Density Estimation Capter 1 Density Estimation Let X 1, X,..., X n be observations from a density f X x. Te aim is to use only tis data to obtain an estimate ˆf X x of f X x. Properties of f f X x x, Parametric metods f

More information

Continuity and Differentiability of the Trigonometric Functions

Continuity and Differentiability of the Trigonometric Functions [Te basis for te following work will be te definition of te trigonometric functions as ratios of te sides of a triangle inscribed in a circle; in particular, te sine of an angle will be defined to be te

More information