Computational Methods in Optimal Control Lecture 8. hp-collocation
1 Computational Methods in Optimal Control, Lecture 8: hp-collocation. William W. Hager. July 26, 2018
2 10,000 Yen Prize Problem (google this)

Let $p$ be a polynomial of degree at most $N$ and let $-1 < \tau_1 < \tau_2 < \cdots < \tau_N < 1$ be the Gauss quadrature points. Suppose that $p(-1) = 0$ and $|\dot p(\tau_i)| \le 1$ for all $1 \le i \le N$. Show that $|p(\tau_i)| \le 2$ for all $1 \le i \le N$.

This gives $\|D^{-1}\|_\infty \le 2$: Let $p \in \mathcal{P}_N$ with $p(-1) = 0$. Let $\mathbf{p}$ be the vector with components $p_i = p(\tau_i)$, and let $\dot{\mathbf{p}}$ be the vector with components $\dot p_i = \dot p(\tau_i)$. Since $D\mathbf{p} = \dot{\mathbf{p}}$, we have $\mathbf{p} = D^{-1}\dot{\mathbf{p}}$. Since $\|D^{-1}\|_\infty = \max\{\|D^{-1}y\|_\infty : \|y\|_\infty \le 1\}$, it follows that $\|D^{-1}\|_\infty$ is the maximum of $\max_i |p(\tau_i)|$ over all polynomials for which $p(-1) = 0$ and $|\dot p(\tau_i)| \le 1$.

NOTE: the polynomial that achieves the maximum value of $\max_i |p(\tau_i)|$ is $p(\tau) = 1 + \tau$.
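The claim is easy to probe numerically: build the differentiation matrix on the Gauss points (with the extra node $\tau_0 = -1$, dropping its column since $p(-1) = 0$) and compare $\|D^{-1}\|_\infty$ with the value $1 + \tau_N$ produced by the candidate polynomial $p(\tau) = 1 + \tau$. A minimal sketch, assuming NumPy; assembling $D$ by differentiating the monomial interpolant is just one convenient construction:

```python
import numpy as np

N = 10
tau, _ = np.polynomial.legendre.leggauss(N)     # Gauss points in (-1, 1)

# Differentiation matrix on the nodes {-1, tau_1, ..., tau_N}:
# D[i, j] = Phidot_j(tau_i), built by differentiating the monomial interpolant.
t = np.concatenate(([-1.0], tau))
M = N + 1
V = np.vander(t, M, increasing=True)            # V[i, k] = t_i^k
Vd = np.hstack([np.zeros((M, 1)),
                np.vander(t, M - 1, increasing=True) * np.arange(1, M)])
D = (Vd @ np.linalg.inv(V))[1:, :]              # rows = the Gauss points
Dinv = np.linalg.inv(D[:, 1:])                  # drop the column for tau_0 = -1

norm_inf = np.abs(Dinv).sum(axis=1).max()       # ||D^{-1}||_inf

# p(tau) = 1 + tau is feasible (p(-1) = 0, pdot == 1), so the norm is at
# least 1 + tau_N; the prize claim says it never reaches 2.
assert 1.0 + tau[-1] <= norm_inf + 1e-10
assert norm_inf < 2.0
```
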
3 Model Problem for hp Pseudospectral Method

minimize $C(x(1))$
subject to $\dot x(t) = f(x(t), u(t))$, $u(t) \in \mathcal{U}$, $t \in \Omega_0$, $x(0) = x_0$.

Here $\Omega_0 = [0,1]$, $x_0$ is given, $x(t) \in \mathbb{R}^n$, $\mathcal{U} \subset \mathbb{R}^m$ is closed and convex, $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$, and $C : \mathbb{R}^n \to \mathbb{R}$.
4 hp Pseudospectral Idea

Partition the time domain $\Omega_0$ into $K$ subintervals $[t_{k-1}, t_k]$, $0 = t_0 < t_1 < \cdots < t_K = 1$, and use a different polynomial to approximate the state on each subinterval. Require continuity across subintervals. Change variables so that $x^k(\tau)$, $\tau \in [-1,1]$, corresponds to $x(t)$, $t \in [t_{k-1}, t_k]$; for a uniform mesh the interval half-width is $h = 1/(2K)$ and the midpoint is $\bar t_k = (t_{k-1} + t_k)/2$.

Continuous: $x^k(\tau) = x(\bar t_k + h\tau)$. Discrete: $x^k \in \mathcal{P}_N$.
5 Radau Collocation Points

Radau quadrature points (zeros of $P^{(1,0)}_{N-1}$ plus $\tau_N = 1$): $-1 < \tau_1 < \tau_2 < \cdots < \tau_N = +1$. Additional point used in the analysis: $\tau_0 = -1$.

Quadrature weights:
$\omega_i = \dfrac{4(1+\tau_i)}{\big[(1-\tau_i^2)\,\dot P^{(1,0)}_{N-1}(\tau_i)\big]^2}$, $1 \le i \le N-1$, and $\omega_N = \dfrac{2}{N^2}$.

For every $p \in \mathcal{P}_{2N-2}$: $\displaystyle\int_{-1}^{1} p(\tau)\,d\tau = \sum_{i=1}^{N} \omega_i\, p(\tau_i)$.
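A sketch of how these points and weights can be computed numerically, assuming NumPy. The flipped Radau nodes are obtained here as the roots of $P_N - P_{N-1}$, whose interior roots coincide with the zeros of $P^{(1,0)}_{N-1}$; the weights are recovered from the moment equations rather than the closed form:

```python
import numpy as np
from numpy.polynomial import legendre as leg
from numpy.polynomial import polynomial as poly

def radau_points_weights(N):
    """Flipped Legendre-Gauss-Radau rule on [-1, 1] with tau_N = +1."""
    # The N nodes are the roots of P_N - P_{N-1}; the interior ones are
    # the zeros of the Jacobi polynomial P^{(1,0)}_{N-1}.
    c = np.zeros(N + 1)
    c[N], c[N - 1] = 1.0, -1.0
    tau = np.sort(leg.legroots(c))
    # Weights from sum_i w_i tau_i^k = int_{-1}^1 t^k dt, k = 0..N-1
    # (an N-point rule is already determined by exactness on P_{N-1}).
    k = np.arange(N)
    V = np.vander(tau, N, increasing=True).T      # V[k, i] = tau_i^k
    w = np.linalg.solve(V, (1.0 - (-1.0) ** (k + 1)) / (k + 1))
    return tau, w

N = 5
tau, w = radau_points_weights(N)

# Endpoint weight matches the closed form omega_N = 2/N^2.
assert np.isclose(tau[-1], 1.0) and np.isclose(w[-1], 2.0 / N**2)

# The rule is exact for every polynomial of degree 2N-2.
cp = np.random.default_rng(0).standard_normal(2 * N - 1)   # degree 2N-2
ip = poly.polyint(cp)
exact = poly.polyval(1.0, ip) - poly.polyval(-1.0, ip)
quad = w @ poly.polyval(tau, cp)
assert np.isclose(quad, exact)
```
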
6 Continuous and Discrete Problems on K Mesh Intervals

Continuous: minimize $C(x^K(1))$ subject to
$\dot x^k(\tau) = h f(x^k(\tau), u^k(\tau))$, $u^k(\tau) \in \mathcal{U}$, $\tau \in \Omega$, $x^k(-1) = x^{k-1}(1)$, $1 \le k \le K$,
where $\Omega = [-1, 1]$.

Discrete: minimize $C(x^K(1))$ subject to
$\dot x^k(\tau_i) = h f(x^k(\tau_i), u_{ki})$, $u_{ki} \in \mathcal{U}$, $1 \le i \le N$, $x^k(-1) = x^{k-1}(1)$, $x^k \in \mathcal{P}^n_N$, $1 \le k \le K$.
7 Easy to Satisfy Continuity with Radau

Lagrange interpolating polynomials: for $0 \le i \le N$,
$\Phi_i(\tau) = \prod_{\substack{j=0 \\ j \ne i}}^{N} \dfrac{\tau - \tau_j}{\tau_i - \tau_j}$, so that $\Phi_i(\tau_j) = 1$ for $j = i$ and $0$ otherwise.

If $x^k \in \mathcal{P}_N$, then $x^k(\tau) = \sum_{j=0}^{N} X_{kj}\,\Phi_j(\tau)$, where $X_{kj} = x^k(\tau_j)$.

Continuity $\Leftrightarrow$ $X_{k-1,N} = X_{k0}$.

Differentiation matrix $D \in \mathbb{R}^{N \times (N+1)}$:
$\dot x(\tau_i) = \sum_{j=0}^{N} \dot\Phi_j(\tau_i)\, x(\tau_j) = \sum_{j=0}^{N} D_{ij}\, x(\tau_j)$, where $D_{ij} = \dot\Phi_j(\tau_i)$.
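The matrix $D$ can be assembled by differentiating the monomial interpolant instead of forming the $\dot\Phi_j$ directly; a small sketch, assuming NumPy, that also checks that $D$ differentiates $\mathcal{P}_N$ exactly:

```python
import numpy as np
from numpy.polynomial import legendre as leg
from numpy.polynomial import polynomial as poly

N = 6
# Radau points (tau_N = 1): roots of P_N - P_{N-1}; prepend tau_0 = -1.
c = np.zeros(N + 1); c[N], c[N - 1] = 1.0, -1.0
t = np.concatenate(([-1.0], np.sort(leg.legroots(c))))

M = N + 1                                     # number of interpolation nodes
V = np.vander(t, M, increasing=True)          # V[i, k] = t_i^k
# Derivative of t^k is k t^{k-1}; zero column for the constant term.
Vd = np.hstack([np.zeros((M, 1)),
                np.vander(t, M - 1, increasing=True) * np.arange(1, M)])
D = (Vd @ np.linalg.inv(V))[1:, :]            # N x (N+1): rows = tau_1..tau_N

# For any x in P_N, (D x)_i equals xdot(tau_i) exactly.
cx = np.random.default_rng(1).standard_normal(M)      # polynomial of degree N
xvals = poly.polyval(t, cx)
xdot = poly.polyval(t[1:], poly.polyder(cx))
assert np.allclose(D @ xvals, xdot)
```

Since constants have zero derivative, the rows of $D$ sum to zero, a fact used later in the proof of the Lemma.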
8 Discrete Control Problem

Hence, in terms of the state and control values $X_{kj}$ and $U_{ki}$ at the collocation points on each of the $K$ intervals, the discrete control problem can be formulated as

minimize $C(X_{KN})$
subject to $\sum_{j=0}^{N} D_{ij}X_{kj} = h f(X_{ki}, U_{ki})$, $U_{ki} \in \mathcal{U}$, $1 \le i \le N$, $X_{k0} = X_{k-1,N}$, $1 \le k \le K$,

where $X_{0N} = x_0$ is the starting condition.
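Dropping the optimization and fixing the control, the collocation constraints become a square system in the unknown state values. A sketch, assuming NumPy, for the scalar linear test case $f(x) = x$ on $[0,1]$ (one mesh interval, so $h = 1/2$), where the system is linear and $X_N$ should reproduce $x(1) = e$:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Collocate xdot(t) = x(t) on [0,1] with x(0) = 1, exact answer x(1) = e.
N = 8
c = np.zeros(N + 1); c[N], c[N - 1] = 1.0, -1.0
tau = np.sort(leg.legroots(c))                  # Radau points, tau_N = 1
t = np.concatenate(([-1.0], tau))

M = N + 1
V = np.vander(t, M, increasing=True)
Vd = np.hstack([np.zeros((M, 1)),
                np.vander(t, M - 1, increasing=True) * np.arange(1, M)])
D = (Vd @ np.linalg.inv(V))[1:, :]              # N x (N+1) differentiation matrix

h, x0 = 0.5, 1.0                                # half-width of [0,1], x(0)
# Collocation equations for f(x) = x:
#   D[:,0]*x0 + D[:,1:] X = h X   =>   (D[:,1:] - h I) X = -D[:,0] x0
X = np.linalg.solve(D[:, 1:] - h * np.eye(N), -D[:, 0] * x0)

err = abs(X[-1] - np.e)                         # X_N approximates x(1) = e
assert err < 1e-8                               # spectral accuracy at N = 8
```

For a nonlinear $f$ the same equations would be handed to a nonlinear solver (or an NLP solver, once the objective and control constraints are included).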
9 Lagrangian and KKT Conditions

$L(\lambda, X, U) = C(X_{KN}) + \sum_{k=1}^{K}\sum_{i=1}^{N} \Big\langle \lambda_{ki},\; h f(X_{ki},U_{ki}) - \sum_{j=0}^{N} D_{ij}X_{kj} \Big\rangle + \sum_{k=1}^{K} \big\langle \lambda_{k0},\; X_{k-1,N} - X_{k0} \big\rangle.$

Differentiating with respect to each of the variables, with $H(x,u,\lambda) = \langle \lambda, f(x,u) \rangle$:

$X_{k0}$: $\sum_{i=1}^{N} D_{i0}\lambda_{ki} = -\lambda_{k0}$,
$X_{kj}$: $\sum_{i=1}^{N} D_{ij}\lambda_{ki} = h\nabla_x H(X_{kj}, U_{kj}, \lambda_{kj})$, $1 \le j < N$,
$X_{kN}$: $\sum_{i=1}^{N} D_{iN}\lambda_{ki} = h\nabla_x H(X_{kN}, U_{kN}, \lambda_{kN}) + \lambda_{k+1,0}$, where $\lambda_{K+1,0} := \nabla C(X_{KN})$,
$U_{ki}$: $-\nabla_u H(X_{ki}, U_{ki}, \lambda_{ki}) \in N_{\mathcal{U}}(U_{ki})$.
10 First-order Optimality in Polynomial Setting

Lemma. The multipliers $\lambda_{ki}$, $1 \le i \le N$, $1 \le k \le K$, satisfy the KKT conditions if and only if the polynomial $\lambda^k \in \mathcal{P}^n_{N-1}$ given by $\lambda^k(\tau_i) = \lambda_{ki}/\omega_i$, $1 \le i \le N$, satisfies the following conditions, and $\lambda_{k0} = \lambda^k(-1)$:

$\dot\lambda^k(\tau_i) = -h\nabla_x H(x^k(\tau_i), u_{ki}, \lambda^k(\tau_i))$, $1 \le i < N$,
$\dot\lambda^k(1) = -h\nabla_x H(x^k(1), u_{kN}, \lambda^k(1)) + \big(\lambda^k(1) - \lambda^{k+1}(-1)\big)/\omega_N$,
where $\lambda^{K+1}(-1) := \nabla C(x^K(1))$, and
$-\nabla_u H(x^k(\tau_i), u_{ki}, \lambda^k(\tau_i)) \in N_{\mathcal{U}}(u_{ki})$, $1 \le i \le N$.

NOTE: The discrete $\lambda$ is typically discontinuous at the mesh points.
11 $D^\dagger$ for Radau

The differentiation matrix $D^\dagger$ for the space of polynomials of degree $N-1$, evaluated at $\tau_i$, $1 \le i \le N$, is given by
$D^\dagger_{NN} = -D_{NN} + \dfrac{1}{\omega_N}$ and $D^\dagger_{ij} = -\dfrac{\omega_j}{\omega_i} D_{ji}$ otherwise.
In other words, if $p$ is a polynomial of degree at most $N-1$ and $\mathbf{p} \in \mathbb{R}^N$ is the vector with $i$-th component $p_i = p(\tau_i)$, then $(D^\dagger \mathbf{p})_i = \dot p(\tau_i)$, $1 \le i \le N$.
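The formula can be checked numerically: build $D$ and the weights, form $D^\dagger = W^{-1}e_N e_N^{\mathsf T} - W^{-1} D_{1:N}^{\mathsf T} W$, and verify that it differentiates a random polynomial of degree $N-1$ exactly at the collocation points. A sketch assuming NumPy:

```python
import numpy as np
from numpy.polynomial import legendre as leg
from numpy.polynomial import polynomial as poly

N = 6
c = np.zeros(N + 1); c[N], c[N - 1] = 1.0, -1.0
tau = np.sort(leg.legroots(c))                 # Radau points, tau_N = 1
t = np.concatenate(([-1.0], tau))

M = N + 1
V = np.vander(t, M, increasing=True)
Vd = np.hstack([np.zeros((M, 1)),
                np.vander(t, M - 1, increasing=True) * np.arange(1, M)])
D = (Vd @ np.linalg.inv(V))[1:, :]             # N x (N+1)
D1N = D[:, 1:]                                 # drop the column for tau_0

# Quadrature weights from the moment equations (exactness on P_{N-1}).
k = np.arange(N)
Vw = np.vander(tau, N, increasing=True).T
w = np.linalg.solve(Vw, (1.0 - (-1.0) ** (k + 1)) / (k + 1))

# D_dagger = W^{-1} e_N e_N^T - W^{-1} D_{1:N}^T W
W = np.diag(w)
Ddag = -np.linalg.inv(W) @ D1N.T @ W
Ddag[-1, -1] += 1.0 / w[-1]

# D_dagger differentiates P_{N-1} exactly at the collocation points.
cq = np.random.default_rng(2).standard_normal(N)   # q of degree N-1
assert np.allclose(Ddag @ poly.polyval(tau, cq),
                   poly.polyval(tau, poly.polyder(cq)))
```
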
12 Proof

If $p \in \mathcal{P}_N$ and $q \in \mathcal{P}_{N-1}$ with $p(-1) = 0$, then integration by parts gives
$\int_{-1}^{1} \dot p(\tau) q(\tau)\,d\tau = p(1)q(1) - \int_{-1}^{1} p(\tau)\dot q(\tau)\,d\tau.$
Since $\dot p q$ and $p \dot q$ are polynomials of degree at most $2N-2$, Radau quadrature is exact and we have
$\sum_{j=1}^{N} \omega_j \dot p_j q_j = p_N q_N - \sum_{j=1}^{N} \omega_j p_j \dot q_j,$
where $p_j = p(\tau_j)$ and $\dot p_j = \dot p(\tau_j)$. This yields $(W\dot{\mathbf p})^{\mathsf T}\mathbf q = p_N q_N - (W\mathbf p)^{\mathsf T}\dot{\mathbf q}$. Substituting $\dot{\mathbf p} = D_{1:N}\mathbf p$ and $\dot{\mathbf q} = D^\dagger \mathbf q$ gives
$\mathbf p^{\mathsf T} D_{1:N}^{\mathsf T} W \mathbf q = p_N q_N - \mathbf p^{\mathsf T} W D^\dagger \mathbf q.$
13 Proof continued...

Rearrange into the following form:
$\mathbf p^{\mathsf T}\big(D_{1:N}^{\mathsf T} W + W D^\dagger - e_N e_N^{\mathsf T}\big)\mathbf q = 0,$
where $e_N$ is the last column of $I$. Since this identity must be satisfied for all choices of $\mathbf p$ and $\mathbf q$, we deduce that
$D^\dagger = W^{-1} e_N e_N^{\mathsf T} - W^{-1} D_{1:N}^{\mathsf T} W.$
14 Proof of the Lemma

Define $\Lambda_{ki} = \lambda_{ki}/\omega_i$ for $1 \le i \le N$, $\Lambda_{k0} = \lambda_{k0}$, and $\tilde D_{ij} = -\omega_j D_{ji}/\omega_i$. These substitutions in the KKT conditions yield

$\sum_{j=1}^{N} \tilde D_{ij}\Lambda_{kj} = -h\nabla_x H(X_{ki}, U_{ki}, \Lambda_{ki})$, $1 \le i < N$,
$\sum_{j=1}^{N} \tilde D_{Nj}\Lambda_{kj} = -\big[h\nabla_x H(X_{kN}, U_{kN}, \Lambda_{kN}) + \Lambda_{k+1,0}/\omega_N\big]$,  (1)
$-\nabla_u H(X_{ki}, U_{ki}, \Lambda_{ki}) \in N_{\mathcal{U}}(U_{ki})$, $1 \le i \le N$.
15 Proof continued

Since the polynomial that is identically equal to 1 has derivative 0, we have $D\mathbf{1} = 0$, which implies that $D_0 = -\sum_{j=1}^{N} D_j$, where $D_j$ is the $j$-th column of $D$. Hence,

$\Lambda_{k0} = \lambda_{k0} = -\sum_{i=1}^{N} D_{i0}\lambda_{ki} = \sum_{i=1}^{N}\sum_{j=1}^{N} D_{ij}\lambda_{ki} = \Lambda_{k+1,0} + h\sum_{i=1}^{N} \omega_i \nabla_x H(X_{ki}, U_{ki}, \Lambda_{ki}),$  (★)

where the discrete costate equations are used in the last step (note that $\nabla_x H(X_{ki}, U_{ki}, \lambda_{ki}) = \omega_i \nabla_x H(X_{ki}, U_{ki}, \Lambda_{ki})$, since $H$ is linear in $\lambda$).
16 Let $\lambda^k \in \mathcal{P}^n_{N-1}$ satisfy $\lambda^k(\tau_i) = \Lambda_{ki}$, $1 \le i \le N$. Since $\tilde D$ agrees with the differentiation matrix $D^\dagger$ except that $\tilde D_{NN} = D^\dagger_{NN} - 1/\omega_N$,

$\sum_{j=1}^{N} \tilde D_{ij}\Lambda_{kj} = \dot\lambda^k(\tau_i)$, $1 \le i < N$, and $\sum_{j=1}^{N} \tilde D_{Nj}\Lambda_{kj} = \dot\lambda^k(1) - \lambda^k(1)/\omega_N.$  (2)

This substitution in (★) yields
$\Lambda_{k0} = \lambda^k(1) - \sum_{i=1}^{N} \omega_i\, \dot\lambda^k(\tau_i).$
Since $\dot\lambda^k \in \mathcal{P}^n_{N-2}$ and $N$-point Radau quadrature is exact for these polynomials, we have
$\sum_{i=1}^{N} \omega_i\, \dot\lambda^k(\tau_i) = \int_{-1}^{1} \dot\lambda^k(\tau)\,d\tau = \lambda^k(1) - \lambda^k(-1).$
17 Continued...

Combine the last two equations to obtain
$\Lambda_{k0} = \lambda^k(-1).$  (3)
Then (1), (2), and (3) yield the formula for $\dot\lambda^k(1)$ in the Lemma.
18 Fundamental Differences: Gauss/hp Radau

For Gauss, the entries of the matrices $D$ and $D^\dagger$ satisfy
$D^\dagger_{ij} = -D_{N+1-i,\,N+1-j}$, $1 \le i \le N$, $1 \le j \le N$.
In other words, $D^\dagger = -J D_{1:N} J$, where $J$ is the exchange matrix with ones on its counterdiagonal and zeros elsewhere. Hence $\|D_{1:N}^{-1}\|_\infty = \|(D^\dagger)^{-1}\|_\infty$.

20,000 Yen Prize Problem: Show that $\|(D^\dagger)^{-1}\|_\infty \le 2$ and that the rows of $[W^{1/2}D^\dagger]^{-1}$ have Euclidean length bounded by $\sqrt{2}$.

In the hp-scheme, one only needs $K$ large enough, or equivalently $h$ small enough, that $2hd_1 < 1$ and $2hd_2 < 1$, where
$d_1 = \sup_{t \in \Omega_0} \|\nabla_x f(x^*(t), u^*(t))\|_\infty$ and $d_2 = \sup_{t \in \Omega_0} \|\nabla_x f(x^*(t), u^*(t))^{\mathsf T}\|_\infty.$
19 Discrete and continuous vectors

$X^*_{kj} = x^*_k(\tau_j)$, $0 \le j \le N$, $1 \le k \le K$,
$U^*_{kj} = u^*_k(\tau_j)$, $1 \le j \le N$, $1 \le k \le K$,
$\Lambda^*_{kj} = \lambda^*_k(\tau_j)$, $0 \le j \le N$, $1 \le k \le K$,
$X^N_{kj} = x^N_k(\tau_j)$, $0 \le j \le N$, $1 \le k \le K$,
$U^N_{kj} = u^N_{kj}$, $1 \le j \le N$, $1 \le k \le K$,
$\Lambda^N_{kj} = \lambda^N_k(\tau_j)$, $0 \le j \le N$, $1 \le k \le K$.

NOTE: $\lambda^*_k(1) = \lambda^*_{k+1}(-1)$, but $\lambda^N_k(1) \ne \lambda^N_{k+1}(-1)$.
20 Main Theorem

Theorem. Suppose $(x^*, u^*)$ is a local minimizer for the continuous problem with $(x^*, \lambda^*) \in PH^\eta(\Omega_0; \mathbb{R}^n)$ for some $\eta \ge 2$. If some smoothness and local convexity conditions hold, then for $N$ sufficiently large or for $h$ sufficiently small with $N \ge 2$, the discrete problem has a local minimizer $x^N_k \in \mathcal{P}^n_N$ and $u^N_k \in \mathbb{R}^{mN}$, and an associated multiplier $\lambda^N_k \in \mathcal{P}^n_{N-1}$, $1 \le k \le K$; moreover, there exists a constant $c$, independent of $h$, $N$, and $\eta$, such that
$\max\big\{\|X^N - X^*\|_\infty,\; \|U^N - U^*\|_\infty,\; \|\Lambda^N - \Lambda^*\|_\infty\big\} \le c\left(\frac{h^{p-1}}{N^{p-1}}\,|x^*|_{PH^p(\Omega_0)} + \frac{h^{q-1}}{N^{q-1.5}}\,|\lambda^*|_{PH^q(\Omega_0)}\right),$
where $p = \min(\eta, N+1)$ and $q = \min(\eta, N)$.
21 Example

minimize $\displaystyle\int_0^1 \big[x^2(t) + u^2(t)\big]\,dt$
subject to $\dot x(t) = u(t)$, $u(t) \le 1$, $x(t) \le \dfrac{2\sqrt{e}}{1-e}$, $t \in [0,1]$, $x(0) = \dfrac{5e+3}{4(1-e)}$.

Exact solution:
$0 \le t \le \tfrac14$: $x^*(t) = t + \dfrac{5e+3}{4(1-e)}$, $u^*(t) = 1$,
$\tfrac14 \le t \le \tfrac34$: $x^*(t) = \dfrac{e^{t-1/4}}{1-e}\big(1 + e^{3/2-2t}\big)$, $u^*(t) = \dfrac{e^{t-1/4}}{1-e}\big(1 - e^{3/2-2t}\big)$,
$\tfrac34 \le t \le 1$: $x^*(t) = \dfrac{2\sqrt{e}}{1-e}$, $u^*(t) = 0$.
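The pieces of the exact solution can be sanity-checked numerically: the boundary condition holds, the state and control match at the junctions $t = 1/4$ and $t = 3/4$, and $\dot x^* = u^*$ in the interior. A sketch assuming NumPy; the piecewise formulas are restated in the code:

```python
import numpy as np

e = np.e
x0 = (5 * e + 3) / (4 * (1 - e))            # initial state

def xs(t):
    """Piecewise exact state: linear, then A e^t + B e^{-t}, then constant."""
    t = np.asarray(t, dtype=float)
    mid = np.exp(t - 0.25) * (1 + np.exp(1.5 - 2 * t)) / (1 - e)
    return np.where(t <= 0.25, t + x0,
           np.where(t <= 0.75, mid, 2 * np.sqrt(e) / (1 - e)))

def us(t):
    """Piecewise exact control: u = 1, then xdot of the middle arc, then 0."""
    t = np.asarray(t, dtype=float)
    mid = np.exp(t - 0.25) * (1 - np.exp(1.5 - 2 * t)) / (1 - e)
    return np.where(t <= 0.25, 1.0, np.where(t <= 0.75, mid, 0.0))

# Boundary condition and continuity of x and u at both junctions.
assert np.isclose(xs(0.0), x0)
assert np.isclose(xs(0.25 - 1e-12), xs(0.25 + 1e-12))
assert np.isclose(xs(0.75 - 1e-12), xs(0.75 + 1e-12))
assert np.isclose(us(0.25 + 1e-12), 1.0) and np.isclose(us(0.75 + 1e-9), 0.0)

# The dynamics xdot = u on the middle arc, via central differences.
t = np.linspace(0.3, 0.7, 5)
fd = (xs(t + 1e-6) - xs(t - 1e-6)) / 2e-6
assert np.allclose(fd, us(t), atol=1e-6)
```
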
22 Error in Solution

[Figure: $\log_{10}$ of the error in the state and control versus $N$.]
CONVERGENCE RATE FOR A RADAU HP COLLOCATION METHOD APPLIED TO CONSTRAINED OPTIMAL CONTROL
CONVERGENCE RATE FOR A RADAU HP COLLOCATION METHOD APPLIED TO CONSTRAINED OPTIMAL CONTROL WILLIAM W. HAGER, HONGYAN HOU, SUBHASHREE MOHAPATRA, AND ANIL V. RAO Abstract. For control problems with control
More informationCONVERGENCE RATE FOR A RADAU COLLOCATION METHOD APPLIED TO UNCONSTRAINED OPTIMAL CONTROL
CONVERGENCE RATE FOR A RADAU COLLOCATION METHOD APPLIED TO UNCONSTRAINED OPTIMAL CONTROL WILLIAM W. HAGER, HONGYAN HOU, AND ANIL V. RAO Abstract. A local convergence rate is established for an orthogonal
More informationc 2018 Society for Industrial and Applied Mathematics
SIAM J. CONTROL OPTIM. Vol. 56, No. 2, pp. 1386 1411 c 2018 Society for Industrial and Applied Mathematics CONVERGENCE RATE FOR A GAUSS COLLOCATION METHOD APPLIED TO CONSTRAINED OPTIMAL CONTROL WILLIAM
More informationGeoffrey T. Huntington Blue Origin, LLC. Kent, WA William W. Hager Department of Mathematics University of Florida Gainesville, FL 32611
Direct Trajectory Optimization and Costate Estimation of Finite-Horizon and Infinite-Horizon Optimal Control Problems Using a Radau Pseudospectral Method Divya Garg Michael A. Patterson Camila Francolin
More informationConvergence of a Gauss Pseudospectral Method for Optimal Control
Convergence of a Gauss Pseudospectral Method for Optimal Control Hongyan Hou William W. Hager Anil V. Rao A convergence theory is presented for approximations of continuous-time optimal control problems
More informationDivya Garg Michael A. Patterson Camila Francolin Christopher L. Darby Geoffrey T. Huntington William W. Hager Anil V. Rao
Comput Optim Appl (2011) 49: 335 358 DOI 10.1007/s10589-009-9291-0 Direct trajectory optimization and costate estimation of finite-horizon and infinite-horizon optimal control problems using a Radau pseudospectral
More informationConvergence Rate for a Gauss Collocation Method Applied to Unconstrained Optimal Control
J Optim Theory Appl (2016) 169:801 824 DOI 10.1007/s10957-016-0929-7 Convergence Rate for a Gauss Collocation Method Applied to Unconstrained Optimal Control William W. Hager 1 Hongyan Hou 2 Anil V. Rao
More informationAutomatica. A unified framework for the numerical solution of optimal control problems using pseudospectral methods
Automatica 46 (2010) 1843 1851 Contents lists available at ScienceDirect Automatica journal homepage: www.elsevier.com/locate/automatica Brief paper A unified framework for the numerical solution of optimal
More informationAN OVERVIEW OF THREE PSEUDOSPECTRAL METHODS FOR THE NUMERICAL SOLUTION OF OPTIMAL CONTROL PROBLEMS
(Preprint) AAS 9-332 AN OVERVIEW OF THREE PSEUDOSPECTRAL METHODS FOR THE NUMERICAL SOLUTION OF OPTIMAL CONTROL PROBLEMS Divya Garg, Michael A. Patterson, William W. Hager, and Anil V. Rao University of
More informationA Gauss Lobatto quadrature method for solving optimal control problems
ANZIAM J. 47 (EMAC2005) pp.c101 C115, 2006 C101 A Gauss Lobatto quadrature method for solving optimal control problems P. Williams (Received 29 August 2005; revised 13 July 2006) Abstract This paper proposes
More informationRiemann Integrable Functions
Riemann Integrable Functions MATH 464/506, Real Analysis J. Robert Buchanan Department of Mathematics Summer 2007 Cauchy Criterion Theorem (Cauchy Criterion) A function f : [a, b] R belongs to R[a, b]
More informationAutomatica. Pseudospectral methods for solving infinite-horizon optimal control problems
Automatica 47 (2011) 829 837 Contents lists available at ScienceDirect Automatica journal homepage: www.elsevier.com/locate/automatica Brief paper Pseudospectral methods for solving infinite-horizon optimal
More informationCHAPTER 10 Shape Preserving Properties of B-splines
CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially
More informationGauss Pseudospectral Method for Solving Infinite-Horizon Optimal Control Problems
AIAA Guidance, Navigation, and Control Conference 2-5 August 21, Toronto, Ontario Canada AIAA 21-789 Gauss Pseudospectral Method for Solving Infinite-Horizon Optimal Control Problems Divya Garg William
More informationLegendre Pseudospectral Approximations of Optimal Control Problems
Legendre Pseudospectral Approximations of Optimal Control Problems I. Michael Ross 1 and Fariba Fahroo 1 Department of Aeronautics and Astronautics, Code AA/Ro, Naval Postgraduate School, Monterey, CA
More informationGALERKIN TIME STEPPING METHODS FOR NONLINEAR PARABOLIC EQUATIONS
GALERKIN TIME STEPPING METHODS FOR NONLINEAR PARABOLIC EQUATIONS GEORGIOS AKRIVIS AND CHARALAMBOS MAKRIDAKIS Abstract. We consider discontinuous as well as continuous Galerkin methods for the time discretization
More informationClassical Mechanics in Hamiltonian Form
Classical Mechanics in Hamiltonian Form We consider a point particle of mass m, position x moving in a potential V (x). It moves according to Newton s law, mẍ + V (x) = 0 (1) This is the usual and simplest
More informationCHAPTER 3 Further properties of splines and B-splines
CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions
More informationJordan normal form notes (version date: 11/21/07)
Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let
More informationScientific Computing WS 2017/2018. Lecture 18. Jürgen Fuhrmann Lecture 18 Slide 1
Scientific Computing WS 2017/2018 Lecture 18 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 18 Slide 1 Lecture 18 Slide 2 Weak formulation of homogeneous Dirichlet problem Search u H0 1 (Ω) (here,
More informationarxiv: v2 [math.na] 15 Sep 2015
LEBESGUE COSTATS ARISIG I A CLASS OF COLLOCATIO METHODS WILLIAM W. HAGER, HOGYA HOU, AD AIL V. RAO arxiv:1507.08316v [math.a] 15 Sep 015 Abstract. Estimates are obtained for the Lebesgue constants associated
More informationDirect Optimal Control and Costate Estimation Using Least Square Method
21 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 3-July 2, 21 WeB22.1 Direct Optimal Control and Costate Estimation Using Least Square Method Baljeet Singh and Raktim Bhattacharya
More informationA note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations
A note on accurate and efficient higher order Galerkin time stepping schemes for the nonstationary Stokes equations S. Hussain, F. Schieweck, S. Turek Abstract In this note, we extend our recent work for
More informationOn the positivity of linear weights in WENO approximations. Abstract
On the positivity of linear weights in WENO approximations Yuanyuan Liu, Chi-Wang Shu and Mengping Zhang 3 Abstract High order accurate weighted essentially non-oscillatory (WENO) schemes have been used
More informationCOURSE Iterative methods for solving linear systems
COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More informationAn hp-adaptive pseudospectral method for solving optimal control problems
OPTIMAL CONTROL APPLICATIONS AND METHODS Optim. Control Appl. Meth. 011; 3:476 50 Published online 6 August 010 in Wiley Online Library (wileyonlinelibrary.com)..957 An hp-adaptive pseudospectral method
More informationSolving Integral Equations by Petrov-Galerkin Method and Using Hermite-type 3 1 Elements
5 1 July 24, Antalya, Turkey Dynamical Systems and Applications, Proceedings, pp. 436 442 Solving Integral Equations by Petrov-Galerkin Method and Using Hermite-type 3 1 Elements M. Karami Department of
More informationSymmetric functions and the Vandermonde matrix
J ournal of Computational and Applied Mathematics 172 (2004) 49-64 Symmetric functions and the Vandermonde matrix Halil Oruç, Hakan K. Akmaz Department of Mathematics, Dokuz Eylül University Fen Edebiyat
More informationMatrices: 2.1 Operations with Matrices
Goals In this chapter and section we study matrix operations: Define matrix addition Define multiplication of matrix by a scalar, to be called scalar multiplication. Define multiplication of two matrices,
More informationAnalysis Qualifying Exam
Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,
More informationLinear Algebra: Lecture notes from Kolman and Hill 9th edition.
Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices
More informationNonlinear Systems and Control Lecture # 19 Perturbed Systems & Input-to-State Stability
p. 1/1 Nonlinear Systems and Control Lecture # 19 Perturbed Systems & Input-to-State Stability p. 2/1 Perturbed Systems: Nonvanishing Perturbation Nominal System: Perturbed System: ẋ = f(x), f(0) = 0 ẋ
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationNodal Discontinuous Galerkin Methods:
Jan S Hesthaven and Tim Warburton Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications LIST OF CORRECTIONS AND CLARIFICATIONS August 12, 2009 Springer List of corrections and clarifications
More informationOptimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30
Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained
More informationDirect Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le
Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization
More informationCHAPTER 10: Numerical Methods for DAEs
CHAPTER 10: Numerical Methods for DAEs Numerical approaches for the solution of DAEs divide roughly into two classes: 1. direct discretization 2. reformulation (index reduction) plus discretization Direct
More informationLinear Algebra I Lecture 8
Linear Algebra I Lecture 8 Xi Chen 1 1 University of Alberta January 25, 2019 Outline 1 2 Gauss-Jordan Elimination Given a system of linear equations f 1 (x 1, x 2,..., x n ) = 0 f 2 (x 1, x 2,..., x n
More informationAN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION
Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationCS 4424 Matrix multiplication
CS 4424 Matrix multiplication 1 Reminder: matrix multiplication Matrix-matrix product. Starting from a 1,1 a 1,n A =.. and B = a n,1 a n,n b 1,1 b 1,n.., b n,1 b n,n we get AB by multiplying A by all columns
More information0.1 Tangent Spaces and Lagrange Multipliers
01 TANGENT SPACES AND LAGRANGE MULTIPLIERS 1 01 Tangent Spaces and Lagrange Multipliers If a differentiable function G = (G 1,, G k ) : E n+k E k then the surface S defined by S = { x G( x) = v} is called
More informationNotes on Mathematics
Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................
More informationBasics of Calculus and Algebra
Monika Department of Economics ISCTE-IUL September 2012 Basics of linear algebra Real valued Functions Differential Calculus Integral Calculus Optimization Introduction I A matrix is a rectangular array
More informationCollocation and iterated collocation methods for a class of weakly singular Volterra integral equations
Journal of Computational and Applied Mathematics 229 (29) 363 372 Contents lists available at ScienceDirect Journal of Computational and Applied Mathematics journal homepage: www.elsevier.com/locate/cam
More informationFormula for the inverse matrix. Cramer s rule. Review: 3 3 determinants can be computed expanding by any row or column
Math 20F Linear Algebra Lecture 18 1 Determinants, n n Review: The 3 3 case Slide 1 Determinants n n (Expansions by rows and columns Relation with Gauss elimination matrices: Properties) Formula for the
More informationYURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL
Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions
More information256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.
56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =
More informationSection 9.2: Matrices.. a m1 a m2 a mn
Section 9.2: Matrices Definition: A matrix is a rectangular array of numbers: a 11 a 12 a 1n a 21 a 22 a 2n A =...... a m1 a m2 a mn In general, a ij denotes the (i, j) entry of A. That is, the entry in
More informationi=1 β i,i.e. = β 1 x β x β 1 1 xβ d
66 2. Every family of seminorms on a vector space containing a norm induces ahausdorff locally convex topology. 3. Given an open subset Ω of R d with the euclidean topology, the space C(Ω) of real valued
More informationRandomized Coordinate Descent Methods on Optimization Problems with Linearly Coupled Constraints
Randomized Coordinate Descent Methods on Optimization Problems with Linearly Coupled Constraints By I. Necoara, Y. Nesterov, and F. Glineur Lijun Xu Optimization Group Meeting November 27, 2012 Outline
More informationChap. 3. Controlled Systems, Controllability
Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :
More informationPart IB Optimisation
Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationFinite difference method for elliptic problems: I
Finite difference method for elliptic problems: I Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen
More informationSPECTRAL METHODS ON ARBITRARY GRIDS. Mark H. Carpenter. Research Scientist. Aerodynamic and Acoustic Methods Branch. NASA Langley Research Center
SPECTRAL METHODS ON ARBITRARY GRIDS Mark H. Carpenter Research Scientist Aerodynamic and Acoustic Methods Branch NASA Langley Research Center Hampton, VA 368- David Gottlieb Division of Applied Mathematics
More informationStochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)
Stochastic Process (NPC) Monday, 22nd of January 208 (2h30) Vocabulary (english/français) : distribution distribution, loi ; positive strictement positif ; 0,) 0,. We write N Z,+ and N N {0}. We use the
More informationNumerische Mathematik
Numer. Math. (28) 9:43 65 DOI.7/s2-7-25-7 Numerische Mathematik The superconvergence of Newton Cotes rules for the Hadamard finite-part integral on an interval Jiming Wu Weiwei Sun Received: 2 May 27 /
More informationNonlinear Control. Nonlinear Control Lecture # 6 Passivity and Input-Output Stability
Nonlinear Control Lecture # 6 Passivity and Input-Output Stability Passivity: Memoryless Functions y y y u u u (a) (b) (c) Passive Passive Not passive y = h(t,u), h [0, ] Vector case: y = h(t,u), h T =
More informationLecture 9 Approximations of Laplace s Equation, Finite Element Method. Mathématiques appliquées (MATH0504-1) B. Dewals, C.
Lecture 9 Approximations of Laplace s Equation, Finite Element Method Mathématiques appliquées (MATH54-1) B. Dewals, C. Geuzaine V1.2 23/11/218 1 Learning objectives of this lecture Apply the finite difference
More informationThe Adjoint of a Linear Operator
The Adjoint of a Linear Operator Michael Freeze MAT 531: Linear Algebra UNC Wilmington Spring 2015 1 / 18 Adjoint Operator Let L : V V be a linear operator on an inner product space V. Definition The adjoint
More informationCourse Summary Math 211
Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.
More informationOptimization using Calculus. Optimization of Functions of Multiple Variables subject to Equality Constraints
Optimization using Calculus Optimization of Functions of Multiple Variables subject to Equality Constraints 1 Objectives Optimization of functions of multiple variables subjected to equality constraints
More information8. Constrained Optimization
8. Constrained Optimization Daisuke Oyama Mathematics II May 11, 2018 Unconstrained Maximization Problem Let X R N be a nonempty set. Definition 8.1 For a function f : X R, x X is a (strict) local maximizer
More informationJuan Vicente Gutiérrez Santacreu Rafael Rodríguez Galván. Departamento de Matemática Aplicada I Universidad de Sevilla
Doc-Course: Partial Differential Equations: Analysis, Numerics and Control Research Unit 3: Numerical Methods for PDEs Part I: Finite Element Method: Elliptic and Parabolic Equations Juan Vicente Gutiérrez
More informationQuarter-Sweep Gauss-Seidel Method for Solving First Order Linear Fredholm Integro-differential Equations
MATEMATIKA, 2011, Volume 27, Number 2, 199 208 c Department of Mathematical Sciences, UTM Quarter-Sweep Gauss-Seidel Method for Solving First Order Linear Fredholm Integro-differential Equations 1 E. Aruchunan
More informationLecture 10 Polynomial interpolation
Lecture 10 Polynomial interpolation Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationCOURSE Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method
COURSE 7 3. Numerical integration of functions (continuation) 3.3. The Romberg s iterative generation method The presence of derivatives in the remainder difficulties in applicability to practical problems
More informationTonelli Full-Regularity in the Calculus of Variations and Optimal Control
Tonelli Full-Regularity in the Calculus of Variations and Optimal Control Delfim F. M. Torres delfim@mat.ua.pt Department of Mathematics University of Aveiro 3810 193 Aveiro, Portugal http://www.mat.ua.pt/delfim
More informationA brief introduction to finite element methods
CHAPTER A brief introduction to finite element methods 1. Two-point boundary value problem and the variational formulation 1.1. The model problem. Consider the two-point boundary value problem: Given a
More informationA Numerical Solution of the Lax s 7 th -order KdV Equation by Pseudospectral Method and Darvishi s Preconditioning
Int. J. Contemp. Math. Sciences, Vol. 2, 2007, no. 22, 1097-1106 A Numerical Solution of the Lax s 7 th -order KdV Equation by Pseudospectral Method and Darvishi s Preconditioning M. T. Darvishi a,, S.
More informationEE363 homework 2 solutions
EE363 Prof. S. Boyd EE363 homework 2 solutions. Derivative of matrix inverse. Suppose that X : R R n n, and that X(t is invertible. Show that ( d d dt X(t = X(t dt X(t X(t. Hint: differentiate X(tX(t =
More informationDeterminants of Partition Matrices
journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised
More informationLecture Note 3: Interpolation and Polynomial Approximation. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 3: Interpolation and Polynomial Approximation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 10, 2015 2 Contents 1.1 Introduction................................ 3 1.1.1
More informationPartititioned Methods for Multifield Problems
Partititioned Methods for Multifield Problems Joachim Rang, 22.7.215 22.7.215 Joachim Rang Partititioned Methods for Multifield Problems Seite 1 Non-matching matches usually the fluid and the structure
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationMañé s Conjecture from the control viewpoint