Calculating Forward Rate Curves Based on Smoothly Weighted Cubic Splines

February 8, 2017

Adnan Berberovic, Albin Göransson, Jesper Otterholm, Måns Skytt, Peter Szederjesi

Contents

1 Background
2 Operational analysis
  2.1 Goals
  2.2 Limitations
3 Mathematical model
  3.1 Analytical solution
    3.1.1 Identifying the basic variables
    3.1.2 Linearising g_e(f)
    3.1.3 Regularisation
4 Evaluation
  4.1 Procedure
    4.1.1 LSExp vs. spline approximation of LSExp
    4.1.2 Principal component analysis (PCA)
  4.2 Results and analysis
References

1 Background

This project focuses on calculating forward rate curves with an approximation of the method LSExp (Least Squares, Exponential Weighting), as described in [1]. By approximating LSExp with smoothly weighted cubic splines for the yield curve and a least-squares-fitted third-degree polynomial for the weight function, an analytical simplification of the regularisation is derived that makes it possible to avoid numerical approximations. The yield curve is fitted to quoted OIS instruments from the Swedish market. The general optimisation model in [1] is given by

$$
\begin{aligned}
\min_{f,\,z_e,\,z_b}\quad & h(f) + \tfrac{1}{2} z_e^T E_e z_e + \tfrac{1}{2} z_b^T E_b z_b \\
\text{s.t.}\quad & g_e(f) + F_e z_e = b_e \\
& b_l \le g_b(f) + F_b z_b \le b_u \\
& f \ge f_l \\
& f \in \mathcal{F},
\end{aligned}
\tag{1}
$$

where $h(f)$ is a regularisation of the forward rate curve that gives the yield curve a desired property. The two remaining terms in the objective function penalise pricing errors for exact prices, $z_e$, and for prices limited to a price gap, $z_b$, respectively. The functions $g_e(f)$ and $g_b(f)$ transform forward rates to market prices for instruments that require unique prices and for instruments whose prices lie within a price gap, respectively, and $b_e$, $b_l$ and $b_u$ are the observed market prices. $F_e$ and $F_b$ are diagonal matrices that decide which instruments are allowed to deviate from market prices, and $E_e$ and $E_b$ are diagonal matrices with penalties for such deviations. $f_l$ gives a lower bound for the yield curve, and the condition $f \in \mathcal{F}$ allows the model to limit the curve to a given class of functions. Blomvall and Ndengo suggest the regularisation

$$
h(f) = \frac{1}{2} \int_0^{T_n} w_1(t)\, f'(t)^2\, dt + \frac{1}{2} \int_0^{T_n} w_2(t)\, f''(t)^2\, dt
\tag{2}
$$

for the roughness in the forward rate curve [1]. The weight functions in the regularisation, $w_1(t)$ and $w_2(t)$, decide how much the derivatives contribute to the regularisation $h(f)$.
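Since LSExp sets $w_1(t) = 0$, only the curvature term of (2) remains. As an illustrative sanity check (not part of the report's method), the regularisation can be approximated numerically for any candidate curve; the function and variable names below are ours, not the report's:

```python
import numpy as np

def regularisation(f, w1, w2, T_n, m=20001):
    """Numerically approximate
       h(f) = 1/2 * int_0^Tn w1(t) f'(t)^2 dt + 1/2 * int_0^Tn w2(t) f''(t)^2 dt
    using finite differences and the trapezoidal rule."""
    t = np.linspace(0.0, T_n, m)
    dt = t[1] - t[0]
    d1 = np.gradient(f(t), dt)    # f'(t)
    d2 = np.gradient(d1, dt)      # f''(t)
    integrand = w1(t) * d1**2 + w2(t) * d2**2
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * 0.5 * dt)

zero = lambda t: np.zeros_like(t)   # LSExp sets w1(t) = 0
one = lambda t: np.ones_like(t)

# A linear forward curve has f'' = 0, so with w1 = 0 its roughness is zero.
h_linear = regularisation(lambda t: 0.02 + 0.001 * t, zero, one, T_n=10.0)

# For f(t) = t^2, f'' = 2 and h = 1/2 * int_0^10 4 dt = 20.
h_quad = regularisation(lambda t: t**2, zero, one, T_n=10.0)
```

Such a quadrature check is useful later for verifying the analytical quadratic forms derived in Section 3.1.3.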
In the method LSExp, $w_1(t)$ is set to 0 and $w_2(t)$ is set to

$$
w_2(t) =
\begin{cases}
\exp\!\left(\dfrac{t - t_h}{365}\,\ln\delta\right) & \text{if } t \le t_h, \\
1 & \text{if } t > t_h,
\end{cases}
\tag{3}
$$

where $\delta$ is the rate of information decay, suggested in [1] to be set to $\delta = 2$, and $t_h$ decides when to stop weighting the derivatives, suggested to be set to $t_h = 730$ days.

Let $T^k = \{T_0^k, T_1^k, \ldots, T_m^k\}$, with $0 = T_0^k \le T_1^k \le \ldots \le T_m^k$, be the set of maturities for all contracts that the optimisation takes into consideration. The relation between the spot rate for time point $t_i$, $r_{t_i}$, and the quoted fixed leg of an OIS with maturity $T_j^k$ is then given by

$$
F_j = \frac{1 - e^{-r(T_j^k)\, T_j^k}}{\sum_{i=1}^{N_j} t_{i,j}^f\, e^{-r(t_{i,j})\, t_{i,j}}}.
\tag{4}
$$

In (4), $t_{i,j}^f$ is the time since the last settlement, $t_{i,j}$ is the time until interest payment $i$ from the time of valuation, and $N_j$ is the number of payments, all for an OIS with maturity $T_j^k$. The discount factor is

calculated with the spot rate for the corresponding time horizon, $r_{t_i}$, which itself can be calculated from the continuous forward rate $f(t)$ through

$$
r_{t_i}(t_0) = \frac{1}{t_i - t_0} \int_{t_0}^{t_i} f(t)\, dt,
\tag{5}
$$

where $r_{t_i}(t_0)$ is the interest rate until time point $t_i$ observed at time point $t_0$.

2 Operational analysis

Under the assumption that the project should be performed with the given resources and within the intended time span, some limitations have to be made. Additionally, the goal of the project is discussed, as well as the pros and cons of the results.

2.1 Goals

The purpose of the project is to analyse whether or not a spline approximation of the method LSExp yields reasonable yield curves that can be used for pricing interest rate derivatives, for risk management of these, and eventually as an interpolation tool for OTC-traded OIS. Whether or not the results are reasonable is analysed in two parts. First, the principal components of the yield curves are analysed; second, the yield curves, as well as their principal components, are compared with the corresponding results from the LSExp method. This is done in order to see how well the spline approximation behaves qualitatively as well as how reasonable the result is.

2.2 Limitations

In the mathematical model of the optimisation problem in (1), the constraint that takes the price gap into consideration will be ignored, and the problem will then only be solved using the equality condition, where mid prices are used as exact prices. Additionally, the inequality condition with a lower limit will also be ignored, in order to allow for any negative forward rate. As a consequence, the penalty term for the price difference in price gaps is eliminated from the objective function. Furthermore, $g_e(f)$ is a non-linear function, which contributes to a computationally intensive and complex optimisation problem. For this reason $g_e(f)$ will be linearised in order to yield an analytical solution given an operating point.
The matrix $F_e$ will be required to be non-singular. The model is built under the assumption that all nodes (knot points as well as end points) for the splines of the forward rate structure, $f_i$, coincide with the maturities of the OIS contracts, $T^k$. Therefore, the set $T^s \subseteq T^k$ with $T_i^s \in T^s$ for $i = 0, \ldots, n$, where $0 = T_0^s < T_1^s < \ldots < T_n^s$, is defined; it denotes all nodes that build the splines, where the nodes in this case coincide with the maturities of the contracts. Furthermore, $w_2(t)$ in (3) is approximated with a third-degree polynomial for $t \le t_h$ if $t_h \le 730$; if $t_h > 730$ more polynomial pieces might be needed, but these cases are not researched. The time $t_h$, which denotes when the weighting of derivatives in the regularisation should cease, does not need to coincide with any maturity of the contracts. No comparison with other models for estimation of forward rate curves will be performed, with the exception of LSExp as mentioned initially.
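The polynomial approximation of $w_2$ described above can be sketched as a least-squares fit. The sketch below assumes the exponential form of (3) with the suggested values $\delta = 2$ and $t_h = 730$; the variable names are ours:

```python
import numpy as np

# Suggested parameter values from [1]: delta = 2, t_h = 730 days.
delta, t_h = 2.0, 730.0

def w2(t):
    # Exponential information-decay weight of eq. (3) for t <= t_h.
    return np.exp((t - t_h) / 365.0 * np.log(delta))

# Least-squares fit of a single cubic polynomial p_w(t) to w2 on [0, t_h].
t = np.linspace(0.0, t_h, 1001)
a_w, b_w, c_w, d_w = np.polyfit(t, w2(t), deg=3)   # highest degree first
p_w = np.polyval([a_w, b_w, c_w, d_w], t)
max_abs_err = np.max(np.abs(p_w - w2(t)))
```

For these parameter values a single cubic tracks the exponential closely over $[0, t_h]$, which is consistent with the limitation stated above that one polynomial piece suffices.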

3 Mathematical model

Under the limitations discussed in Section 2.2, (1) is simplified as

$$
\begin{aligned}
\min_{f,\,z_e}\quad & h(f) + \tfrac{1}{2} z_e^T E_e z_e \\
\text{s.t.}\quad & g_e(f) + F_e z_e = b_e \\
& f \in \mathcal{F},
\end{aligned}
\tag{6}
$$

where $b_e$ is set to mid prices, $F_e$ is the identity matrix, since pricing errors are allowed for all instruments, and $E_e = pI$, where the design parameter $p$ decides how much pricing errors are penalised. The forward rate structure will be approximated with splines,

$$
f_i(t) = a_i^f (t - T_i^s)^3 + b_i^f (t - T_i^s)^2 + c_i^f (t - T_i^s) + d_i^f, \quad i \in I,
\tag{7}
$$

on the intervals $[T_i^s, T_{i+1}^s]$, where $I$ is the index set of all pieces of the yield curve built by the splines. The elements of $I$ are ordered integers such that they correspond to the contracts used to decide the knot points for the splines, i.e. $I = \{0, 1, \ldots, n-1\}$ if the forward rate curve is built from $n + 1$ quoted instruments, which means that $\mathcal{F}$ will be the set of all smoothly weighted cubic splines. For cubic splines the following has to hold:

$$
\begin{aligned}
f_i(T_{i+1}^s) &= f_{i+1}(T_{i+1}^s) \\
f_i'(T_{i+1}^s) &= f_{i+1}'(T_{i+1}^s), \quad i \in I \setminus \{n-1\} \\
f_i''(T_{i+1}^s) &= f_{i+1}''(T_{i+1}^s)
\end{aligned}
\tag{8}
$$

in order to guarantee continuity and smoothness. The regularisation $h(f)$ in (2) is solved analytically. The weight function $w_2(t)$ in (3) is approximated with a cubic polynomial for $t \le t_h$ and is given by

$$
w_2(t) =
\begin{cases}
p_w(t) & \text{if } t \le t_h, \\
1 & \text{if } t > t_h,
\end{cases}
\tag{9}
$$

where $p_w(t) = a_w t^3 + b_w t^2 + c_w t + d_w$. For choices of $t_h$ and $\delta$ in line with [1], it is enough that $p_w$ is a single cubic polynomial, with respect to the limitations. For other choices of parameters it might be more suitable to use more polynomial pieces; tentatively, the same knot points could be used as for the forward rate curve $f$ in order to simplify the mathematics.

3.1 Analytical solution

The choice of splines for $f(t)$ causes the regularisation in (2) to become quadratic in the spline coefficients $a_0, \ldots, d_0, \ldots, a_n, \ldots, d_n$.
By linearising $g_e(f)$ and substituting for $z_e$ in the objective function, the second term becomes quadratic in the coefficients as well, which results in an optimisation problem with a quadratic objective function and the linear constraints (8). This problem is solved with respect to offsets in the spline coefficients, $\Delta a_0, \ldots, \Delta d_0, \ldots, \Delta a_n, \ldots, \Delta d_n$. The resulting optimum yields a new operating point for the linearisation, where the problem is solved again.
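The relinearise-and-resolve scheme described above can be illustrated on a small invented problem; $g$, its Jacobian, $H$ and $E$ below are toy stand-ins for the report's pricing functions and penalty matrices, not the actual model:

```python
import numpy as np

def solve_linearised(g, jac, H, E, b, f0, iters=20):
    """Sketch of the iteration in the text: linearise g at the operating
    point f, solve the resulting quadratic problem for the coefficient
    offset, and use the optimum as the next operating point."""
    f = f0.copy()
    for _ in range(iters):
        J = jac(f)                 # Jacobian of g at the operating point
        r = g(f) - b               # current pricing error
        # Quadratic model:
        #   min_x 1/2 (f + x)^T H (f + x) + 1/2 (r + J x)^T E (r + J x)
        lhs = H + J.T @ E @ J
        rhs = -(H @ f + J.T @ E @ r)
        f = f + np.linalg.solve(lhs, rhs)
    return f

# Toy "pricing" function g(f) = (f0^2 + f1, f1) with target prices b.
g = lambda f: np.array([f[0]**2 + f[1], f[1]])
jac = lambda f: np.array([[2.0 * f[0], 1.0], [0.0, 1.0]])
H = 1e-8 * np.eye(2)               # weak smoothness penalty
E = 1e6 * np.eye(2)                # heavy price-error penalty
f_star = solve_linearised(g, jac, H, E,
                          b=np.array([5.0, 1.0]),
                          f0=np.array([1.0, 0.0]))
```

With the price-error penalty dominating, the iteration behaves like Newton's method on $g(f) = b$ and converges to a point that reprices the "instruments" almost exactly.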

Let $\bar f = (a_0, \ldots, d_0, \ldots, a_n, \ldots, d_n)^T$ denote the vector of spline coefficients that build $f$. By setting $f = \bar f + \Delta f$, $g_e(f)$ is linearised according to

$$
g_e(f; T^s) \approx g_e(\bar f; T^s) + \nabla g_e(\bar f; T^s)^T \Delta f,
\tag{10}
$$

where $\nabla g_e(\bar f; T^s)^T$ is the Jacobian of $g_e$. Since the constraints in (8) constitute an underdetermined system of equations, the variables are split into basic and nonbasic variables. To simplify the notation, the permutation matrix $P$ is used, which allows the definitions

$$
\tilde f = P \bar f = \begin{bmatrix} \bar f_B \\ \bar f_N \end{bmatrix}
\tag{11}
$$

and

$$
x = P\, \Delta f = \begin{bmatrix} x_B \\ x_N \end{bmatrix},
\tag{12}
$$

where the indices $B$ and $N$ indicate basic and nonbasic variables, respectively. With the linearisation in (10), the first constraint in (6) can be rewritten as

$$
z_e = F_e^{-1}\!\left(b_e - g_e(\bar f; T^s) - \tilde\nabla g_e(\bar f; T^s)^T x\right) = -(Ax - a),
\tag{13}
$$

where $A = [A_B \;\; A_N] = F_e^{-1} \big[\tilde\nabla g_e(\bar f; T^s)^T_B \;\; \tilde\nabla g_e(\bar f; T^s)^T_N\big]$ with $\tilde\nabla g_e(\bar f; T^s) = P\, \nabla g_e(\bar f; T^s)$, and $a = F_e^{-1}\big(b_e - g_e(\bar f; T^s)\big)$. The constraints in (8) can trivially be expressed as

$$
B(\tilde f + x) = b
\tag{14}
$$

for some $B = [B_B \;\; B_N]$ and $b$, and the optimisation model in (6) is then rewritten as

$$
\begin{aligned}
\min_{x,\,z_e}\quad & \tfrac{1}{2} (\tilde f + x)^T H (\tilde f + x) + \tfrac{1}{2} z_e^T E_e z_e \\
\text{s.t.}\quad & z_e = -(Ax - a) \\
& B(\tilde f + x) = b,
\end{aligned}
\tag{15}
$$

where

$$
H = \begin{bmatrix} H_{BB} & H_{BN} \\ H_{NB} & H_{NN} \end{bmatrix}
\tag{16}
$$

defines the quadratic form of the regularisation. From (14), $x_B$ is expressed in terms of $x_N$ according to

$$
B_B(\bar f_B + x_B) + B_N(\bar f_N + x_N) = b
\;\;\Rightarrow\;\;
x_B = B_B^{-1}\big(b - B_N \bar f_N\big) - \bar f_B - B_B^{-1} B_N x_N.
\tag{17}
$$

Substituting this into (13) yields

$$
z_e = -(A_B x_B + A_N x_N - a)
= -\Big[ \big(A_N - A_B B_B^{-1} B_N\big) x_N + A_B \big(B_B^{-1}(b - B_N \bar f_N) - \bar f_B\big) - a \Big]
\tag{18}
$$

and elimination of $z_e$ in the objective function in (15) gives, for the penalty term,

$$
z_e^T E z_e = (Ax - a)^T E (Ax - a)
= x_N^T (A_N - A_B W)^T E (A_N - A_B W)\, x_N
+ 2 (A_B v - a)^T E (A_N - A_B W)\, x_N
+ (A_B v - a)^T E (A_B v - a),
\tag{19}
$$

where, for brevity, $W = B_B^{-1} B_N$ and $v = B_B^{-1}(b - B_N \bar f_N) - \bar f_B$, so that (17) reads $x_B = v - W x_N$. The regularisation is expanded according to

$$
(\tilde f + x)^T H (\tilde f + x) = x^T H x + 2 \tilde f^T H x + \tilde f^T H \tilde f,
\tag{20}
$$

where, using $H_{NB} = H_{BN}^T$,

$$
x^T H x = x_B^T H_{BB} x_B + 2 x_B^T H_{BN} x_N + x_N^T H_{NN} x_N
= x_N^T \big( W^T H_{BB} W - 2 W^T H_{BN} + H_{NN} \big) x_N
+ 2 v^T \big( H_{BN} - H_{BB} W \big) x_N
+ v^T H_{BB} v
\tag{21}
$$

and

$$
2 \tilde f^T H x
= 2 \Big[ \bar f_B^T \big( H_{BN} - H_{BB} W \big) + \bar f_N^T \big( H_{NN} - H_{NB} W \big) \Big] x_N
+ 2 \bar f_B^T H_{BB} v + 2 \bar f_N^T H_{NB} v.
\tag{22}
$$

By combining (19), (20), (21) and (22), the objective function in (15) is rewritten as

$$
\Phi = \tfrac{1}{2} x_N^T \bar H x_N + h^T x_N + d,
\tag{23}
$$

where

$$
\bar H = W^T H_{BB} W - 2 W^T H_{BN} + H_{NN} + (A_N - A_B W)^T E (A_N - A_B W)
\tag{24}
$$

and

$$
h = (H_{BN} - H_{BB} W)^T (v + \bar f_B) + (H_{NN} - H_{NB} W)^T \bar f_N + (A_N - A_B W)^T E (A_B v - a),
\tag{25}
$$

with $W = B_B^{-1} B_N$ and $v = B_B^{-1}(b - B_N \bar f_N) - \bar f_B$, and where $d$ is independent of $x_N$. The optimisation problem can now, starting from the initial guess $\bar f$, be solved by finding the solution to the linear system of equations

$$
\nabla\Phi = \bar H x_N + h = 0.
\tag{26}
$$

The set of spline coefficients that defines $f(t)$ is obtained from

$$
f = \bar f + P^T \begin{bmatrix} x_B \\ x_N \end{bmatrix}
\tag{27}
$$

and is used as the starting point in the next iteration of the solution process: $\bar f_{k+1} = f_k$. What remains is to find $B$, $b$, $\nabla g_e(\bar f)$ and $H$.

3.1.1 Identifying the basic variables

The requirements for continuity and smoothness in the knot points between the splines, (8), are rewritten to have zero right-hand sides. With $f_i$ and $f_{i+1}$ substituted by the spline expression in (7), and with the row order inverted, the linear system of equations for knot point $T_{i+1}^s$ becomes

$$
\begin{aligned}
3 a_i (T_{i+1}^s - T_i^s) + b_i - b_{i+1} &= 0 \\
3 a_i (T_{i+1}^s - T_i^s)^2 + 2 b_i (T_{i+1}^s - T_i^s) + c_i - c_{i+1} &= 0 \\
a_i (T_{i+1}^s - T_i^s)^3 + b_i (T_{i+1}^s - T_i^s)^2 + c_i (T_{i+1}^s - T_i^s) + d_i - d_{i+1} &= 0,
\end{aligned}
\tag{28}
$$

where only the contribution from the constant term on the right-hand side of (8) is kept; the remaining terms become zero. Let $\beta_i \in \mathbb{R}^{3 \times 4n}$, where $n$ is the number of splines, express (28) through $\beta_i f = 0$. The requirements for all the splines in $\mathcal{F}$ are then given by

$$
\beta (\bar f + \Delta f) = b,
\tag{29}
$$

where $b = 0$ and

$$
\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{n-2} \end{bmatrix} \in \mathbb{R}^{3n \times 4n}.
\tag{30}
$$

If $\bar f_B = (b_1, c_1, d_1, b_2, c_2, d_2, \ldots, b_n, c_n, d_n)^T$, row operations on $\beta_i$ yield the modified matrix

$$
B_i = \begin{bmatrix}
\beta_{i,1} \\
\beta_{i,2} - 2(T_{i+1} - T_i)\,\beta_{i,1} \\
\beta_{i,3} - (T_{i+1} - T_i)^2\,\beta_{i,1} - (T_{i+1} - T_i)\,\beta_{i,2}
\end{bmatrix}, \quad i = 1, 2, \ldots, n-2,
\tag{31}
$$

where $\beta_{i,j}$ denotes row $j$ of the matrix $\beta_i$. From the $B_i$,

$$
B = \begin{bmatrix} \beta_0 \\ B_1 \\ \vdots \\ B_{n-2} \end{bmatrix} \in \mathbb{R}^{3n \times 4n}
\tag{32}
$$

is obtained, which defines a linear system of equations that guarantees continuity and smoothness in the knot points of the splines. The structure of $B$ is such that all basic variables, $\bar f_B$, can be expressed in terms of the nonbasic variables $\bar f_N$ only. A consequence of this is that $B$ can be split into $B_B$ and $B_N$ with a permutation according to

$$
B P^T = [B_B \;\; B_N],
\tag{33}
$$

where $B_B = I \in \mathbb{R}^{3n \times 3n}$ and $B_N \in \mathbb{R}^{3n \times (n+3)}$. The choice of basic and nonbasic variables is based on the structure of $\beta$, which allows the easy calculation of $B_B^{-1}$. The permutation matrix $P$ is used throughout to order the spline coefficients starting with the basic variables and ending with the nonbasic variables. Permuting the original constraint (29) yields the rewritten constraint

$$
B(\tilde f + x) = B_B(\bar f_B + x_B) + B_N(\bar f_N + x_N) = b,
\tag{34}
$$

which is split into two terms, basic and nonbasic variables, with $b = 0$. This yields (14), which is used for eliminating variables in the objective function.

3.1.2 Linearising $g_e(f)$

The first constraint in the simplified optimisation model, $g_e(f) + F_e z_e = b_e$, needs a defined function $g_e$. The function takes the entire forward rate structure and transforms it to the corresponding rate for the fixed leg of an OIS, in line with the earlier notation:

$$
g_{e,j}(f) \overset{(4)}{=} F_j = \frac{1 - e^{-r(T_j^k)\, T_j^k}}{\sum_{i=1}^{N_j} t_{i,j}^f\, e^{-r(t_{i,j})\, t_{i,j}}} = \frac{\Gamma_j(f)}{\Pi_j(f)}.
\tag{35}
$$

When the spot rate is calculated from the forward rate structure, all the splines contribute to a sum of integrals, where each spline $f_i(t)$ comprises a part of the entire forward rate structure $f(t)$. Then the

following relation between the forward rate structure and the spot rate is obtained:

$$
r(T_j^k) \overset{(5)}{=} \frac{1}{T_j^k} \int_0^{T_j^k} f(t)\, dt
\overset{(7)}{=} \frac{1}{T_j^k} \sum_{\{i :\, T_i^s \le T_j^k\}} \int_{T_i^s}^{\tau_i} f_i(t)\, dt
= \frac{1}{T_j^k} \sum_{\{i :\, T_i^s \le T_j^k\}} \left[ \frac{a_i^f}{4} (\tau_i - T_i^s)^4 + \frac{b_i^f}{3} (\tau_i - T_i^s)^3 + \frac{c_i^f}{2} (\tau_i - T_i^s)^2 + d_i^f (\tau_i - T_i^s) \right],
\tag{36}
$$

where $\tau_i = \min\{T_{i+1}^s, T_j^k\}$. The partitioning of the forward rate curve into different cubic splines entails a sum over all splines active up until the maturity of the contract, $T_j^k$. In order to express the Jacobian $\nabla g_e(f; T^s)^T$, the numerator, $\Gamma_j(f)$, and the denominator, $\Pi_j(f)$, in (35) are first split apart, whereupon the partial derivatives of these with respect to each spline coefficient are calculated according to

$$
\frac{\partial \Gamma_j(f)}{\partial a_i} = \begin{cases} e^{-r(T_j^k)\,T_j^k}\, \tfrac{1}{4}(\tau_i - T_i^s)^4 & \text{if } T_j^k \ge T_i^s \\ 0 & \text{otherwise} \end{cases}
\qquad
\frac{\partial \Gamma_j(f)}{\partial b_i} = \begin{cases} e^{-r(T_j^k)\,T_j^k}\, \tfrac{1}{3}(\tau_i - T_i^s)^3 & \text{if } T_j^k \ge T_i^s \\ 0 & \text{otherwise} \end{cases}
$$

$$
\frac{\partial \Gamma_j(f)}{\partial c_i} = \begin{cases} e^{-r(T_j^k)\,T_j^k}\, \tfrac{1}{2}(\tau_i - T_i^s)^2 & \text{if } T_j^k \ge T_i^s \\ 0 & \text{otherwise} \end{cases}
\qquad
\frac{\partial \Gamma_j(f)}{\partial d_i} = \begin{cases} e^{-r(T_j^k)\,T_j^k}\, (\tau_i - T_i^s) & \text{if } T_j^k \ge T_i^s \\ 0 & \text{otherwise} \end{cases}
\tag{37}
$$

and

$$
\begin{aligned}
\frac{\partial \Pi_j(f)}{\partial a_i} &= -\sum_{\{k :\, t_{k,j} \ge T_i^s\}} t_{k,j}^f\, e^{-r(t_{k,j})\, t_{k,j}}\, \tfrac{1}{4}(\tau_i - T_i^s)^4 \\
\frac{\partial \Pi_j(f)}{\partial b_i} &= -\sum_{\{k :\, t_{k,j} \ge T_i^s\}} t_{k,j}^f\, e^{-r(t_{k,j})\, t_{k,j}}\, \tfrac{1}{3}(\tau_i - T_i^s)^3 \\
\frac{\partial \Pi_j(f)}{\partial c_i} &= -\sum_{\{k :\, t_{k,j} \ge T_i^s\}} t_{k,j}^f\, e^{-r(t_{k,j})\, t_{k,j}}\, \tfrac{1}{2}(\tau_i - T_i^s)^2 \\
\frac{\partial \Pi_j(f)}{\partial d_i} &= -\sum_{\{k :\, t_{k,j} \ge T_i^s\}} t_{k,j}^f\, e^{-r(t_{k,j})\, t_{k,j}}\, (\tau_i - T_i^s),
\end{aligned}
\tag{38}
$$

where $N_j$ is the number of cash flows during the term of the contract, $t_{k,j}$ is the time until cash flow $k$, $t_{k,j}^f$ is the time since the previous cash flow, and where now $\tau_i = \min\{T_{i+1}^s, t_{k,j}\}$. For contracts with maturities longer than one year, $t_{k,j}^f = 1$, under the assumption that cash flows occur yearly for all OIS. Finally, all the elements of the Jacobian are calculated with the quotient rule for derivatives according to

$$
\nabla g_e(f; T^s) =
\begin{bmatrix}
\dfrac{\partial g_1}{\partial a_0} & \cdots & \dfrac{\partial g_m}{\partial a_0} \\
\vdots & & \vdots \\
\dfrac{\partial g_1}{\partial d_n} & \cdots & \dfrac{\partial g_m}{\partial d_n}
\end{bmatrix},
\tag{39}
$$

where

$$
\frac{\partial g_j}{\partial a_i} = \frac{\dfrac{\partial \Gamma_j(f)}{\partial a_i}\, \Pi_j(f) - \Gamma_j(f)\, \dfrac{\partial \Pi_j(f)}{\partial a_i}}{\Pi_j(f)^2}.
\tag{40}
$$

Note that the elements of the final matrix are transposed with respect to a Jacobian.
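The piecewise integration in (36) can be sketched directly; the function and variable names below are illustrative, not from the report:

```python
def spot_rate(T, knots, coeffs):
    """Spot rate r(T) = (1/T) * int_0^T f(t) dt for a piecewise cubic
    forward curve with f_i(t) = a(t - T_i)^3 + b(t - T_i)^2 + c(t - T_i) + d
    on [T_i, T_{i+1}], summing the closed-form integral of each active
    piece up to tau_i = min(T_{i+1}, T), as in eq. (36)."""
    total = 0.0
    for i in range(len(knots) - 1):
        Ti = knots[i]
        if Ti >= T:
            break                          # piece starts after maturity
        tau = min(knots[i + 1], T)         # tau_i = min(T_{i+1}, T)
        a, b, c, d = coeffs[i]
        u = tau - Ti
        total += a * u**4 / 4 + b * u**3 / 3 + c * u**2 / 2 + d * u
    return total / T

# Flat 2% forward curve on two pieces: the spot rate must also be 2%.
knots = [0.0, 1.0, 2.0]
flat = [(0.0, 0.0, 0.0, 0.02), (0.0, 0.0, 0.0, 0.02)]
r_flat = spot_rate(1.5, knots, flat)

# Linear forward curve f(t) = t, written piecewise: on [1, 2] the local
# representation is 1*(t - 1) + 1. Then r(2) = (1/2) * int_0^2 t dt = 1.
lin = [(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 1.0)]
r_lin = spot_rate(2.0, knots, lin)
```

The flat-curve case is a useful invariance check: any correct implementation of (36) must return the flat forward level as the spot rate for every maturity.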

3.1.3 Regularisation

In order to find an explicit expression for the regularisation, the weight function $w_2(t)$ in (9) is least-squares fitted to a cubic polynomial, and the forward rate curve is modelled with the cubic splines $f_i(t)$, so that

$$
\begin{aligned}
p_w(t) &= a_w t^3 + b_w t^2 + c_w t + d_w \\
f_i(t) &= a_i^f (t - T_i^s)^3 + b_i^f (t - T_i^s)^2 + c_i^f (t - T_i^s) + d_i^f \\
f_i'(t) &= 3 a_i^f (t - T_i^s)^2 + 2 b_i^f (t - T_i^s) + c_i^f \\
f_i''(t) &= 6 a_i^f (t - T_i^s) + 2 b_i^f \\
f_i''(t)^2 &= 36 (a_i^f)^2 (t - T_i^s)^2 + 24\, a_i^f b_i^f (t - T_i^s) + 4 (b_i^f)^2.
\end{aligned}
\tag{41}
$$

The product $p_w(t) f_i''(t)^2$ is determined and split into four terms, where the distributive property has been used to break out the coefficients of $p_w(t)$:

$$
p_w(t) f_i''(t)^2 = q_i^a(t) + q_i^b(t) + q_i^c(t) + q_i^d(t),
$$

$$
\begin{aligned}
q_i^a(t) &= a_w \big( 36 t^3 (t - T_i^s)^2 (a_i^f)^2 + 24 t^3 (t - T_i^s)\, a_i^f b_i^f + 4 t^3 (b_i^f)^2 \big) \\
q_i^b(t) &= b_w \big( 36 t^2 (t - T_i^s)^2 (a_i^f)^2 + 24 t^2 (t - T_i^s)\, a_i^f b_i^f + 4 t^2 (b_i^f)^2 \big) \\
q_i^c(t) &= c_w \big( 36 t (t - T_i^s)^2 (a_i^f)^2 + 24 t (t - T_i^s)\, a_i^f b_i^f + 4 t (b_i^f)^2 \big) \\
q_i^d(t) &= d_w \big( 36 (t - T_i^s)^2 (a_i^f)^2 + 24 (t - T_i^s)\, a_i^f b_i^f + 4 (b_i^f)^2 \big),
\end{aligned}
\tag{42}
$$

which together with (3) and

$$
h(f) = \frac{1}{2} \sum_{i=0}^{n-1} \int_{T_i^s}^{T_{i+1}^s} w_2(t) f_i''(t)^2\, dt
\tag{43}
$$

yields

$$
2 h(f) = \sum_{i=0}^{n-1} \int_{T_i^s}^{T_{i+1}^s} w_2(t) f_i''(t)^2\, dt
= \sum_{i=0}^{\iota-1} \int_{T_i^s}^{T_{i+1}^s} p_w(t) f_i''(t)^2\, dt
+ \int_{T_\iota^s}^{t_h} p_w(t) f_\iota''(t)^2\, dt
+ \int_{t_h}^{T_{\iota+1}^s} f_\iota''(t)^2\, dt
+ \sum_{i=\iota+1}^{n-1} \int_{T_i^s}^{T_{i+1}^s} f_i''(t)^2\, dt,
\tag{44}
$$

where $\iota = \max\{i : T_i \le t_h\}$, since the second derivative in the regularisation stops being weighted with $w_2(t)$ when $t > t_h$. This is split into two subproblems:

$$
f_i^T Q_i^1(t_1, t_2)\, f_i = \int_{t_1}^{t_2} p_w(t) f_i''(t)^2\, dt
\tag{45}
$$

and

$$
f_i^T Q_i^2(t_1, t_2)\, f_i = \int_{t_1}^{t_2} f_i''(t)^2\, dt,
\tag{46}
$$

where $f_i = (a_i, b_i, c_i, d_i)^T$ are the coefficients of spline $i$ in the forward rate curve. For equation (45) the following holds:

$$
\int_{t_1}^{t_2} p_w(t) f_i''(t)^2\, dt
= \int_{t_1}^{t_2} \big( q_i^a(t) + q_i^b(t) + q_i^c(t) + q_i^d(t) \big)\, dt
= f_i^T \big( Q_i^a(t_1,t_2) + Q_i^b(t_1,t_2) + Q_i^c(t_1,t_2) + Q_i^d(t_1,t_2) \big) f_i
= f_i^T Q_i^1(t_1,t_2)\, f_i,
\tag{47}
$$

where the integrals are calculated term by term. All four integrals in (47) are quadratic forms, which is shown below:

$$
\int_{t_1}^{t_2} q_i^a(t)\, dt = a_w \Big[ (a_i^f)^2 \big( 6(t_2^6 - t_1^6) - \tfrac{72}{5}(t_2^5 - t_1^5) T_i^s + 9(t_2^4 - t_1^4)(T_i^s)^2 \big) + a_i^f b_i^f \big( \tfrac{24}{5}(t_2^5 - t_1^5) - 6(t_2^4 - t_1^4) T_i^s \big) + (b_i^f)^2 (t_2^4 - t_1^4) \Big],
\tag{48}
$$

which by inspection is a quadratic form, which is why the ansatz in the two last steps of (47) is made. Similar calculations are performed for the remaining integrals:

$$
\begin{aligned}
\int_{t_1}^{t_2} q_i^b(t)\, dt &= b_w \Big[ (a_i^f)^2 \big( \tfrac{36}{5}(t_2^5 - t_1^5) - 18(t_2^4 - t_1^4) T_i^s + 12(t_2^3 - t_1^3)(T_i^s)^2 \big) + a_i^f b_i^f \big( 6(t_2^4 - t_1^4) - 8(t_2^3 - t_1^3) T_i^s \big) + \tfrac{4}{3}(b_i^f)^2 (t_2^3 - t_1^3) \Big] \\
\int_{t_1}^{t_2} q_i^c(t)\, dt &= c_w \Big[ (a_i^f)^2 \big( 9(t_2^4 - t_1^4) - 24(t_2^3 - t_1^3) T_i^s + 18(t_2^2 - t_1^2)(T_i^s)^2 \big) + a_i^f b_i^f \big( 8(t_2^3 - t_1^3) - 12(t_2^2 - t_1^2) T_i^s \big) + 2 (b_i^f)^2 (t_2^2 - t_1^2) \Big] \\
\int_{t_1}^{t_2} q_i^d(t)\, dt &= d_w \Big[ (a_i^f)^2 \big( 12(t_2^3 - t_1^3) - 36(t_2^2 - t_1^2) T_i^s + 36(t_2 - t_1)(T_i^s)^2 \big) + a_i^f b_i^f \big( 12(t_2^2 - t_1^2) - 24(t_2 - t_1) T_i^s \big) + 4 (b_i^f)^2 (t_2 - t_1) \Big].
\end{aligned}
\tag{49}
$$

The expressions above are written in quadratic form with the following matrices:

$$
Q_i^a(t_1,t_2) = a_w \begin{bmatrix}
6(t_2^6 - t_1^6) - \tfrac{72}{5}(t_2^5 - t_1^5) T_i^s + 9(t_2^4 - t_1^4)(T_i^s)^2 & \tfrac{12}{5}(t_2^5 - t_1^5) - 3(t_2^4 - t_1^4) T_i^s & 0 & 0 \\
\tfrac{12}{5}(t_2^5 - t_1^5) - 3(t_2^4 - t_1^4) T_i^s & t_2^4 - t_1^4 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$

$$
Q_i^b(t_1,t_2) = b_w \begin{bmatrix}
\tfrac{36}{5}(t_2^5 - t_1^5) - 18(t_2^4 - t_1^4) T_i^s + 12(t_2^3 - t_1^3)(T_i^s)^2 & 3(t_2^4 - t_1^4) - 4(t_2^3 - t_1^3) T_i^s & 0 & 0 \\
3(t_2^4 - t_1^4) - 4(t_2^3 - t_1^3) T_i^s & \tfrac{4}{3}(t_2^3 - t_1^3) & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$

$$
Q_i^c(t_1,t_2) = c_w \begin{bmatrix}
9(t_2^4 - t_1^4) - 24(t_2^3 - t_1^3) T_i^s + 18(t_2^2 - t_1^2)(T_i^s)^2 & 4(t_2^3 - t_1^3) - 6(t_2^2 - t_1^2) T_i^s & 0 & 0 \\
4(t_2^3 - t_1^3) - 6(t_2^2 - t_1^2) T_i^s & 2(t_2^2 - t_1^2) & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$

$$
Q_i^d(t_1,t_2) = d_w \begin{bmatrix}
12(t_2^3 - t_1^3) - 36(t_2^2 - t_1^2) T_i^s + 36(t_2 - t_1)(T_i^s)^2 & 6(t_2^2 - t_1^2) - 12(t_2 - t_1) T_i^s & 0 & 0 \\
6(t_2^2 - t_1^2) - 12(t_2 - t_1) T_i^s & 4(t_2 - t_1) & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$

$$
Q_i^1(t_1,t_2) = Q_i^a(t_1,t_2) + Q_i^b(t_1,t_2) + Q_i^c(t_1,t_2) + Q_i^d(t_1,t_2),
\tag{50}
$$

where zeros have been added in order to match the form of $f_i$ by expanding to 4×4 matrices. It remains to calculate $Q_i^2(t_1,t_2)$ in order to finally obtain the regularisation $h(f)$ in matrix form. According to (46) and (41) we have

$$
\int_{t_1}^{t_2} f_i''(t)^2\, dt = (a_i^f)^2 \big( 12(t_2^3 - t_1^3) - 36(t_2^2 - t_1^2) T_i^s + 36(t_2 - t_1)(T_i^s)^2 \big) + a_i^f b_i^f \big( 12(t_2^2 - t_1^2) - 24(t_2 - t_1) T_i^s \big) + 4 (b_i^f)^2 (t_2 - t_1),
\tag{51}
$$

which yields

$$
Q_i^2(t_1,t_2) = \begin{bmatrix}
12(t_2^3 - t_1^3) - 36(t_2^2 - t_1^2) T_i^s + 36(t_2 - t_1)(T_i^s)^2 & 6(t_2^2 - t_1^2) - 12(t_2 - t_1) T_i^s & 0 & 0 \\
6(t_2^2 - t_1^2) - 12(t_2 - t_1) T_i^s & 4(t_2 - t_1) & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\tag{52}
$$

and the regularisation is then

$$
h(f) = \frac{1}{2} \left( \sum_{i=0}^{\iota-1} f_i^T Q_i^1(T_i^s, T_{i+1}^s) f_i + f_\iota^T Q_\iota^1(T_\iota^s, t_h) f_\iota + f_\iota^T Q_\iota^2(t_h, T_{\iota+1}^s) f_\iota + \sum_{i=\iota+1}^{n-1} f_i^T Q_i^2(T_i^s, T_{i+1}^s) f_i \right),
\tag{53}
$$

which in matrix form is written as

$$
H = P\, \operatorname{diag}\!\big( Q_0^1(T_0^s, T_1^s), \ldots, Q_{\iota-1}^1(T_{\iota-1}^s, T_\iota^s),\; Q_\iota^1(T_\iota^s, t_h) + Q_\iota^2(t_h, T_{\iota+1}^s),\; Q_{\iota+1}^2(T_{\iota+1}^s, T_{\iota+2}^s), \ldots, Q_{n-1}^2(T_{n-1}^s, T_n^s) \big)\, P^T,
\tag{54}
$$

where $\operatorname{diag}(A_1, \ldots, A_n)$ creates a block diagonal matrix.
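As a numerical sanity check on quadratic forms of this kind, the matrix for $\int f''(t)^2\, dt$ over one spline piece can be verified against brute-force quadrature. For simplicity the sketch below works in the local variable $s = t - T_i^s$, so its matrix is the local-coordinate analogue of (52), not (52) itself; all names are ours:

```python
import numpy as np

def Q2_local(s1, s2):
    """Quadratic form for int_{s1}^{s2} f''(s)^2 ds with
    f(s) = a s^3 + b s^2 + c s + d in the local variable s = t - T_i^s,
    so that f''(s) = 6 a s + 2 b and only a and b enter."""
    Q = np.zeros((4, 4))
    Q[0, 0] = 12.0 * (s2**3 - s1**3)           # from the 36 a^2 s^2 term
    Q[0, 1] = Q[1, 0] = 6.0 * (s2**2 - s1**2)  # 24 a b s term, split evenly
    Q[1, 1] = 4.0 * (s2 - s1)                  # from the 4 b^2 term
    return Q

# Compare the closed form against brute-force trapezoidal quadrature.
coef = np.array([0.3, -1.2, 0.7, 0.05])        # (a, b, c, d)
s = np.linspace(0.2, 1.7, 200001)
fpp = 6.0 * coef[0] * s + 2.0 * coef[1]        # f''(s) on the grid
brute = np.sum((fpp[1:]**2 + fpp[:-1]**2) * 0.5 * np.diff(s))
exact = coef @ Q2_local(0.2, 1.7) @ coef
```

The same check, applied with the weight polynomial included, would validate the $Q^1$ matrices in (50) as well.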

4 Evaluation

The forward rate curves produced with the simplified model will be analysed through their principal components in order to evaluate how reasonable they are. Additionally, a comparison between the results from this model and the results from LSExp will be made. Depending on what the forward rate curve will be used for, different types of evaluations could be of interest. One example is to use the forward rate curve as a tool for interpolation or extrapolation when pricing OIS with maturities that are not quoted in the market. To evaluate this, it could be of additional interest to price OIS with such maturities and compare the resulting prices with observed market quotes.

4.1 Procedure

In order to perform the evaluation, a framework has been implemented to analyse the results using the methods below, which have been chosen to evaluate the produced forward rate curves.

4.1.1 LSExp vs. spline approximation of LSExp

A pre-existing implementation of LSExp was used for comparison. LSExp yields reasonable forward rate curves [1], and in order to evaluate the results of the project, the produced forward rate curves are compared to these by examining their structure as well as their principal components. The methodology is discussed in Section 4.1.2.

4.1.2 Principal component analysis (PCA)

The most common systematic risk factors are shift, twist and butterfly; these risk factors have the greatest impact on changes in interest rates for longer maturities [1]. The risk factors that affect the forward rate curve in the short end are usually explained by principal components with less impact on the total variance. This is due to the fact that there exists a lot of information about forward rates within a short period in the future, but beyond that period there is almost no information about how the forward rates will move, which is why changes in these rates are mainly affected by systematic risks.
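The PCA used in this evaluation (covariance of daily rate changes, eigendecomposition, variance shares) can be sketched on synthetic data; the factor structure below is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily changes in forward rates across 15 maturities:
# a parallel shift factor plus a slope (twist) factor plus small noise.
m = np.linspace(0.5, 15.0, 15)                  # maturities in years
shift = rng.normal(0.0, 1e-3, size=(500, 1)) * np.ones((1, 15))
twist = rng.normal(0.0, 5e-4, size=(500, 1)) * (m - m.mean())[None, :]
dr = shift + twist + rng.normal(0.0, 1e-5, size=(500, 15))

C = np.cov(dr, rowvar=False)                    # covariance of daily changes
lam, U = np.linalg.eigh(C)                      # C = U diag(lam) U^T
order = np.argsort(lam)[::-1]                   # sort by explained variance
lam, U = lam[order], U[:, order]
explained = lam / lam.sum()                     # share lambda_k / sum_i lambda_i
```

With only two true factors in the synthetic data, the first two eigenvectors recover them and absorb essentially all of the variance, mirroring how shift and twist dominate real curves.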
In order to perform a PCA on forward rate curves, the daily changes in interest rates have to be calculated for each day and each maturity, including maturities for which no contract exists, in order to obtain a large vector that is seemingly continuous. Then the covariance matrix $C$ of these interest rate changes over time is calculated. Since covariance matrices are always positive semidefinite and symmetric, an eigenvalue decomposition can be performed according to $C = U \Lambda U^T$, where $U$ contains the eigenvectors $u_k$ of $C$, with the corresponding eigenvalues $\lambda_k$ in the diagonal matrix $\Lambda$. Since the eigenvectors of $C$ are orthogonal, given that all the eigenvalues are unique, they can be used to study independent changes in the forward rate curve. Each eigenvector $u_k$ explains one type of change, and its impact on the total variation in the forward rate curve is described by its eigenvalue according to $\lambda_k / \sum_i \lambda_i$.

4.2 Results and analysis

Below are presented the maturities of the contracts included in the models, as well as the different parameter configurations for spline knot points and penalties for pricing errors. A unique configuration of

knot points is indexed with a number under U in Table 2, in order to keep track of the different data files during the calculation of all the data required for the evaluation.

Table 1: All maturities $T^k$.

Table 2: Parameter configuration used for the evaluation (U indexes the spline knot point configuration, p the penalty).

With a penalty parameter of p = 100, the constructed forward rate curves are considerably more curved in the long end compared to those with p = 1000. By examining the differences between OIS priced with the constructed forward rate curves and the quoted prices, it can be confirmed that p = 100 causes a much greater pricing error than p = 1000. By examining the pricing error over time, it was observed that contracts with very short maturities on certain days were suddenly priced entirely differently from market quotes, even for p = 1000. This pricing error was amplified when the knot points for 3/ and 6/ were not taken into consideration. The figures below first present some OIS priced with constructed forward rate curves in relation to market quotes, and thereafter forward rate curves for different parameter configurations.

Figure 1: Forward rate curves. (a) p = 1, 20 years, spline configuration 5. (b) p = 1000, 20 years, spline configuration 1.

Figure 2: Forward rate curves. (a) p = 1, 20 years, spline configuration 5. (b) p = 100, 30 years, spline configuration 1.

Figure 3: Forward rate curves. (a) p = 1000, 30 years, spline configuration 2. (b) p = 1000, 20 years, spline configuration 3.

Figure 4: Forward rate curves. (a) p = 1000, 20 years, spline configuration 2. (b) 20 years, spline configuration 5.

LSExp with a penalty for pricing errors of p = 1000 generates very unstable forward rate curves, which could be due to the fact that the model discretises the forward rate curve in a very large number of points, where the pricing error is penalised in each discretised point. In the approximative model, the degrees of freedom are considerably fewer, and the pricing error is penalised in fewer points. With a higher value of p in the approximative model, a behaviour similar to that of p = 1000 in LSExp is obtained in the forward rate curve. With p = 1, the forward rate curve as well as the principal components become very unstable, with no clear shift, twist or butterfly. In the figures below, principal components are presented for different parameter configurations.

Figure 5: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E0.

Figure 6: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E2.

Figure 7: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E3.

Figure 8: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E3.

Figure 9: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E3.

Figure 10: Comparison of shift, twist and butterfly between LSExp1E0 and LSExpSplines1E6.

With p = 1 the forward rate curves are much smoother, but at the same time the first principal component is very flat. In all evaluations, the first principal component deviates considerably, which could be caused by the fact that the constructed forward rate curves have no requirement to be dampened in the last knot point. Without a requirement that dampens the first derivative in the last knot point, no "financial" shift in the forward rate curve can be observed. This could be solved by adding a condition on the last knot point, similarly to Deventer & Imai. The model is therefore very sensitive to parameter configurations, but for spline configurations without the earliest knot points, in combination with a penalty parameter of p = 1000, the model is considered stable enough to construct reliable forward rate curves over time. These configurations are also considered to give a good enough balance between pricing errors and smoothness in the forward rate curve. A weakness identified in the model is that the first principal component does not show a true parallel shift, but instead drifts towards longer maturities, probably due to the missing condition on the first derivative in the last knot point.

References

[1] J. Blomvall and M. Ndengo, Accurate measurements of yield curves and their systematic risk factors.


Problem Set 9 Due: In class Tuesday, Nov. 27 Late papers will be accepted until 12:00 on Thursday (at the beginning of class). Math 3, Fall Jerry L. Kazdan Problem Set 9 Due In class Tuesday, Nov. 7 Late papers will be accepted until on Thursday (at the beginning of class).. Suppose that is an eigenvalue of an n n matrix A and

More information

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione, Univ. di Roma Tor Vergata, via di Tor Vergata 11,

More information

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4 Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix

More information

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139 Upper and Lower Bounds on the Number of Faults a System Can Withstand Without Repairs Michel Goemans y Nancy Lynch z Isaac Saias x Laboratory for Computer Science Massachusetts Institute of Technology

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

1. The Polar Decomposition

1. The Polar Decomposition A PERSONAL INTERVIEW WITH THE SINGULAR VALUE DECOMPOSITION MATAN GAVISH Part. Theory. The Polar Decomposition In what follows, F denotes either R or C. The vector space F n is an inner product space with

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

University of Karachi

University of Karachi ESTIMATING TERM STRUCTURE OF INTEREST RATE: A PRINCIPAL COMPONENT, POLYNOMIAL APPROACH by Nasir Ali Khan A thesis submitted in partial fulfillment of the requirements for the degree of B.S. in Actuarial

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

CHAPTER 11. A Revision. 1. The Computers and Numbers therein

CHAPTER 11. A Revision. 1. The Computers and Numbers therein CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable)

A nonlinear equation is any equation of the form. f(x) = 0. A nonlinear equation can have any number of solutions (finite, countable, uncountable) Nonlinear equations Definition A nonlinear equation is any equation of the form where f is a nonlinear function. Nonlinear equations x 2 + x + 1 = 0 (f : R R) f(x) = 0 (x cos y, 2y sin x) = (0, 0) (f :

More information

Numerical Integration for Multivariable. October Abstract. We consider the numerical integration of functions with point singularities over

Numerical Integration for Multivariable. October Abstract. We consider the numerical integration of functions with point singularities over Numerical Integration for Multivariable Functions with Point Singularities Yaun Yang and Kendall E. Atkinson y October 199 Abstract We consider the numerical integration of functions with point singularities

More information

Notes on Dantzig-Wolfe decomposition and column generation

Notes on Dantzig-Wolfe decomposition and column generation Notes on Dantzig-Wolfe decomposition and column generation Mette Gamst November 11, 2010 1 Introduction This note introduces an exact solution method for mathematical programming problems. The method is

More information

Lecture 1 INF-MAT3350/ : Some Tridiagonal Matrix Problems

Lecture 1 INF-MAT3350/ : Some Tridiagonal Matrix Problems Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems Tom Lyche University of Oslo Norway Lecture 1 INF-MAT3350/4350 2007: Some Tridiagonal Matrix Problems p.1/33 Plan for the day 1. Notation

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St

Outline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic

More information

MATH 326: RINGS AND MODULES STEFAN GILLE

MATH 326: RINGS AND MODULES STEFAN GILLE MATH 326: RINGS AND MODULES STEFAN GILLE 1 2 STEFAN GILLE 1. Rings We recall first the definition of a group. 1.1. Definition. Let G be a non empty set. The set G is called a group if there is a map called

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

Math 215 HW #11 Solutions

Math 215 HW #11 Solutions Math 215 HW #11 Solutions 1 Problem 556 Find the lengths and the inner product of 2 x and y [ 2 + ] Answer: First, x 2 x H x [2 + ] 2 (4 + 16) + 16 36, so x 6 Likewise, so y 6 Finally, x, y x H y [2 +

More information

A A x i x j i j (i, j) (j, i) Let. Compute the value of for and

A A x i x j i j (i, j) (j, i) Let. Compute the value of for and 7.2 - Quadratic Forms quadratic form on is a function defined on whose value at a vector in can be computed by an expression of the form, where is an symmetric matrix. The matrix R n Q R n x R n Q(x) =

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Constrained Leja points and the numerical solution of the constrained energy problem

Constrained Leja points and the numerical solution of the constrained energy problem Journal of Computational and Applied Mathematics 131 (2001) 427 444 www.elsevier.nl/locate/cam Constrained Leja points and the numerical solution of the constrained energy problem Dan I. Coroian, Peter

More information

Multivariate Statistical Analysis

Multivariate Statistical Analysis Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Peter J. Olver School of Mathematics University of Minnesota Minneapolis, MN 55455 olver@math.umn.edu http://www.math.umn.edu/ olver Chehrzad Shakiban Department of Mathematics University

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

On the Quadratic Convergence of the. Falk-Langemeyer Method. Ivan Slapnicar. Vjeran Hari y. Abstract

On the Quadratic Convergence of the. Falk-Langemeyer Method. Ivan Slapnicar. Vjeran Hari y. Abstract On the Quadratic Convergence of the Falk-Langemeyer Method Ivan Slapnicar Vjeran Hari y Abstract The Falk{Langemeyer method for solving a real denite generalized eigenvalue problem, Ax = Bx; x 6= 0, is

More information

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2

[3] (b) Find a reduced row-echelon matrix row-equivalent to ,1 2 2 MATH Key for sample nal exam, August 998 []. (a) Dene the term \reduced row-echelon matrix". A matrix is reduced row-echelon if the following conditions are satised. every zero row lies below every nonzero

More information

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems

Contents. 6 Systems of First-Order Linear Dierential Equations. 6.1 General Theory of (First-Order) Linear Systems Dierential Equations (part 3): Systems of First-Order Dierential Equations (by Evan Dummit, 26, v 2) Contents 6 Systems of First-Order Linear Dierential Equations 6 General Theory of (First-Order) Linear

More information

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial

4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and

More information

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance.

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. hree-dimensional variational assimilation (3D-Var) In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. Lorenc (1986) showed

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

2 Tikhonov Regularization and ERM

2 Tikhonov Regularization and ERM Introduction Here we discusses how a class of regularization methods originally designed to solve ill-posed inverse problems give rise to regularized learning algorithms. These algorithms are kernel methods

More information

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog,

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog, A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact Data Bob Anderssen and Frank de Hoog, CSIRO Division of Mathematics and Statistics, GPO Box 1965, Canberra, ACT 2601, Australia

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B.

5 and A,1 = B = is obtained by interchanging the rst two rows of A. Write down the inverse of B. EE { QUESTION LIST EE KUMAR Spring (we will use the abbreviation QL to refer to problems on this list the list includes questions from prior midterm and nal exams) VECTORS AND MATRICES. Pages - of the

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

A fast algorithm to generate necklaces with xed content

A fast algorithm to generate necklaces with xed content Theoretical Computer Science 301 (003) 477 489 www.elsevier.com/locate/tcs Note A fast algorithm to generate necklaces with xed content Joe Sawada 1 Department of Computer Science, University of Toronto,

More information

Chapter 3: Vector Spaces x1: Basic concepts Basic idea: a vector space V is a collection of things you can add together, and multiply by scalars (= nu

Chapter 3: Vector Spaces x1: Basic concepts Basic idea: a vector space V is a collection of things you can add together, and multiply by scalars (= nu Math 314 Topics for second exam Technically, everything covered by the rst exam plus Chapter 2 x6 Determinants (Square) matrices come in two avors: invertible (all Ax = b have a solution) and noninvertible

More information

Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones

Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones Introduction to machine learning and pattern recognition Lecture 2 Coryn Bailer-Jones http://www.mpia.de/homes/calj/mlpr_mpia2008.html 1 1 Last week... supervised and unsupervised methods need adaptive

More information

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables,

Economics 472. Lecture 10. where we will refer to y t as a m-vector of endogenous variables, x t as a q-vector of exogenous variables, University of Illinois Fall 998 Department of Economics Roger Koenker Economics 472 Lecture Introduction to Dynamic Simultaneous Equation Models In this lecture we will introduce some simple dynamic simultaneous

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica

More information

Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 2013

Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 2013 Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 013 August 8, 013 Solutions: 1 Root Finding (a) Let the root be x = α We subtract α from both sides of x n+1 = x

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Lecture 11: Eigenvalues and Eigenvectors

Lecture 11: Eigenvalues and Eigenvectors Lecture : Eigenvalues and Eigenvectors De nition.. Let A be a square matrix (or linear transformation). A number λ is called an eigenvalue of A if there exists a non-zero vector u such that A u λ u. ()

More information

! 4 4! o! +! h 4 o=0! ±= ± p i And back-substituting into the linear equations gave us the ratios of the amplitudes of oscillation:.»» = A p e i! +t»»

! 4 4! o! +! h 4 o=0! ±= ± p i And back-substituting into the linear equations gave us the ratios of the amplitudes of oscillation:.»» = A p e i! +t»» Topic 6: Coupled Oscillators and Normal Modes Reading assignment: Hand and Finch Chapter 9 We are going to be considering the general case of a system with N degrees of freedome close to one of its stable

More information

CSE 554 Lecture 7: Alignment

CSE 554 Lecture 7: Alignment CSE 554 Lecture 7: Alignment Fall 2012 CSE554 Alignment Slide 1 Review Fairing (smoothing) Relocating vertices to achieve a smoother appearance Method: centroid averaging Simplification Reducing vertex

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

H. L. Atkins* NASA Langley Research Center. Hampton, VA either limiters or added dissipation when applied to

H. L. Atkins* NASA Langley Research Center. Hampton, VA either limiters or added dissipation when applied to Local Analysis of Shock Capturing Using Discontinuous Galerkin Methodology H. L. Atkins* NASA Langley Research Center Hampton, A 68- Abstract The compact form of the discontinuous Galerkin method allows

More information

Inversion Base Height. Daggot Pressure Gradient Visibility (miles)

Inversion Base Height. Daggot Pressure Gradient Visibility (miles) Stanford University June 2, 1998 Bayesian Backtting: 1 Bayesian Backtting Trevor Hastie Stanford University Rob Tibshirani University of Toronto Email: trevor@stat.stanford.edu Ftp: stat.stanford.edu:

More information

Positive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract

Positive Denite Matrix. Ya Yan Lu 1. Department of Mathematics. City University of Hong Kong. Kowloon, Hong Kong. Abstract Computing the Logarithm of a Symmetric Positive Denite Matrix Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A numerical method for computing the logarithm

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:

More information

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable

October 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable International Journal of Wavelets, Multiresolution and Information Processing c World Scientic Publishing Company Polynomial functions are renable Henning Thielemann Institut für Informatik Martin-Luther-Universität

More information

Principal Components Analysis (PCA)

Principal Components Analysis (PCA) Principal Components Analysis (PCA) Principal Components Analysis (PCA) a technique for finding patterns in data of high dimension Outline:. Eigenvectors and eigenvalues. PCA: a) Getting the data b) Centering

More information

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left

More information

Relation of Pure Minimum Cost Flow Model to Linear Programming

Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 254 Part V

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

LQ Control of a Two Wheeled Inverted Pendulum Process

LQ Control of a Two Wheeled Inverted Pendulum Process Uppsala University Information Technology Dept. of Systems and Control KN,HN,FS 2000-10 Last rev. September 12, 2017 by HR Reglerteknik II Instruction to the laboratory work LQ Control of a Two Wheeled

More information

Riemann Hypotheses. Alex J. Best 4/2/2014. WMS Talks

Riemann Hypotheses. Alex J. Best 4/2/2014. WMS Talks Riemann Hypotheses Alex J. Best WMS Talks 4/2/2014 In this talk: 1 Introduction 2 The original hypothesis 3 Zeta functions for graphs 4 More assorted zetas 5 Back to number theory 6 Conclusion The Riemann

More information

1 Solutions to selected problems

1 Solutions to selected problems Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p

More information

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv

Math 1270 Honors ODE I Fall, 2008 Class notes # 14. x 0 = F (x; y) y 0 = G (x; y) u 0 = au + bv = cu + dv Math 1270 Honors ODE I Fall, 2008 Class notes # 1 We have learned how to study nonlinear systems x 0 = F (x; y) y 0 = G (x; y) (1) by linearizing around equilibrium points. If (x 0 ; y 0 ) is an equilibrium

More information

Classnotes - MA Series and Matrices

Classnotes - MA Series and Matrices Classnotes - MA-2 Series and Matrices Department of Mathematics Indian Institute of Technology Madras This classnote is only meant for academic use. It is not to be used for commercial purposes. For suggestions

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45
