On a Unified Representation of Some Interval Analytic Algorithms. Dedicated to the professors of mathematics L. Berg, W. Engel, G. Pazderski, and H.-W. Stolle.


Rostock. Math. Kolloq. 49, 75–88 (1995)
Subject Classification (AMS): 65F15, 65G10, 65H10

Günter Mayer

On a Unified Representation of Some Interval Analytic Algorithms

Dedicated to the professors of mathematics L. Berg, W. Engel, G. Pazderski, and H.-W. Stolle

ABSTRACT. In this article we show that several interval analytic algorithms for verifying solutions x* of various mathematical problems can be viewed as special cases of one iterative method involving the first and the second derivative of the underlying function. Starting with an approximation of x*, a way is given how to construct interval vectors which contain x*, and how to improve these interval bounds.

KEY WORDS. Systems of nonlinear equations, second order methods, enclosure methods, verification methods, interval methods for nonlinear systems, algebraic eigenproblem, singular value decomposition, quadratic systems, invariant subspaces

1 Introduction

The solutions of many mathematical problems can be expressed as zeros of some function f : D ⊆ R^n → R^n. Among these problems are the algebraic eigenproblem, the generalized eigenproblem, the singular value problem and the generalized singular value problem; see [1]–[4], [8]. For example, let A, B ∈ R^{n×n}, ξ ∈ R\{0} and i₀ ∈ {1, ..., n} be given. Then, with x := (v^T, λ)^T, v ∈ R^n, λ ∈ R, the zeros x* = ((v*)^T, λ*)^T of the function

    f(x) := ( Av − λBv ; v_{i₀} − ξ )    (1)

are obviously eigenpairs of the generalized eigenproblem Av = λBv with the eigenvector v* = (v*_i) being normalized by v*_{i₀} = ξ ≠ 0. We will show how certain algorithms for verifying and enclosing solutions of the above problems can be derived from one verification method for general systems of nonlinear equations.
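As a small illustration of (1) (the dimension, the index i₀, the value ξ and the random data below are choices made for this sketch, not taken from the paper), one can check numerically that a generalized eigenpair, renormalized so that v_{i₀} = ξ, is a zero of f:

import numpy as np

n, i0, xi = 3, 0, 1.0                                 # illustrative dimension, index i0 and value xi
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n))
A = S + S.T                                           # symmetric A
M = rng.standard_normal((n, n))
B = np.eye(n) + 0.05 * (M + M.T)                      # nonsingular B

def f(x):
    # f(x) = (A v - lambda B v ; v_{i0} - xi) with x = (v^T, lambda)^T, cf. (1)
    v, lam = x[:n], x[n]
    return np.concatenate([A @ v - lam * (B @ v), [v[i0] - xi]])

vals, vecs = np.linalg.eig(np.linalg.solve(B, A))     # eigenpairs of B^{-1} A
k = int(np.argmax(np.abs(vecs[i0, :])))               # pick a pair whose i0-th component is not tiny
lam_star = vals[k].real
v_star = vecs[:, k].real
v_star = v_star * (xi / v_star[i0])                   # enforce the normalization v_{i0} = xi

print(np.max(np.abs(f(np.concatenate([v_star, [lam_star]])))))   # close to zero up to rounding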

To this end we will consider the interval function

    [g]([x], x̃) := t(x̃) + { t'(x̃) + [H]([x], x̃) } ([x] − x̃),    (2)

for which the function t : D ⊆ R^n → R^n is assumed to be twice continuously differentiable on a given open set D. The matrix t'(x̃) is the Jacobian of t at some fixed vector x̃ ∈ D. The function [H] = [H]([x], x̃) is defined for all interval vectors [x] ⊆ D and has n × n interval matrices as values. It is supposed to be continuous and inclusion monotone with respect to [x]. For t, [H] and all [x] ⊆ D we require

    t(x) ∈ [g]([x], x̃)  for all x ∈ [x]    (3)

and

    ‖ |[H]([x], x̃)| ‖ ≤ γ ‖ |[x] − x̃| ‖.    (4)

The constant γ is positive and fixed for all [x] ⊆ D; it may depend on x̃. Throughout the paper ‖ · ‖ denotes the maximum norm of a real vector and the row sum norm of a real matrix, respectively; | · | is the absolute value which we shall define for interval quantities in our next section. There, we also show how t and [H] normally are related to the given function f, the zeros of which we are interested in.

We shall derive criteria (Theorem 1) for guaranteeing the existence of a fixed point x* of t which then turns out to be a zero of f. In verifying x* we will construct an interval vector [x]^0 which contains x* and which in a natural way provides lower and upper bounds for it. Some of our criteria will yield the subset property [g]([x]^0, x̃) ⊆ [x]^0 which, together with Brouwer's fixed point theorem, forms the basis for many verification algorithms. We also will improve the enclosure [x]^0 of x* by considering the iteration

    [x]^{k+1} := [g]([x]^k, x̃) ∩ [x]^k,  k = 0, 1, ...,

where the intersection can be dropped if the above-mentioned subset property holds. Under slight additional assumptions on [H] we will show that x* is the unique fixed point of t within [x]^0, that all the iterates [x]^k contain it as an element and that they contract to it for k → ∞. For the standard choice of t and [H] it will turn out that the function [g] reduces to that in Platzöder [14] and Alefeld [5]. Therefore, parts of our criteria are generalizations of results of these authors.
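The following one-dimensional sketch may help to fix ideas; the model problem (enclosing √2 as the fixed point of t(x) = x − C(x² − 2) with C = 1/(2x̃)), the radius r = 0.1 and the tiny interval arithmetic are illustrative choices added here, not part of the paper.

def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))
def i_cap(a, b): return (max(a[0], b[0]), min(a[1], b[1]))   # intersection

xt = 1.4                           # approximation of the zero sqrt(2) of phi(x) = x^2 - 2
C = 1.0 / (2.0 * xt)               # C ~ phi'(xt)^{-1}
t_xt = xt - C * (xt ** 2 - 2.0)    # t(xt)
dt_xt = 1.0 - C * 2.0 * xt         # t'(xt), equal to 0 by the choice of C

def H(ix):
    # an admissible [H] for this quadratic t: [H]([x], xt) = -C * ([x] - xt),
    # which satisfies (3) and (4) with gamma = C
    return i_mul((-C, -C), i_sub(ix, (xt, xt)))

def g(ix):
    d = i_sub(ix, (xt, xt))
    return i_add((t_xt, t_xt), i_mul(i_add((dt_xt, dt_xt), H(ix)), d))

ix = (xt - 0.1, xt + 0.1)          # [x]^0 = xt + [-r, r] with r = 0.1
for _ in range(8):
    ix = i_cap(g(ix), ix)          # [x]^{k+1} = [g]([x]^k, xt) ∩ [x]^k
print(ix)                          # a tight enclosure of sqrt(2) = 1.41421356...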

2 Results

In order to formulate our results we first list some notations needed later on. By IR, IR^n, IR^{m×n}, respectively, we denote the set of real compact intervals, the set of vectors with n interval components, and the set of m × n matrices with entries from IR. For degenerate intervals [a, a] we simply write a, identifying in this way R with {[a, a] | a ∈ R} ⊆ IR. We proceed similarly for degenerate interval vectors and degenerate interval matrices. Examples are the null matrix O, the identity matrix I, the i-th column e^{(i)} of I and the vector e := (1, 1, ..., 1)^T. In order to indicate I ∈ R^{n×n} we sometimes write I_n instead of I. As usual, we equip R^n and R^{m×n}, respectively, with the natural semi-ordering '≤' which is defined to hold entrywise. We use the notation [A] = ([a]_{ij}) ∈ IR^{m×n} without further reference, and we assume the same for the elements of R^n, R^{m×n}, IR and IR^n. For [a] = [a̲, ā] ∈ IR we define the absolute value |[a]| by |[a]| := max{|a| : a ∈ [a]} = max{|a̲|, |ā|} and the diameter d([a]) by d([a]) := ā − a̲, and we denote the convex hull of [a], [b] ∈ IR by [a] ∪ [b]. For interval vectors and interval matrices these terms are applied entrywise. If f(x) is an expression for some function f, we write f([x]) for the interval arithmetic evaluation of this expression (cf. [5]), assuming that f([x]) exists. For further details on interval analysis we refer to [5] or [13].

We start by presenting the main result of our paper. Among others, this result lists criteria for proving the existence of a fixed point x* of t from (2) and for guaranteeing the subset property

    [g]([x]^0, x̃) = t(x̃) + { t'(x̃) + [H]([x]^0, x̃) } ([x]^0 − x̃) ⊆ [x]^0    (5)

for some interval vector [x]^0.

Theorem 1  With D, [g], [H], t, x̃ as in (2)–(4) and with γ > 0 from (4) choose r ∈ R with r > 0 such that [x]^0 := x̃ + [−r, r]e ⊆ D, and define α, β by

    α := ‖ t(x̃) − x̃ ‖,  β := ‖ t'(x̃) ‖.

Let β < 1, Δ := (1 − β)² − 4αγ ≥ 0, and let

    r₋ := (1 − β − √Δ)/(2γ),  r₊ := (1 − β + √Δ)/(2γ).

a) If r ≥ r₋ then t has at least one fixed point x* ∈ [x]^0. The iteration

    [x]^{k+1} := [g]([x]^k, x̃) ∩ [x]^k,  k = 0, 1, ...,

converges to some interval vector [x]^∞ with

    x* ∈ [x]^∞ ⊆ [x]^k ⊆ [x]^{k−1} ⊆ ... ⊆ [x]^0,  k ∈ N.

b) If r ∈ [r₋, r₊] then t has at least one fixed point x* ∈ [x]^0. In addition, [g]([x]^0, x̃) ⊆ [x]^0 holds and the iteration

    [x]^{k+1} := [g]([x]^k, x̃),  k = 0, 1, ...,

converges to some interval vector [x]^∞ with

    x* ∈ [x]^∞ ⊆ [x]^k ⊆ [x]^{k−1} ⊆ ... ⊆ [x]^0,  k ∈ N.

c) In addition to (4), let [H] fulfill

    ‖ d([H]([x], x̃)) ‖ ≤ κ ‖ d([x]) ‖    (6)

for all interval vectors [x] ⊆ D and for some positive number κ which is independent of [x] but which may depend on x̃. Define Δ̂, r̂₋, r̂₊ as Δ, r₋, r₊, with γ being replaced by γ̂ := max{γ, κ}. If Δ̂ ≥ 0 and if r ∈ [r̂₋, (r̂₋ + r̂₊)/2) then the function t has exactly one fixed point x* ∈ [x]^0, [g]([x]^0, x̃) ⊆ [x]^0 holds, and the iteration

    [x]^{k+1} := [g]([x]^k, x̃),  k = 0, 1, ...,

converges to x* with

    x* ∈ [x]^k ⊆ [x]^{k−1} ⊆ ... ⊆ [x]^0,  k ∈ N.
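For the scalar model problem used in the sketch at the end of § 1 (t(x) = x − C(x² − 2), x̃ = 1.4, C = 1/(2x̃); again an illustration added here, not the paper's data), the quantities of Theorem 1 are easily evaluated:

from math import sqrt

xt = 1.4
C = 1.0 / (2.0 * xt)
alpha = abs((xt - C * (xt ** 2 - 2.0)) - xt)   # ||t(xt) - xt||
beta = abs(1.0 - C * 2.0 * xt)                 # ||t'(xt)||, equal to 0 by the choice of C
gamma = C                                      # |[H]([x], xt)| = C * |[x] - xt|, so (4) holds with this gamma

Delta = (1.0 - beta) ** 2 - 4.0 * alpha * gamma
r_minus = (1.0 - beta - sqrt(Delta)) / (2.0 * gamma)
r_plus = (1.0 - beta + sqrt(Delta)) / (2.0 * gamma)
print(alpha, beta, Delta)      # ~ 0.0143, 0.0, 0.98: beta < 1 and Delta >= 0
print(r_minus, r_plus)         # ~ 0.0144, 2.79: the radius r = 0.1 used above lies in [r_minus, r_plus], so b) applies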

Proof: First we remark that the assumptions β < 1 and Δ ≥ 0 guarantee 0 ≤ r₋ ≤ r₊.

b) Let [x]^0 := x̃ + [−r, r]e. Then (5) is equivalent to

    [g]([x]^0, x̃) − x̃ ⊆ [x]^0 − x̃.    (7)

Property (7) certainly holds if

    |t(x̃) − x̃| + { |t'(x̃)| + |[H]([x]^0, x̃)| } r e ≤ r e,

and this, in turn, is true if

    α + (β + γr) r ≤ r,    (8)

where we used (4). Now (8) can be rewritten as

    γ r² + (β − 1) r + α ≤ 0,    (9)

with equality for r = r₋ and r = r₊. Hence (9) is fulfilled for each r ∈ [r₋, r₊], and by (3), t(x) ∈ [g]([x]^0, x̃) ⊆ [x]^0 holds for any x ∈ [x]^0. Therefore, Brouwer's fixed point theorem guarantees that t has at least one fixed point x* ∈ [x]^0. Since [H] was assumed to be inclusion monotone, the iterates [x]^k decrease monotonically with respect to the semi-ordering '⊆'; hence they converge to some limit [x]^∞, and

    x* = t(x*) ∈ [g]([x]^k, x̃) = [x]^{k+1}

holds for k = 0, 1, ... by induction.

a) Choose r₁ ≤ r such that r₁ ∈ [r₋, r₊]. Then b) applies with [x̂]^0 := x̃ + [−r₁, r₁]e ⊆ [x]^0, yielding a fixed point x* ∈ [x̂]^0 ⊆ [x]^0 of t. By the intersection in the definition of [x]^{k+1} the iterates [x]^k decrease monotonically with respect to '⊆'; thus the proof of a) terminates analogously to that of b).

c) Since γ̂ ≥ γ we have Δ̂ ≤ Δ and

    r₋ = 2α/(1 − β + √Δ) ≤ r̂₋ ≤ r̂₊ ≤ r₊ = 2α/(1 − β − √Δ).

Therefore, r is contained in [r₋, r₊], and c) is proved by b) with the exception of the uniqueness of x* and of the degeneracy of [x]^∞ = [x*, x*]. In order to show d([x]^∞) = 0, apply d(·) to the equality [x]^∞ = [g]([x]^∞, x̃). Then by the subdistributivity of the interval arithmetic and by elementary rules for the diameter (cf. [5] for example) one obtains

    d([x]^∞) ≤ d( t'(x̃)([x]^∞ − x̃) + [H]([x]^∞, x̃)([x]^∞ − x̃) )
             = |t'(x̃)| d([x]^∞) + d( [H]([x]^∞, x̃)([x]^∞ − x̃) )
             ≤ |t'(x̃)| d([x]^∞) + d([H]([x]^∞, x̃)) |[x]^∞ − x̃| + |[H]([x]^∞, x̃)| d([x]^∞).

Let d := ‖ d([x]^∞) ‖ and apply ‖ · ‖ to this inequality in order to get

    d ≤ β d + κ r d + γ r d ≤ β d + 2 r γ̂ d.

If d > 0, we obtain 1 ≤ β + 2rγ̂, which yields the contradiction

    r ≥ (1 − β)/(2γ̂) = (r̂₋ + r̂₊)/2.

Therefore, d = 0, and x* ∈ [x]^∞ implies [x]^∞ = [x*, x*]. In particular, this proves uniqueness. ∎

Note that if x̃ is a sufficiently good approximation of a fixed point x* of t then α will be small and the assumption Δ ≥ 0 will certainly be fulfilled provided that β < 1.

In practical applications, one often chooses

    [H]([x], x̃) := ½ t''([x] ∪ x̃)([x] − x̃) ∈ IR^{n×n},    (10)

where

    t''(x) : R^n × R^n → R^n,  (y, z) ↦ t''(x)(y, z)    (11)

is the second derivative of t = (t_i) at x ∈ D. In (10) we assume that t''(x)(y) is defined by

    t''(x)(y) := ( y^T t''_1(x) ; ... ; y^T t''_n(x) ) ∈ R^{n×n}  for x ∈ D and y ∈ R^n

(the rows being stacked), and in (11) we define t''(x)(y, z) by

    t''(x)(y, z) := ( y^T t''_1(x) z, ..., y^T t''_n(x) z )^T ∈ R^n,

with

    t''_i(x) := ( ∂²t_i(x) / (∂x_l ∂x_k) ) ∈ R^{n×n}

being the Hessian associated with t_i(x). Note that k counts the rows while l counts the columns.

The reason behind the choice of [H] according to (10) is the Taylor expansion of t at x̃ ∈ [x], which we write in the form

    t(x) = t(x̃) + { t'(x̃) + R(x, x̃) } (x − x̃)

with the remainder term R(x, x̃)(x − x̃), where R(x, x̃) ∈ R^{n×n}. We remind that, according to [7], p. 284, and by applying the extended mean value theorem, the entries r_{ij}(x, x̃) of R(x, x̃) can be expressed as

    r_{ij}(x, x̃) = ∫₀¹ (x − x̃)^T ( ∂²t_i(x̃ + s(x − x̃)) / (∂x_j ∂x_k) )_{k=1,...,n} (1 − s) ds
                 = ½ (x − x̃)^T ( ∂²t_i(ξ^{(ijk)}) / (∂x_j ∂x_k) )_{k=1,...,n}

with ξ^{(ijk)} ∈ R^n between x and x̃ for i, j, k = 1, ..., n. Hence

    R(x, x̃) ∈ ½ t''([x] ∪ x̃)([x] − x̃) = [H]([x], x̃),

and t(x) ∈ [g]([x], x̃) holds for all x ∈ [x] and [H] from (10).

In addition, [H]([x], x̃) is continuous and inclusion monotone, since the function c([x]) := [x] ∪ x̃ has these properties and since [H]([x], x̃) can be interpreted as the interval arithmetic evaluation of the expression ½ (x − x̃)^T t''(c(x)); cf. [5] or [13] for details.

Assume now that a function f : D ⊆ R^n → R^n is given and that [x] ⊆ D. We are interested in the zeros x* ∈ [x] of f. To this end use the transformation

    t(x) := x − Cf(x),  C ∈ R^{n×n} nonsingular and independent of x.    (12)

Then the zeros of f are the fixed points of t and vice versa. With t'(x) = I − Cf'(x) and with [H] from (10) we get

    [g]([x], x̃) = x̃ − Cf(x̃) + { I − C ( f'(x̃) + ½ f''([x] ∪ x̃)([x] − x̃) ) } ([x] − x̃).    (13)

For x̃ ∈ [x] this is just the function k₃ in [5], p. 239, and k₇ in [14], p. 3. Therefore, it is not astonishing that Theorem 1 a) and b) reduce to similar results as in [14], § 4, and in [5], § 19. For a comparison take into account the factor C in (12). But note that we will also use [H] and [g] in Example 2 in a different meaning than in (10) and (13). This is caused by the possibility of representing the remainder term R(x, x̃)(x − x̃) in different ways, as is also shown in the following example.

Example 1  Let f(x) := ( x₁² − 2x₁x₂ + x₂² ; 0 ). Then

    f''(x)(y, z) = ( 2y₁(z₁ − z₂) + 2y₂(−z₁ + z₂) ; 0 )  for all x, y, z ∈ R²,

whence

    f''(x)(y) = [ 2y₁ − 2y₂ , −2y₁ + 2y₂ ; 0 , 0 ] ∈ R^{2×2}.    (14)

Let C := I, t(x) := x − f(x), and

    A(y) := [ 2y₁ , −4y₁ + 2y₂ ; 0 , 0 ] ∈ R^{2×2}.    (15)

Then apparently

    f''(x)(y, y) = ( 2y₁² − 4y₁y₂ + 2y₂² ; 0 ) = A(y)y

holds, although f''(x)(y) ≠ A(y) and therefore f''(x)(y, z) ≠ A(y)z in general. So we can represent t(x) by means of (14) as well as by means of (15). Choosing [H]([x], x̃) := −½ f''([x] ∪ x̃)([x] − x̃) and [H]([x], x̃) := −½ A([x] − x̃), respectively, yields two different admissible interval functions [g], where the second one differs from (13).
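The two claims in Example 1 are easy to confirm numerically; the following spot check with random vectors is an added illustration, not part of the original text.

import numpy as np

H1 = np.array([[2.0, -2.0], [-2.0, 2.0]])   # Hessian of f_1(x) = x1^2 - 2 x1 x2 + x2^2
H2 = np.zeros((2, 2))                       # Hessian of f_2 = 0

def fpp(y, z):
    # f''(x)(y, z) = (y^T H1 z, y^T H2 z)^T, independent of x
    return np.array([y @ H1 @ z, y @ H2 @ z])

def A_of(y):
    # the matrix A(y) from (15)
    return np.array([[2.0 * y[0], -4.0 * y[0] + 2.0 * y[1]], [0.0, 0.0]])

rng = np.random.default_rng(1)
y, z = rng.standard_normal(2), rng.standard_normal(2)
print(np.allclose(fpp(y, y), A_of(y) @ y))  # True: f''(x)(y, y) = A(y) y
print(np.allclose(fpp(y, z), A_of(y) @ z))  # in general False: f''(x)(y, z) != A(y) z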

If x̃ is a sufficiently good approximation of a zero of f, if f'(x̃)⁻¹ exists, and if C ≈ f'(x̃)⁻¹, then α ≈ 0, β ≈ 0, and r₋ ≈ 0, r₊ ≈ 1/γ for the quantities in Theorem 1. In particular, the assumptions β < 1 and Δ ≥ 0 of this theorem are fulfilled.

We want to apply now Theorem 1 with (2) and (12) to various problems of numerical analysis.

Example 2 (The generalized eigenproblem with a simple real eigenvalue)  Consider the generalized algebraic eigenproblem Av = λBv as in § 1. Let ṽ ∈ R^n be an approximation of an eigenvector which belongs to an algebraically simple eigenvalue λ*. Let λ̃ be an approximation of this eigenvalue and use f from (1), C ∈ R^{(n+1)×(n+1)} nonsingular, x̃ := (ṽ^T, λ̃)^T ∈ R^{n+1}, and [v] ∈ IR^n. In [15] the interval function

    [g]([x], x̃) := x̃ − Cf(x̃) + ( I_{n+1} − C [ A − λ̃B , −B[v] ; (e^{(i₀)})^T , 0 ] ) ([x] − x̃)    (16)

was applied in order to verify eigenpairs of the generalized eigenproblem. With t(x) = x − Cf(x) as in (12) one gets

    t'(x̃) = I_{n+1} − C [ A − λ̃B , −Bṽ ; (e^{(i₀)})^T , 0 ].

In [12] it was mentioned that for degenerate interval vectors [x] ≡ x the expression [g](x, x̃) from (16) is the complete Taylor expansion of t(x) at x̃, even if x̃ ∉ [x]. Therefore, t(x) ∈ [g]([x], x̃) holds trivially for all x ∈ [x]. Nevertheless, [g]([x], x̃) is not of the form (13), i.e., [H]([x], x̃) is not given by (10). In fact, in order to obtain the representation (2) we have to define

    [H]([x], x̃) := C [ O , B([v] − ṽ) ; 0^T , 0 ] ∈ IR^{(n+1)×(n+1)}.    (17)

One can recover this function if one expresses the last, i.e. third, Taylor summand ½ t''(x̃)(x − x̃, x − x̃) appropriately and if one evaluates this expression interval arithmetically.
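To make this recovery explicit (the following short calculation is an addition to the text, using only f from (1) and t(x) = x − Cf(x)): the only nonvanishing second order term of f stems from the product λBv, so with x − x̃ = ((v − ṽ)^T, λ − λ̃)^T one gets

    ½ t''(x̃)(x − x̃, x − x̃) = −C ( −(λ − λ̃) B(v − ṽ) ; 0 )
                             = C [ O , B(v − ṽ) ; 0^T , 0 ] (x − x̃)
                             ∈ C [ O , B([v] − ṽ) ; 0^T , 0 ] ([x] − x̃)  for all x ∈ [x],

which is exactly the enclosure [H]([x], x̃)([x] − x̃) with [H] from (17).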

We want to show that [H] fulfills (4) and (6). From (17) we get

    |[H]([x], x̃)| ≤ |C| [ O , |B| |[v] − ṽ| ; 0^T , 0 ]  and  d([H]([x], x̃)) = |C| [ O , |B| d([v]) ; 0^T , 0 ].

With γ := ‖ |C| [ |B| , 0 ; 0^T , 0 ] ‖ this implies

    ‖ |[H]([x], x̃)| ‖ ≤ max_i Σ_{k=1}^{n} Σ_{j=1}^{n} |c_{ik}| |b_{kj}| |[v]_j − ṽ_j| ≤ γ ‖ |[x] − x̃| ‖

and, analogously,

    ‖ d([H]([x], x̃)) ‖ ≤ γ ‖ d([x]) ‖.

Therefore, Theorem 1 applies with γ̂ := γ.

For practical applications there is also a modification of [g] which one gets by choosing C in the block form

    C = [ C₁ , c₂ ; c₃^T , c₄ ]    (18)

partitioned conformally with x = (v^T, λ)^T. Theorem 1 then reduces to a result in [4]. For details see [12].

Example 3 (The algebraic eigenproblem with a simple real eigenvalue)  Here we start again with f as in (1), where this time we choose B := I. The results in Example 2 remain true, of course, and are therefore omitted. They can be found in [9]–[12], where they have been derived in a different way. For the particular choice of C in (18) they are already contained in [1]. We assumed that the eigenvalue to be enclosed is an algebraically simple one. This is due to the fact that only in this case f'(x*)⁻¹ exists, where x* = ((v*)^T, λ*)^T is a corresponding eigenpair; cf. Theorem 2 in [12] for details.

Thus, for a sufficiently good approximation x̃ the inverse of f'(x̃) exists, and C ≈ f'(x̃)⁻¹ can be chosen as in the remark preceding Example 2.

Example 4 (Two-dimensional invariant subspaces)  In order to enclose double or nearly double eigenvalues, Alefeld and Spreuer verify in [6] a basis of a two-dimensional subspace of R^n which is invariant with respect to the linear mapping given by A ∈ R^{n×n}. To this end they start with the function

    f(x) := ( Au − m₁₁u − m₂₁v ; u_{i₁} − ε ; u_{i₂} − δ ; Av − m₁₂u − m₂₂v ; v_{i₁} − ζ ; v_{i₂} − η ) ∈ R^{2n+4},

where x = (u^T, m₁₁, m₂₁, v^T, m₁₂, m₂₂)^T ∈ R^{2n+4}, i₁ ≠ i₂ ∈ {1, ..., n} and εη − δζ ≠ 0. It is obvious that the vectors u*, v*, which are part of a zero x* = ((u*)^T, m*₁₁, m*₂₁, (v*)^T, m*₁₂, m*₂₂)^T of f, form a basis of such an invariant subspace. Note that u*, v* are unique within a fixed subspace because of the four normalization conditions which are hidden in f. Again we set t(x) := x − Cf(x) with a nonsingular matrix C ∈ R^{(2n+4)×(2n+4)}, and we choose x̃ = (ũ^T, m̃₁₁, m̃₂₁, ṽ^T, m̃₁₂, m̃₂₂)^T as an approximation of x*. With

    B := [ A − m̃₁₁I_n , −ũ , −ṽ , −m̃₂₁I_n , 0 , 0 ;
           (e^{(i₁)})^T , 0 , 0 , 0^T , 0 , 0 ;
           (e^{(i₂)})^T , 0 , 0 , 0^T , 0 , 0 ;
           −m̃₁₂I_n , 0 , 0 , A − m̃₂₂I_n , −ũ , −ṽ ;
           0^T , 0 , 0 , (e^{(i₁)})^T , 0 , 0 ;
           0^T , 0 , 0 , (e^{(i₂)})^T , 0 , 0 ] ∈ R^{(2n+4)×(2n+4)}

and

    [T] := [ O , [u] − ũ , [v] − ṽ , O , 0 , 0 ;
             0^T , 0 , 0 , 0^T , 0 , 0 ;
             0^T , 0 , 0 , 0^T , 0 , 0 ;
             O , 0 , 0 , O , [u] − ũ , [v] − ṽ ;
             0^T , 0 , 0 , 0^T , 0 , 0 ;
             0^T , 0 , 0 , 0^T , 0 , 0 ] ∈ IR^{(2n+4)×(2n+4)},

we define

    [g]([x], x̃) := x̃ − Cf(x̃) + { I_{2n+4} − C(B − [T]) } ([x] − x̃).

Then we again have t'(x̃) = I_{2n+4} − CB and t(x) ∈ [g]([x], x̃) for all x, x̃ ∈ R^{2n+4}. Note that [g](x, x̃) is again the Taylor expansion of t(x) at x̃; cf. [12], for example. Therefore, Theorem 1 applies with [H]([x], x̃) := C[T] and with

    γ̂ := γ := κ := 2 ‖ |C| (e^T, 0, 0, e^T, 0, 0)^T ‖,  e := (1, ..., 1)^T ∈ R^n.

The results coincide with those in [6] and [12].

Example 5 (The singular value problem)  Each rectangular real matrix A ∈ R^{m×n} can be represented as

    A = V Σ U^T  ⟺  AU = VΣ  ⟺  A^T V = UΣ^T

with orthogonal matrices U ∈ R^{n×n}, V ∈ R^{m×m} and with a rectangular diagonal matrix Σ ∈ R^{m×n}, where

    (Σ)_{ij} := { 0 for i ≠ j ; σ_i for i = j },  σ₁ ≥ σ₂ ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_{min{m,n}} = 0.

The product VΣU^T is called a singular value decomposition of A, and the positive values σ_i are called (non-trivial) singular values of A. Note that Σ is unique while U and V are not. The i-th singular value σ_i and the i-th columns u_i, v_i of U and V, so-called singular vectors, can be expressed as the zeros of the function

    f(x) := ( Au − σv ; A^T v − σu ; u^T u − 1 ),

where x := (u^T, v^T, σ)^T. If x* = ((u*)^T, (v*)^T, σ*)^T is a zero of f with σ* ≠ 0 then

    (v*)^T v* = (v*)^T A u* / σ* = (A^T v*)^T u* / σ* = (u*)^T u* = 1.

Let x̃ = (ũ^T, ṽ^T, σ̃)^T.
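As a quick plausibility check (an added sketch with randomly generated data), a singular triple (u_i, v_i, σ_i) returned by a floating point SVD routine is, up to rounding, a zero of this f:

import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = rng.standard_normal((m, n))

V, S, UT = np.linalg.svd(A)            # numpy returns V (m x m), the singular values S, and U^T (n x n)
i = 0
u, v, sigma = UT[i, :], V[:, i], S[i]  # i-th singular vectors and singular value

def f_svd(u, v, sigma):
    # f(x) = (A u - sigma v ; A^T v - sigma u ; u^T u - 1)
    return np.concatenate([A @ u - sigma * v, A.T @ v - sigma * u, [u @ u - 1.0]])

print(np.max(np.abs(f_svd(u, v, sigma))))   # close to zero up to rounding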

In [2] a slight modification of the interval function

    [g]([x], x̃) := x̃ − Cf(x̃) + { I_{m+n+1} − C ( B − [ O , O , [v] − ṽ ;
                                                        O , O , [u] − ũ ;
                                                        −([u] − ũ)^T , 0^T , 0 ] ) } ([x] − x̃)

was used, with

    B := [ A , −σ̃I_m , −ṽ ;
           −σ̃I_n , A^T , −ũ ;
           2ũ^T , 0^T , 0 ] ∈ R^{(m+n+1)×(m+n+1)},

in order to verify singular values and corresponding singular vectors u, v of A. It is an easy task to prove that [g](x, x̃) is again the complete Taylor expansion of t(x) := x − Cf(x) at x = x̃ with t'(x̃) = I − CB. As in Example 2 one easily checks that (4) and (6) hold for

    [H]([x], x̃) := C [ O , O , [v] − ṽ ;
                        O , O , [u] − ũ ;
                        −([u] − ũ)^T , 0^T , 0 ]

with

    γ := κ := ‖ |C| (1, ..., 1, n)^T ‖.

Therefore, Theorem 1 applies with γ̂ = γ. Cf. also [12].

Example 6 (Quadratic systems)  In this example we consider systems of equations of the form

    t(x) := b + Ax + T(x, x) = x

with b, x ∈ R^n, A ∈ R^{n×n} and with

    T(x, y) := ( Σ_{j=1}^{n} Σ_{k=1}^{n} t_{ijk} x_k y_j )_{i=1,...,n}.

Note that t(0) = b, t'(0) = A and t''(x)(y, z) = 2T(y, z). Let

    [H]([x], 0) := T([x]) := ( Σ_{k=1}^{n} t_{ijk} [x]_k )_{i,j=1,...,n} ∈ IR^{n×n},
    [g]([x], 0) := b + ( A + T([x]) ) [x].

In particular, x̃ = 0 in this example. One easily sees that (4) and (6) hold with

    γ := κ := ‖ ( Σ_{k=1}^{n} |t_{ijk}| )_{i,j=1,...,n} ‖ = max_{1≤i≤n} Σ_{j=1}^{n} Σ_{k=1}^{n} |t_{ijk}|.    (19)

Therefore, Theorem 1 applies with α := ‖b‖, β := ‖A‖, with γ, κ as in (19) and with γ̂ = γ. Its results coincide with those in [3] and [12].
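A scalar instance (n = 1, with toy coefficients chosen for this added illustration, not taken from the paper) shows the quantities from (19) and the resulting iteration at work:

from math import sqrt

b, a, c = 0.2, 0.1, 0.3                    # t(x) = b + a*x + c*x^2, so alpha = |b|, beta = |a|, gamma = kappa = |c|
alpha, beta, gamma = abs(b), abs(a), abs(c)
Delta = (1.0 - beta) ** 2 - 4.0 * alpha * gamma
r_lo = (1.0 - beta - sqrt(Delta)) / (2.0 * gamma)
r_hi = (1.0 - beta + sqrt(Delta)) / (2.0 * gamma)
print(r_lo, (r_lo + r_hi) / 2.0)           # part c) applies for every radius r in [r_lo, (r_lo + r_hi)/2) = [0.2417..., 1.5)

def i_mul(p, q):
    v = [p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1]]
    return (min(v), max(v))

ix = (-0.5, 0.5)                           # [x]^0 = 0 + [-r, r] with r = 0.5
for _ in range(20):
    s = (a + c * ix[0], a + c * ix[1]) if c >= 0.0 else (a + c * ix[1], a + c * ix[0])   # a + c*[x]^k
    p = i_mul(s, ix)
    ix = (b + p[0], b + p[1])              # [x]^{k+1} = [g]([x]^k, 0) = b + (a + c*[x]^k)*[x]^k
print(ix)                                  # contracts to the unique fixed point x* = 0.24169... of t in [x]^0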

References

[1] Alefeld, G.: Berechenbare Fehlerschranken für ein Eigenpaar unter Einschluß von Rundungsfehlern bei Verwendung des genauen Skalarprodukts. Z. Angew. Math. Mech. 67, 145–152 (1987)

[2] Alefeld, G.: Rigorous error bounds for singular values of a matrix using the precise scalar product. In: Kaucher, E., Kulisch, U., Ullrich, Ch. (eds.): Computerarithmetic, pp. 9–30. Stuttgart 1987

[3] Alefeld, G.: Errorbounds for quadratic systems of nonlinear equations using the precise scalar product. In: Kulisch, U., Stetter, H. J. (eds.): Scientific Computation with Automatic Result Verification. Computing, Suppl. 6, 59–68 (1988)

[4] Alefeld, G.: Berechenbare Fehlerschranken für ein Eigenpaar beim verallgemeinerten Eigenwertproblem. Z. Angew. Math. Mech. 68, 181–184 (1988)

[5] Alefeld, G. and Herzberger, J.: Introduction to Interval Computations. New York 1983

[6] Alefeld, G. and Spreuer, H.: Iterative improvement of componentwise errorbounds for invariant subspaces belonging to a double or nearly double eigenvalue. Computing 36, 321–334 (1986)

[7] Heuser, H.: Lehrbuch der Analysis. Teil 2. Stuttgart 1981

[8] Hoffmann, R.: Konstruktion von Fehlerschranken bei der verallgemeinerten Singulärwertzerlegung und ihre iterative Verbesserung. Thesis, Universität Karlsruhe 1993

[9] Mayer, G.: Enclosures for eigenvalues and eigenvectors. In: Atanassova, L., Herzberger, J. (eds.): Computer Arithmetic and Enclosure Methods, pp. 49–68. Amsterdam 1992

[10] Mayer, G.: Taylor-Verfahren für das algebraische Eigenwertproblem. Z. Angew. Math. Mech. 73, T857–T860 (1993)

[11] Mayer, G.: A unified approach to enclosure methods for eigenpairs. Z. Angew. Math. Mech. 74, 115–128 (1994)

[12] Mayer, G.: Result verification for eigenvectors and eigenvalues. In: Herzberger, J. (ed.): Topics in Validated Computations. Studies in Computational Mathematics 5, pp. 209–276. Amsterdam 1994

[13] Neumaier, A.: Interval Methods for Systems of Equations. Cambridge 1990

[14] Platzöder, L.: Einige Beiträge über die Existenz von Lösungen nichtlinearer Gleichungssysteme und Verfahren zu ihrer Berechnung. Thesis, Berlin 1981

[15] Rump, S. M.: Guaranteed inclusions for the complex generalized eigenproblem. Computing 42, 225–238 (1989)

received: September 1995

Author:
Prof. Dr. G. Mayer
Universität Rostock
Fachbereich Mathematik
Universitätsplatz 1
18051 Rostock
Germany
guenter.mayer@mathematik.uni-rostock.de
