A relaxation of the strangeness index


Technical report from Automatic Control at Linköpings universitet

A relaxation of the strangeness index

Henrik Tidefelt, Torkel Glad
Division of Automatic Control
E-mail: tidefelt@isy.liu.se, torkel@isy.liu.se

22nd February 2010

Report no.: LiTH-ISY-R-2932
Address: Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden
WWW:

AUTOMATIC CONTROL
REGLERTEKNIK
LINKÖPINGS UNIVERSITET

Technical reports from the Automatic Control group in Linköping are available from

Abstract

A new index closely related to the strangeness index of a differential-algebraic equation is presented. Basic properties of the strangeness index are shown to be valid also for the new index. The definition of the new index is conceptually simpler than that of the strangeness index, hence making it potentially better suited for both practical applications and theoretical developments.

Keywords: Differential-algebraic equations, strangeness index

1 Introduction

Kunkel and Mehrmann have developed a theory for analysis and numerical solution of differential-algebraic equations. The theory centers around the strangeness index, which differs from the differentiation index in that it does not consider the derivatives of the solution to be independent of the solution itself at each time instant. Instead, it takes the tangent space of the manifold of solutions into account, thereby reducing the number of dimensions in which the derivative has to be determined. The book Kunkel and Mehrmann (2006) covers the theory well and will be the predominant reference used in the paper. The numerical solution procedure applies to general nonlinear differential-algebraic equations of higher indices, and is currently the only one we know of that can handle such problems, although it does not provide a sensitivity analysis. Our interest in this matter is mostly due to this capability. The present paper presents highlights from the corresponding chapter in the first author's thesis, Tidefelt (2009).

2 Two definitions

In this section, two index definitions will be presented along with some basic properties of each. The one to be presented first is the strangeness index, found in Kunkel and Mehrmann (2006). The second, which is proposed as an alternative, is called the simplified strangeness index. Both are based on the derivative array equations.

2.1 Derivative array equations and the strangeness index

As always when working with dae, it is crucial to be aware that the solutions are restricted to a manifold. In practice, one is interested in obtaining equations describing that manifold, and the way this is done in the present paper is by using the derivative array introduced in Campbell (1993). Consider the dae

$$f\bigl( x(t), x'(t), t \bigr) \overset{!}{=} 0 \tag{1}$$

Assuming sufficient differentiability of $f$ and of $x$, the idea is that the original dae is completed with derivatives of the equations with respect to time. This will introduce higher order derivatives of the solution, but the key idea is that, given values of $x(t)$, it suffices to be able to determine $x'(t)$ in order to compute a numerical solution to the equations. That is, higher order derivatives such as $x''(t)$ may appear in the equations, but are not necessary to determine. If the completion procedure is continued until the derivative array equations are one-full with respect to $x'(t)$, the procedure has revealed the differentiation index of the dae. The meaning of one-full is defined in terms of the equation considered pointwise in time, so that a variable and its derivative become independent variables. We emphasize the independence by using the variable $\dot{x}(t)$ instead of $x'(t)$, where the dot is just an ornament, while the prime is an operator. The equations are then said to be one-full if they determine $\dot{x}(t)$ uniquely within some open ball, given $x(t)$ and $t$. An equivalent characterization can be made in terms of the Jacobian of the derivative array with respect to its differentiated variables; then the equations are one-full if and only if row operations can bring the Jacobian into block diagonal form, with a non-singular block in the block column corresponding to derivatives with respect to $\dot{x}(t)$ (clearly, this shows that it is possible to solve for $\dot{x}(t)$ without knowing the variables corresponding to higher order derivatives of $x$ at time $t$).
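To make the completion procedure and the one-fullness test concrete, the following sketch builds the derivative array symbolically for a small index-2 example (not taken from the report) and applies the rank characterization of one-fullness just described: the array determines $\dot{x}(t)$ uniquely, given $x(t)$ and $t$, exactly when appending rows that pick out the $\dot{x}$-columns does not increase the rank of the Jacobian with respect to all differentiated variables. SymPy is assumed; the example system and all names are illustrative only.

```python
# Minimal sketch (illustrative, not from the report): derivative array and
# one-fullness for the index-2 DAE  x1' - x2 = 0,  x1 - sin(t) = 0.
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)
f = sp.Matrix([x1.diff(t) - x2, x1 - sp.sin(t)])   # original residual f(x, x', t)

def derivative_array(nu):
    """Stack f together with its first nu total time derivatives."""
    return sp.Matrix.vstack(*([f] + [f.diff(t, k) for k in range(1, nu + 1)]))

def one_full(nu):
    """Rank test: the array determines xdot (given x and t) iff appending rows
    that select the xdot-columns does not increase the rank of the Jacobian
    taken with respect to all dotted (differentiated) variables."""
    F = derivative_array(nu)
    dotted = [xi.diff(t, k) for xi in (x1, x2) for k in range(1, nu + 2)]
    J = sp.Matrix([[Fi.diff(d) for d in dotted] for Fi in F])
    sel = sp.zeros(2, len(dotted))
    sel[0, 0] = 1           # column of x1'
    sel[1, nu + 1] = 1      # column of x2'
    return sp.Matrix.vstack(sel, J).rank() == J.rank()

print([one_full(nu) for nu in range(3)])   # [False, False, True]: one-full at nu = 2,
                                           # i.e. the differentiation index is 2
```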
However, instead of requiring that the completed equations be one-full, it turns out that there are good reasons for using the weaker requirement that the equations display the strangeness index instead. The definition of the strangeness index will soon be considered in detail. It turns out that equations displaying the strangeness index determine $x'(t)$ uniquely if one takes into account the connection between $x(t)$ and $x'(t)$ imposed by the non-differential constraints which locally describe the solution manifold. Strangeness-free equations (strangeness index 0) are suitable for numerical integration (Kunkel and Mehrmann, 1996).

In the sequel, it will be convenient to speak of properties which hold on non-empty open balls inside the set

$$\mathbb{L}_{\nu_S} = \Bigl\{ \bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr) : F_{\nu_S}\bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr) \overset{!}{=} 0 \Bigr\} \tag{2}$$
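As a small aside to the remark above that strangeness-free equations are suitable for numerical integration, the sketch below applies a single implicit Euler step directly to a strangeness-free dae. The example system and the use of NumPy/SciPy are assumptions made here for illustration; the report itself does not contain this computation.

```python
# Sketch: one implicit-Euler step on the strangeness-free DAE  x1' = x2,  x1 + x2 = 1,
# solving f(x_new, (x_new - x_old)/h, t_new) = 0 for x_new (illustrative only).
import numpy as np
from scipy.optimize import fsolve

def f(x, xdot, t):
    return np.array([xdot[0] - x[1], x[0] + x[1] - 1.0])

h, t0 = 0.1, 0.0
x_old = np.array([0.0, 1.0])                         # consistent initial value
x_new = fsolve(lambda x: f(x, (x - x_old) / h, t0 + h), x_old)
print(x_new, x_new.sum())                            # stays on the constraint x1 + x2 = 1
```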

2.1 Definition (Strangeness index). The strangeness index $\nu_S$ at $( t_0, x_0 )$ is defined as the smallest number (or $\infty$ if no such number exists) such that the derivative array equations¹

$$F_{\nu_S}\bigl( t, x(t), x'(t), x^{(2)}(t), \ldots, x^{(\nu_S+1)}(t) \bigr) \overset{!}{=} 0 \tag{3}$$

satisfy the following properties on

$$\mathbb{L}_{\nu_S} \cap \underbrace{\bigl\{ \bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr) : t \in B_{t_0}(\delta),\ x \in B_{x_0}(\delta) \bigr\}}_{=: \mathbb{L}_{\nu_S}^{b\delta}}$$

for some $\delta > 0$.

P1a [2.1] There shall exist a constant number $n_a$ such that the rank of

$$M_{\nu_S}\bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr) = \begin{bmatrix} \dfrac{\partial F_{\nu_S}}{\partial \dot{x}} & \cdots & \dfrac{\partial F_{\nu_S}}{\partial \dot{x}^{(\nu_S+1)}} \end{bmatrix}$$

is pointwise equal to $( \nu_S + 1 )\, n_x - n_a$, and there shall exist a smooth matrix-valued function $Z_2$ with $n_a$ pointwise linearly independent columns such that $Z_2^T M_{\nu_S} = 0$.

P1b [2.1] Let $n_d = n_x - n_a$, and let $A_{\nu_S} = Z_2^T N_{\nu_S}$ where

$$N_{\nu_S}\bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr) = \frac{\partial F_{\nu_S}\bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_S+1)} \bigr)}{\partial x}$$

Then the rank of $A_{\nu_S}$ shall equal $n_a$, and there shall exist a smooth matrix-valued function $X$ with $n_d$ pointwise linearly independent columns such that $A_{\nu_S} X = 0$.

P1c [2.1] The rank of $\partial_2 f\, X$ shall be full, and there shall exist a smooth matrix-valued function $Z_1$ with $n_d$ pointwise linearly independent columns such that $Z_1^T\, \partial_2 f\, X$ is non-singular.

Assume that the strangeness index is finite. Then the dimension of the solution manifold is $n_d = n_x - n_a$. P1b [2.1] then states that it is possible to construct a local coordinate map $x(t) = \varphi_1\bigl( x_d(t), t \bigr)$ with coordinates $x_d$, determined by the partial differential equation

$$\partial_1 \varphi_1\bigl( x_d(t), t \bigr) \overset{!}{=} X\bigl( x(t), t \bigr) \tag{4}$$

where the columns of $X$ are smooth functions and pointwise linearly independent. The columns of $X$ can be selected as an orthonormal basis for the right null space of the matrix $A_{\nu_S}$, and the local coordinates $x_d$ are denoted the dynamic variables.

The last property, P1c [2.1], is finally there to ensure that the time derivatives of the local coordinates on the solution manifold are determined by the original equation (1). Replacing (1) by an equation with residual expressed only through the dynamic variables,

$$f_d\bigl( x_d, \dot{x}_d, t \bigr) = f\bigl( \varphi_1( x_d, t ),\ \partial_1 \varphi_1( x_d, t )\, \dot{x}_d + \partial_2 \varphi_1( x_d, t ),\ t \bigr) \tag{5}$$

property P1c [2.1] states that the Jacobian with respect to $\dot{x}_d$,

$$\partial_2 f_d\bigl( x_d, \dot{x}_d, t \bigr) = \partial_2 f\bigl( \varphi_1( x_d, t ),\ \partial_1 \varphi_1( x_d, t )\, \dot{x}_d + \partial_2 \varphi_1( x_d, t ),\ t \bigr)\, X \tag{6}$$

is full-rank. Since there are only $n_d$ derivatives to be determined, and there are $n_x$ equations, there are $n_a$ more equations than unknowns. The property P1c [2.1] also states that $n_d$ linear combinations, given by the columns of $Z_1$, of the equations in (1) can be chosen smoothly and linearly independent (and hence orthonormal), so that these linear combinations are sufficient to determine the time derivatives of the dynamic variables.

We now end this section with a lemma that will be useful later.

2.2 Lemma. If it is known that $\nu_S \geq \hat{\nu}$, and the matrix $\begin{bmatrix} N_{\hat{\nu}} & M_{\hat{\nu}} \end{bmatrix}$ does not have full row rank, then $\nu_S = \infty$.

Proof: Let $i \geq \hat{\nu}$. The upper part of $\begin{bmatrix} N_i & M_i \end{bmatrix}$ equals $\begin{bmatrix} N_{\hat{\nu}} & M_{\hat{\nu}} & 0 \end{bmatrix}$, so $\begin{bmatrix} N_i & M_i \end{bmatrix}$ cannot have full row rank. It follows that P1b [2.1] cannot be satisfied for $\nu_S = i$.

¹ The notation is defined such that $x^{(1)} = x'$.
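Pointwise, the three properties above can be checked with standard numerical linear algebra. The sketch below does this for a strangeness-free example (the semi-explicit dae $x_1' = x_2$, $x_1 + x_2 = 1$, so that $F_0 = f$ suffices); it is an illustration under these assumptions, using NumPy, and not a computation taken from the report.

```python
# Pointwise check of P1a-P1c for F_0 = f with f(x, xdot, t) = (xdot1 - x2, x1 + x2 - 1),
# evaluated at an arbitrary point (the Jacobians are constant here). Illustrative only.
import numpy as np

n_x = 2
M = np.array([[1.0, 0.0],          # M_0 = dF_0/d(xdot)
              [0.0, 0.0]])
N = np.array([[0.0, -1.0],         # N_0 = dF_0/dx
              [1.0,  1.0]])
f_xdot = M                         # for nu_S = 0 the block d2 f coincides with M_0

# P1a: n_a from rank M, and Z2 spanning the left null space of M (via the SVD).
r = np.linalg.matrix_rank(M)
n_a = M.shape[0] - r
U, _, _ = np.linalg.svd(M)
Z2 = U[:, r:]

# P1b: A = Z2^T N has rank n_a; X spans the right null space of A.
A = Z2.T @ N
_, _, Vt = np.linalg.svd(A)
X = Vt[np.linalg.matrix_rank(A):, :].T

# P1c: d2 f X has full column rank n_d = n_x - n_a, so a suitable Z1 exists.
Z1, _ = np.linalg.qr(f_xdot @ X)
print(np.linalg.matrix_rank(A) == n_a,
      np.linalg.matrix_rank(f_xdot @ X) == n_x - n_a,
      abs(np.linalg.det(Z1.T @ f_xdot @ X)) > 1e-12)   # True True True -> nu_S = 0
```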

2.2 The simplified strangeness index

By discretizing the derivatives (using a bdf method) in the original equation (1) (and scaling the equations by the step length), we get that the gradient of these equations with respect to $x$ tends to $\partial_2 f( x, \dot{x}, t )$ as the step length tends to zero. Hence, joining these equations with the full derivative array equations (where no derivatives are discretized) yields a set of equations which (locally) shall determine $x$ uniquely. This leads to the following definition.

2.3 Definition (Simplified strangeness index). The simplified strangeness index $\nu_q$ at $( t_0, x_0 )$ is defined as the smallest number (or $\infty$ if no such number exists) such that the derivative array equations

$$F_{\nu_q}\bigl( t, x(t), x'(t), x^{(2)}(t), \ldots, x^{(\nu_q+1)}(t) \bigr) \overset{!}{=} 0 \tag{7}$$

satisfy the following property on

$$\mathbb{L}_{\nu_q} \cap \underbrace{\bigl\{ \bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_q+1)} \bigr) : t \in B_{t_0}(\delta),\ x \in B_{x_0}(\delta) \bigr\}}_{=: \mathbb{L}_{\nu_q}^{b\delta}}$$

for some $\delta > 0$.

P2 [2.3] Let

$$H_{\nu_q} = \begin{bmatrix} \dfrac{\partial f( x, \dot{x}, t )}{\partial \dot{x}} & 0 \\[4pt] \dfrac{\partial F_{\nu_q}}{\partial x} & \dfrac{\partial F_{\nu_q}}{\partial \dot{x}} \cdots \dfrac{\partial F_{\nu_q}}{\partial \dot{x}^{(\nu_q+1)}} \end{bmatrix} = \begin{bmatrix} \partial_2 f & 0 \\ N_{\nu_q} & M_{\nu_q} \end{bmatrix}$$

where $N_{\nu_q}$ and $M_{\nu_q}$ are defined as in definition 2.1. Then it shall hold that

$$\operatorname{rank} \begin{bmatrix} I & 0 \\ \partial_2 f & 0 \\ N_{\nu_q} & M_{\nu_q} \end{bmatrix} = \operatorname{rank} \begin{bmatrix} \partial_2 f & 0 \\ N_{\nu_q} & M_{\nu_q} \end{bmatrix}$$

That is, the basis vectors corresponding to $x$ shall be in the span of the rows of $H_{\nu_q}$, which may be recognized as the property of $H_{\nu_q}$ being one-full.

The property P2 [2.3] can be interpreted as saying that there is no freedom in the $x$ components of the solution to the bdf discretization of (1) (scaled by the step length $h$) joined with the derivative array equations $F_{\nu_q}\bigl( t, x, \dot{x}, \ldots, \dot{x}^{(\nu_q+1)} \bigr) \overset{!}{=} 0$, since adding additional equations for the $x$ variables alone does not decrease the solution space of the linearized equations. For theoretic considerations, however, the continuous-time interpretation provided by lemma 4.4 below is more relevant.

Of course, we must show what the simplified strangeness index is for the inevitable pendulum.

Example 1. Let us consider the following familiar model of a pendulum.

$$f\!\left( \begin{pmatrix} \xi \\ u \\ y \\ v \\ \lambda \end{pmatrix}, \begin{pmatrix} \dot{\xi} \\ \dot{u} \\ \dot{y} \\ \dot{v} \\ \dot{\lambda} \end{pmatrix}, t \right) = \begin{pmatrix} \lambda \xi - \dot{u} \\ \lambda y - g - \dot{v} \\ \xi^2 + y^2 - 1 \\ \dot{\xi} - u \\ \dot{y} - v \end{pmatrix}$$

We consider initial conditions where the pendulum is in motion and neither $\xi$ nor $y$ is zero. To check P2 [2.3] we look at the projection of a basis for the right null space of $H_i$ onto the space spanned by the basis vectors corresponding to $x$, for $i = 0, 1, \ldots, \nu_q$. (The projection is implemented by just keeping the five first entries of the vectors.)
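Such a check can be scripted directly from the definition. The sketch below is a possible SymPy analogue of the Mathematica computation referred to next, assuming the block form $H_i = \begin{bmatrix} \partial_2 f & 0 \\ N_i & M_i \end{bmatrix}$ read off from P2 [2.3] and the pendulum residual as written above; the generic symbolic null space it produces is subject to the same neighbourhood caveat as noted below.

```python
# Sketch (SymPy assumed; an analogue of the report's Mathematica computation):
# build H_i for the pendulum and keep the first five entries of each null-space
# basis vector, i.e. the projection onto the undotted variables.
import sympy as sp

t, g = sp.symbols('t g')
xi, u, y, v, lam = [sp.Function(n)(t) for n in ('xi', 'u', 'y', 'v', 'lam')]
states = [xi, u, y, v, lam]

f = sp.Matrix([lam*xi - u.diff(t),
               lam*y - g - v.diff(t),
               xi**2 + y**2 - 1,
               xi.diff(t) - u,
               y.diff(t) - v])

def projected_nullspace(i):
    F = sp.Matrix.vstack(*([f] + [f.diff(t, k) for k in range(1, i + 1)]))
    dotted = [s.diff(t, k) for s in states for k in range(1, i + 2)]
    N = sp.Matrix([[Fr.diff(s) for s in states] for Fr in F])
    M = sp.Matrix([[Fr.diff(d) for d in dotted] for Fr in F])
    d2f = sp.Matrix([[fr.diff(s.diff(t)) for s in states] for fr in f])
    H = sp.Matrix.vstack(sp.Matrix.hstack(d2f, sp.zeros(5, len(dotted))),
                         sp.Matrix.hstack(N, M))
    return [vec[:5, :] for vec in H.nullspace()]

for i in range(3):
    print(i, projected_nullspace(i))
# The report finds that the lambda component remains undetermined for i = 0 and
# i = 1, while all five x-components are pinned down for i = 2, giving nu_q = 2.
```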

The basis for the null space is computed using Mathematica. Assuming that the symbolic null space computations are valid in some neighborhood of the initial conditions inside $\mathbb{L}_i$, it is seen from the projected basis vectors that the $\lambda$ component is undetermined for $i = 0$ and $i = 1$, and as all components are determined for $i = 2$ we get $\nu_q = 2$.

3 Relations

In this section, the two indices $\nu_S$ and $\nu_q$ will be shown to be closely related. This is done by means of a matrix decomposition developed for this purpose. We first show the matrix decomposition, and then interpret the two definitions in terms of this decomposition.

3.1 Lemma. The matrix $\begin{bmatrix} N & M \end{bmatrix}$, where $N \in \mathbb{R}^{k \times l}$, $M \in \mathbb{R}^{k \times k}$, $\operatorname{rank} M = k - a$, $a \geq 1$, can be decomposed as

$$\begin{bmatrix} N & M \end{bmatrix} = \begin{bmatrix} Q_{1,1} & Q_{1,2} \end{bmatrix} \begin{bmatrix} 0 & 0 & \Sigma & 0 \\ A & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} Q_{3,1}^{T} & 0 \\ Q_{3,2}^{T} & 0 \\ \Sigma^{-1} Q_{1,1}^{T} N & Q_{2,1}^{T} \\ 0 & Q_{2,2}^{T} \end{bmatrix}$$

In this decomposition, the left matrix is unitary, as are the diagonal blocks $\begin{bmatrix} Q_{3,1} & Q_{3,2} \end{bmatrix}^{T}$ and $\begin{bmatrix} Q_{2,1} & Q_{2,2} \end{bmatrix}^{T}$ of the right matrix. The matrix $\Sigma$ is a diagonal matrix of the non-zero singular values of $M$. The matrix $A$ is square.

3.2 Theorem. Definition 2.1 and definition 2.3 satisfy the relation $\nu_S \geq \nu_q$.

Proof: Suppose that the strangeness index is $\nu_S$ and finite, as the infinite case is trivial. Let the matrices $N$ and $M$ in lemma 3.1 correspond to $N_{\nu_S}$ and $M_{\nu_S}$ as in definition 2.1.

First, let us consider $\nu_S$ in view of this decomposition. The left null space of $M$ is spanned by $Q_{1,2}$, and making these linear combinations of $N$ results in

$$Q_{1,2}^{T} N = \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q_{3,1} & Q_{3,2} \end{bmatrix}^{T}$$

where $A$ has full rank due to P1b [2.1]. This matrix determines the tangent space of the non-differential constraints as being its null space, spanned by the independent columns of $Q_{3,2}$. Hence, we can parameterize $x$ as $x = Q_{3,2}\, x_d$.

Turning to $\nu_q$, we follow the constructive interpretation of P2 [2.3] in section 2.2. The right null space of $\begin{bmatrix} N & M \end{bmatrix}$ is spanned by the second and fourth rows of the right factor in the decomposition;

$$\begin{bmatrix} N & M \end{bmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \overset{!}{=} 0 \quad\Longleftrightarrow\quad \exists\, z_1, z_2 : \begin{pmatrix} x \\ y \end{pmatrix} = \begin{bmatrix} Q_{3,2} & 0 \\ 0 & Q_{2,2} \end{bmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \tag{8}$$

Extracting the part of this equation which only involves $x$, we find that it can be parameterized in $z_1$ alone, and since the columns of $Q_{3,2}$ are independent, we can use $z_1$ as dynamic variables; $x = Q_{3,2}\, x_d$.

Since the strangeness index is $\nu_S$, $\partial_2 f\, Q_{3,2}$ has full column rank according to P1c [2.1]. Hence,

$$\begin{bmatrix} \partial_2 f & 0 \\ N & M \end{bmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \overset{!}{=} 0 \quad\Longleftrightarrow\quad \exists\, z_2 : \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ Q_{2,2}\, z_2 \end{pmatrix} \tag{9}$$

which is exactly the condition captured by P2 [2.3]. Since $\nu_q$ is the smallest index such that this condition is satisfied, it is no greater than $\nu_S$.

3.3 Theorem. Definition 2.1 and definition 2.3 satisfy the relation $\nu_S \in \{ \nu_q, \infty \}$, with $\nu_S = \nu_q$ if and only if $\nu_q = \infty$ or the following property holds.

P1 [3.3] The matrix $\begin{bmatrix} N_{\nu_q} & M_{\nu_q} \end{bmatrix}$ has full row rank on the set $\mathbb{L}_{\nu_q}^{b\delta}$ in definition 2.3. That is,

$$\operatorname{rank} \begin{bmatrix} N_{\nu_q} & M_{\nu_q} \end{bmatrix} = ( \nu_q + 1 )\, n_x \tag{10}$$

Proof: In view of theorem 3.2, the statement is trivial in the case $\nu_q = \infty$. Hence, consider $\nu_q < \infty$, in which case it shall be shown that $\nu_S = \nu_q$ when P1 [3.3] holds, and $\nu_S = \infty$ otherwise. The latter case follows immediately from lemma 2.2, so it remains to consider the case when P1 [3.3] holds. Due to theorem 3.2 it suffices to show $\nu_S \leq \nu_q$.

Let the matrices $N$ and $M$ in lemma 3.1 correspond to $N_{\nu_q}$ and $M_{\nu_q}$ as in definition 2.3. The rank condition (10) implies that $A$ in lemma 3.1 is non-singular. Consider (8) and (9). Since adding the equation $\partial_2 f\, x \overset{!}{=} 0$ is sufficient to conclude $x = 0$ given $x = Q_{3,2}\, z_1$, it is seen that $\partial_2 f\, Q_{3,2}\, z_1 = 0$ must imply $z_1 = 0$. This is only true if $\partial_2 f\, Q_{3,2}$ has full column rank, which shows that P1c [2.1] holds. Since $\nu_S$ is the smallest index such that this condition is satisfied, it is no greater than $\nu_q$.

4 Uniqueness and existence of solutions

The present section gives a result corresponding to what Kunkel and Mehrmann (2006, theorem 4.13) states for the strangeness index. As the difference between the two index definitions is basically a matter of whether P1 [3.3] is required or not, the main ideas in Kunkel and Mehrmann (2006) apply here as well.

4.1 Lemma. If the simplified strangeness index $\nu_q$ is finite, there exist matrix functions $Z_1$, $Z_2$, $X$, similar to those in definition 2.1. They are all smooth with pointwise linearly independent columns, satisfying

$$Z_2^T M_{\nu_q} = 0 \quad\text{and the columns of } Z_2 \text{ span the left null space of } M_{\nu_q} \tag{11a}$$

$$Z_2^T N_{\nu_q} X = 0 \quad\text{and the columns of } X \text{ span the right null space of } Z_2^T N_{\nu_q} \tag{11b}$$

$$Z_1^T\, \partial_2 f\, X \ \text{ is non-singular} \tag{11c}$$

Proof: Using the decomposition of lemma 3.1, we may take $Z_2 = Q_{1,2}$ and $X = Q_{3,2}$. As in the proof of theorem 3.3, (8) and (9) then imply that $\partial_2 f\, X$ has full column rank, and the existence of $Z_1$ follows.

Multiplying the relations in (11) by smooth pointwise non-singular matrix functions shows that the matrix functions $Z_1$, $Z_2$, $X$ are not unique, but they can be replaced by any smooth matrices with columns spanning the same linear spaces. For numerical purposes, the smooth Gram-Schmidt orthonormalization procedure may be used to obtain matrices with good numerical properties, while the theoretical argument of the present section benefits from another choice, to be derived next.

Select the non-singular constant matrix $P = \begin{bmatrix} P_d & P_a \end{bmatrix}$ such that $Z_2^T N_{\nu_q} P_a$ is non-singular in a neighborhood of the initial conditions, and make a change of the un-dotted variables in $\mathbb{L}_{\nu_q}$ according to

$$x = \begin{bmatrix} P_d & P_a \end{bmatrix} \begin{pmatrix} x_d \\ x_a \end{pmatrix} \tag{12}$$
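One concrete way to carry out this selection pointwise is column-pivoted QR on $Z_2^T N_{\nu_q}$: the pivoting picks $n_a$ linearly independent columns, and $P$ is the corresponding permutation of the identity. The following sketch assumes NumPy/SciPy and an illustrative $1 \times 3$ matrix; it is not a construction taken from the report.

```python
# Sketch: choose P = [P_d  P_a] so that (Z2^T N) P_a is non-singular, using
# column-pivoted QR to select n_a independent columns (illustrative values).
import numpy as np
from scipy.linalg import qr

A_hat = np.array([[1.0, 1.0, 0.0]])     # stands in for Z2^T N_nuq at some point
n_a, n_x = A_hat.shape

_, _, piv = qr(A_hat, pivoting=True)    # piv lists columns in decreasing pivot order
order = np.concatenate([piv[n_a:], piv[:n_a]])   # dynamic columns first, then algebraic
P = np.eye(n_x)[:, order]
P_d, P_a = P[:, :n_x - n_a], P[:, n_x - n_a:]
print(np.linalg.matrix_rank(A_hat @ P_a) == n_a)   # True: the change of variables (12) is valid
```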

The following notation will turn out to be convenient later (note that $N^a_{\nu_q}$ is non-singular)

$$N^d_{\nu_q} = Z_2^T N_{\nu_q} P_d \qquad N^a_{\nu_q} = Z_2^T N_{\nu_q} P_a \tag{13}$$

The next result corresponds to Kunkel and Mehrmann (2006, corollary 4.10) for the strangeness index.

4.2 Lemma. There exists a smooth function $R$ such that

$$x_a = R( x_d, t ) \tag{14}$$

inside $\mathbb{L}_{\nu_q}$, in a neighborhood of the initial conditions.

Proof: In $\mathbb{L}_{\nu_q}$ it holds that $F_{\nu_q} = 0$ and $Z_2^T M_{\nu_q} = 0$, and it follows that

$$\frac{\partial \bigl( Z_2^T F_{\nu_q} \bigr)}{\partial \dot{x}^{(1+)}} = Z_2^T \frac{\partial F_{\nu_q}}{\partial \dot{x}^{(1+)}} + \frac{\partial Z_2^T}{\partial \dot{x}^{(1+)}}\, F_{\nu_q} = 0$$

Hence, the construction of $Z_2$ is such that $Z_2^T F_{\nu_q}$ only depends on $t$ and $x$, and the change of variables (12) was selected so that the part of the Jacobian corresponding to $x_a$ is non-singular. It follows that $x_a$ can be expressed locally as a function of $x_d$ and $t$.

We now introduce the function $\varphi_1$ to describe the local parameterization of $x$ using the coordinates $x_d$ and $t$,

$$x = \varphi_1( x_d, t ) = P \begin{pmatrix} x_d \\ R( x_d, t ) \end{pmatrix} \tag{15}$$

and the next lemma shows an important coupling between $\varphi_1$ and lemma 4.1.

4.3 Lemma. The matrix $X$ in lemma 4.1 can be chosen in the form

$$\hat{X} = P \begin{bmatrix} I \\ \partial_1 R( x_d, t ) \end{bmatrix} = \partial_1 \varphi_1( x_d, t ) \tag{16}$$

Proof: Clearly, the columns are linearly independent and smooth. By verifying that the matrix is in the right null space of $Z_2^T N_{\nu_q}$ we will show that its columns span the same linear space as $X$. It will then follow that $X$ and $\hat{X}$ are related by a relation in the form $\hat{X} = X W$ for some smooth non-singular matrix function $W$. Using the form $X W$ then shows that (11c) is also satisfied. Hence, it remains to show that $\hat{X}$ is in the right null space of $Z_2^T N_{\nu_q}$. Using (14) and allowing also the dotted variables $\dot{x}^{(1+)}$ to depend on $x_d$ in $\mathbb{L}_{\nu_q}$ (suppressing arguments) it follows that

$$\frac{\partial \bigl( Z_2^T F_{\nu_q} \bigr)}{\partial x_d} = Z_2^T \frac{\partial F_{\nu_q}}{\partial x} \frac{\partial x}{\partial x_d} + Z_2^T \frac{\partial F_{\nu_q}}{\partial \dot{x}^{(1+)}} \frac{\partial \dot{x}^{(1+)}}{\partial x_d} + \frac{\partial Z_2^T}{\partial x_d}\, F_{\nu_q} \overset{!}{=} 0$$

Here, $F_{\nu_q} = 0$ and $Z_2^T \frac{\partial F_{\nu_q}}{\partial \dot{x}^{(1+)}} = Z_2^T M_{\nu_q} = 0$ imply that

$$Z_2^T \frac{\partial F_{\nu_q}}{\partial x} \begin{bmatrix} P_d & P_a \end{bmatrix} \begin{bmatrix} I \\ \partial_1 R \end{bmatrix} = Z_2^T N_{\nu_q} \hat{X} \overset{!}{=} 0$$

Back in section 2.2 it was indicated that we would be able to show that a finite simplified strangeness index implies local uniqueness of solutions. With lemma 3.1 at our disposal this statement can now be shown rather easily.

4.4 Lemma. If the simplified strangeness index is finite and $x$ is a solution to the dae for some initial conditions in $\mathbb{L}_{\nu_q}^{b\delta}$, then the solution $x$ is locally unique.

Proof: Using the parameterization of $x$ given by (15), it suffices to show that the coordinates $x_d$ are uniquely defined. By the smoothness assumptions and the analytic implicit function theorem, Hörmander (1966), showing that $x_d'(t)$ is uniquely determined given $x_d(t)$ and $t$ will be sufficient, since then the corresponding ode will have a right hand side which is continuously differentiable, and hence locally Lipschitz on any compact set. One may then complete the argument by applying a basic local uniqueness theorem for ode, such as Coddington and Levinson (1985, theorem 2.2). Reusing (5) for the current context, $x_d'(t)$ is seen to be uniquely determined if $\partial_2 f_d( x_d, \dot{x}_d, t )$ is non-singular (in some neighborhood $\mathbb{L}_{\nu_q}^{b\delta}$ of the initial conditions). Identifying (6) in (11c), lemma 4.3 completes the proof.

With $\hat{X}$ according to (16) it follows that

$$Z_2^T N_{\nu_q} \hat{X} = N^d_{\nu_q} + N^a_{\nu_q}\, \partial_1 R \overset{!}{=} 0 \tag{17}$$

using the notation (13). Before stating the main theorem of the section we derive one more equation. Using (14) and allowing also the dotted variables $\dot{x}^{(1+)}$ to depend on $t$ in $\mathbb{L}_{\nu_q}$ (suppressing arguments) it follows that

$$\frac{\partial \bigl( Z_2^T F_{\nu_q} \bigr)}{\partial t} = Z_2^T \Bigl( \partial_1 F_{\nu_q} + \partial_2 F_{\nu_q}\, \partial_2 \varphi_1 + M_{\nu_q} \frac{\partial \dot{x}^{(1+)}}{\partial t} \Bigr) = Z_2^T \bigl( \partial_1 F_{\nu_q} + \partial_2 F_{\nu_q}\, \partial_2 \varphi_1 \bigr) \overset{!}{=} 0 \tag{18}$$

4.5 Theorem. Consider a sufficiently smooth dae (1), repeated here,

$$f\bigl( x(t), x'(t), t \bigr) \overset{!}{=} 0 \tag{1}$$

with finite simplified strangeness index $\nu_q$ and where the un-dotted variables in $\mathbb{L}_{\nu_q}$ form a manifold of dimension $n_d$. If the set where P2 [2.3] holds is the projection of a similar set $\mathbb{L}_{\nu_q+1}^{b\delta+}$, and P2 [2.3] also holds on $\mathbb{L}_{\nu_q+1}^{b\delta+}$ with the same dimension $n_d$, then there is a unique solution to (1) for any initial conditions in $\mathbb{L}_{\nu_q+1}^{b\delta+}$.

Proof: Considering how $F_{\nu_q+1}$ is obtained from $F_{\nu_q}$, it is seen that the equality $F_{\nu_q+1} = 0$ can be written

$$\partial_1 F_{\nu_q} + \partial_2 F_{\nu_q}\, \dot{x} + \partial_{3+} F_{\nu_q}\, \dot{x}^{(2+)} = 0$$

Multiplying by $Z_2^T$ from the left and identifying the expressions for $N_{\nu_q}$ and $M_{\nu_q}$, one obtains

$$Z_2^T \bigl( \partial_1 F_{\nu_q} + \partial_2 F_{\nu_q}\, \dot{x} \bigr) = 0$$

Using (18) and the change of variables (compare (12))

$$\dot{x} = \begin{bmatrix} P_d & P_a \end{bmatrix} \begin{pmatrix} \dot{x}_d \\ \dot{x}_a \end{pmatrix}$$

leads to (using the notation introduced in (13))

$$\begin{bmatrix} N^d_{\nu_q} & N^a_{\nu_q} \end{bmatrix} \Bigl( -P^{-1}\, \partial_2 \varphi_1 + \begin{pmatrix} \dot{x}_d \\ \dot{x}_a \end{pmatrix} \Bigr) = 0$$

Using (15) and (17) yields

$$-N^a_{\nu_q}\, \partial_2 R( x_d, t ) - N^a_{\nu_q}\, \partial_1 R( x_d, t )\, \dot{x}_d + N^a_{\nu_q}\, \dot{x}_a = 0 \tag{19}$$

and since $N^a_{\nu_q}$ is non-singular, it must hold that

$$\dot{x} = P \begin{pmatrix} \dot{x}_d \\ \dot{x}_a \end{pmatrix} = P \begin{bmatrix} I \\ \partial_1 R( x_d, t ) \end{bmatrix} \dot{x}_d + P \begin{pmatrix} 0 \\ \partial_2 R( x_d, t ) \end{pmatrix} = \partial_1 \varphi_1( x_d, t )\, \dot{x}_d + \partial_2 \varphi_1( x_d, t )$$

Since $f( x, \dot{x}, t ) \overset{!}{=} 0$ holds by definition on $\mathbb{L}_{\nu_q}$, it follows that

$$f\bigl( \varphi_1( x_d, t ),\ \partial_1 \varphi_1( x_d, t )\, \dot{x}_d + \partial_2 \varphi_1( x_d, t ),\ t \bigr) = 0$$

where $\dot{x}_d$ is uniquely determined given $x_d$ and $t$ by (11c) with $\partial_1 \varphi_1 = \hat{X}$ in place of $X$. Hence, the dae

$$f\bigl( \varphi_1( x_d(t), t ),\ \partial_1 \varphi_1( x_d(t), t )\, x_d'(t) + \partial_2 \varphi_1( x_d(t), t ),\ t \bigr) = 0$$

has a (locally unique) solution and the trajectory generated by $x(t) = \varphi_1( x_d(t), t )$ is a solution to the original dae (1).

5 Conclusions and future work

In our view, a simpler way of computing the strangeness index has been proposed. While the original definition follows a three step procedure, the proposed definition has just one step. Once the index has been determined according to the new definition, it is known that the original definition leads to the same or an infinite index, and there is a simple test that distinguishes the two cases. The new index definition is also appealing due to its immediate interpretation from a numerical integration perspective.

Analogues of central results for the original strangeness index have been derived for the simplified strangeness index. In particular, it has been shown that a finite simplified strangeness index implies that if a solution exists, it will be unique, and existence of a solution can be established by checking the property that defines the index for two successive values of the index parameter.

An important aspect of the analysis of the strangeness index provided in Kunkel and Mehrmann (2006, chapter 4) is that the strangeness index is shown to be invariant under some transformations of the equations which are known to yield equivalent formulations of the same problem. It is an important topic for future research to find out whether the simplified strangeness index is also invariant under these transformations. Another interesting topic for future work is to seek examples where $\nu_q \neq \nu_S$ in order to get a better understanding of this exceptional case.

6 Acknowledgment

The authors would like to acknowledge Ulf Jönsson at the Royal Institute of Technology, Sweden, for strengthening theorem 3.3.

References

Stephen L. Campbell. Least squares completions for nonlinear differential algebraic equations. Numerische Mathematik, 65(1):77-94, December 1993.

Earl A. Coddington and Norman Levinson. Theory of ordinary differential equations. Robert E. Krieger Publishing Company, Inc., third edition, 1985.

Lars Hörmander. An introduction to complex analysis in several variables. The University Series in Higher Mathematics. D. Van Nostrand, Princeton, New Jersey, 1966.

Peter Kunkel and Volker Mehrmann. A new class of discretization methods for the solution of linear differential-algebraic equations with variable coefficients. SIAM Journal on Numerical Analysis, 33(5), October 1996.

Peter Kunkel and Volker Mehrmann. Regular solutions of nonlinear differential-algebraic equations. Numerische Mathematik, 79(4), June 1998.

Peter Kunkel and Volker Mehrmann. Differential-algebraic equations, analysis and numerical solution. European Mathematical Society, 2006.

Henrik Tidefelt. Differential-algebraic equations and matrix-valued singular perturbation. PhD thesis, Linköping University, 2009.

