Back to the Roots: Solving Polynomial Systems with Numerical Linear Algebra Tools


1 Back to the Roots: Solving Polynomial Systems with Numerical Linear Algebra Tools Bart De Moor Katholieke Universiteit Leuven Department of Electrical Engineering ESAT/SCD 1 / 56

2 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 2 / 56

3 Why Linear Algebra? System identification: PEM for LTI models is a non-convex optimization problem; it was considered solved by the early nineties. Linear algebra approach: subspace methods. 3 / 56

4 Why Linear Algebra? Nonlinear regression, modelling and clustering: most regression, modelling and clustering problems are nonlinear when formulated in the input data space, which requires nonlinear, nonconvex optimization algorithms. Linear algebra approach: Least Squares Support Vector Machines. The kernel trick (projection of the input data to a high-dimensional feature space) turns the regression, modelling or clustering problem into a large-scale linear algebra problem (a set of linear equations, an eigenvalue problem). 4 / 56

5 Why Linear Algebra? Nonlinear polynomial optimization: a polynomial objective function with polynomial constraints is non-convex. Computer algebra, homotopy methods, numerical optimization: considered solved by the mathematics community. Linear algebra approach: linear polynomial algebra. 5 / 56

6 Research on Three Levels
Conceptual/geometric level: polynomial system solving is an eigenvalue problem! Row and column spaces: ideal/variety; row space/kernel of M, ranks and dimensions, null spaces and orthogonality; geometrical: intersection of subspaces, angles between subspaces, Grassmann's theorem, ...
Numerical linear algebra level: eigenvalue decompositions, SVDs, ...; solving systems of equations (consistency, number of solutions); QR decomposition and Gram-Schmidt algorithm.
Numerical algorithms level: modified Gram-Schmidt (numerical stability), Gram-Schmidt from back to front; exploiting sparsity and Toeplitz structure (computational complexity O(n^2) vs O(n^3)), FFT-like computations and convolutions, ...; power method to find the smallest eigenvalue (= the minimizer of a polynomial optimization problem).
6 / 56

7 Four instances of polynomial rooting problems
p(λ) = det(A - λI) = 0
(x - 1)(x - 3)(x - 2) = 0
(x - 2)(x - 3) = 0
x^2 + 3y^2 - 15 = 0,  y - 3x^3 - 2x - x^2 = 0
min_{x,y} x^2 + y^2  s.t.  y - x^2 + 2x - 1 = 0
7 / 56

8 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 8 / 56

9 Solving Polynomial Systems: a long and rich history... Diophantus (c200-c284), Arithmetica; Al-Khwarizmi (c780-c850); Zhu Shijie (c1260-c1320), Jade Mirror of the Four Unknowns; Pierre de Fermat (c1601-1665); René Descartes (1596-1650); Isaac Newton (1642-1727); Gottfried Wilhelm Leibniz (1646-1716). 9 / 56

10 ... leading to Algebraic Geometry. Etienne Bézout (1730-1783); Jean-Victor Poncelet (1788-1867); August Ferdinand Möbius (1790-1868); Evariste Galois (1811-1832); Arthur Cayley (1821-1895); Leopold Kronecker (1823-1891); Edmond Laguerre (1834-1886); James Joseph Sylvester (1814-1897); Francis Sowerby Macaulay (1862-1937); David Hilbert (1862-1943). 10 / 56

11 So Far: Emphasis on Symbolic Methods. Computational algebraic geometry: emphasis on symbolic manipulations; computer algebra; huge body of literature in algebraic geometry; computational tools: Gröbner bases (next slide). Wolfgang Gröbner (1899-1980), Bruno Buchberger. 11 / 56

12 So Far: Emphasis on Symbolic Methods
Example: Gröbner basis. Input system:
x^2 y + 4xy - 5y + 3 = 0
x^2 + 4xy + 8y - 4x - 10 = 0
Generates a simpler but equivalent system (same roots) by symbolic eliminations and reductions, given a monomial ordering (e.g., lexicographic). Exponential complexity, and numerical issues: the coefficients become very large. Gröbner basis: a univariate quartic in y,
9 - 126y + 647y^2 - 624y^3 + ... = 0
together with a second polynomial expressing x in terms of y (coefficients omitted).
12 / 56

13 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 13 / 56

14 Homogeneous Linear Equations
A_(p x q) X_(q x (q-r)) = 0_(p x (q-r))
C(A^T) ⊥ C(X): rank(A) = r, dim N(A) = q - r = rank(X)
SVD: A = [U1 U2] [S 0; 0 0] [V1^T; V2^T], so that X = V2
James Joseph Sylvester 14 / 56

15 Homogeneous Linear Equations
A_(p x q) X_(q x (q-r)) = 0_(p x (q-r))
Reorder the columns of A and partition A = [A1 A2], where A1 is p x (q-r), A2 is p x r and rank(A2) = r (A2 has full column rank). Reorder the rows of X accordingly:
[A1 A2] [X1; X2] = 0, with X1 of size (q-r) x (q-r) and X2 of size r x (q-r)
rank(A2) = r implies rank(X1) = q - r
15 / 56

16 Dependent and Independent Variables
[A1 A2] [X1; X2] = 0
X1: independent variables; X2: dependent variables, since A2 X2 = -A1 X1 gives X2 = -A2^+ A1 X1 (e.g., with X1 = I).
Number of different ways of choosing r linearly independent columns out of q columns (upper bound):
C(q, q-r) = q! / ((q-r)! r!)
16 / 56
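As a small illustration (a minimal NumPy sketch on hypothetical random data, not from the talk): build a rank-deficient A, partition it as [A1 A2] with A2 of full column rank, and recover a null space basis from X2 = -A2^+ A1 X1.

    import numpy as np

    rng = np.random.default_rng(0)
    p, q, r = 4, 6, 4                          # A is p x q with rank r
    A = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))

    A1, A2 = A[:, :q - r], A[:, q - r:]        # A2 is p x r, full column rank here
    X1 = np.eye(q - r)                         # free choice of the independent block
    X2 = -np.linalg.pinv(A2) @ A1 @ X1         # dependent block follows from A
    X = np.vstack([X1, X2])

    print(np.allclose(A @ X, 0))               # True: the columns of X span N(A)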

17 Grassmann's Dimension Theorem
A_(p x q) X_(q x (q-rA)) = 0_(p x (q-rA)) and B_(p x t) Y_(t x (t-rB)) = 0_(p x (t-rB))
What is the null space of [A B]?
[A B] [X 0 ?; 0 Y ?] = 0
Let rank([A B]) = rAB. Counting kernel dimensions: (q - rA) + (t - rB) + ? = (q + t) - rAB, hence ? = rA + rB - rAB.
17 / 56

18 Grassmann's Dimension Theorem
[A B] [X 0 Z1; 0 Y Z2] = 0, where Z1 and Z2 have rA + rB - rAB columns.
Intersection between the column spaces of A and B: A Z1 = -B Z2, so dim(C(A) ∩ C(B)) = rA + rB - rAB, the subspace analogue of #(A ∪ B) = #A + #B - #(A ∩ B).
Hermann Grassmann 18 / 56
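The intersection A Z1 = -B Z2 can be computed directly from the kernel of [A B]. A minimal sketch, with a hypothetical pair of matrices built to share the direction u:

    import numpy as np
    from scipy.linalg import null_space

    u = np.array([1., 2., 0., 1., -1.])        # shared direction, by construction
    A = np.column_stack([u, [0., 1., 1., 0., 0.]])
    B = np.column_stack([u, [1., 0., 0., 2., 1.]])

    N = null_space(np.hstack([A, B]))          # kernel of [A B], here 1-dimensional
    Z1, Z2 = N[:A.shape[1]], N[A.shape[1]:]
    inter = A @ Z1                             # = -B @ Z2: a basis of C(A) ∩ C(B)
    print(inter.ravel() / inter.ravel()[0])    # proportional to u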

19 Univariate Polynomials and Linear Algebra
Characteristic polynomial: the eigenvalues of A are the roots of p(λ) = det(A - λI) = 0.
Companion matrix: solving q(x) = 7x^3 - 2x^2 - 5x + 1 = 0 leads to
[  0    1    0  ]   [ 1  ]       [ 1  ]
[  0    0    1  ] * [ x  ]  = x  [ x  ]
[-1/7  5/7  2/7 ]   [ x^2]       [ x^2]
19 / 56
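A quick numerical cross-check (a minimal NumPy sketch, not part of the original slides): form the companion matrix exactly as above and compare its eigenvalues with np.roots, which itself computes roots as companion-matrix eigenvalues.

    import numpy as np

    # Companion matrix of q(x) = 7x^3 - 2x^2 - 5x + 1 (made monic first),
    # acting on the monomial vector [1, x, x^2]^T as on the slide.
    C = np.array([[ 0.,   1.,  0. ],
                  [ 0.,   0.,  1. ],
                  [-1/7, 5/7, 2/7]])

    print(np.sort(np.linalg.eigvals(C).real))       # roots via eigenvalues
    print(np.sort(np.roots([7., -2., -5., 1.]).real))  # cross-check with np.roots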

20 Univariate Polynomials and Linear Algebra
Consider the univariate equation x^3 + a1 x^2 + a2 x + a3 = 0, having three distinct roots x1, x2 and x3:
[ a3 a2 a1 1  0  0 ]
[ 0  a3 a2 a1 1  0 ] * [ v(x1) v(x2) v(x3) ] = 0,  with v(x) = [1, x, x^2, x^3, x^4, x^5]^T
[ 0  0  a3 a2 a1 1 ]
Homogeneous linear system; rectangular Vandermonde; corank = 3; observability-matrix-like; realization theory!
20 / 56

21 Two Univariate Polynomials
Consider x^3 + a1 x^2 + a2 x + a3 = 0 and x^2 + b1 x + b2 = 0. Build the Sylvester matrix:
[ a3 a2 a1 1  0 ]   [ 1  ]
[ 0  a3 a2 a1 1 ]   [ x  ]
[ b2 b1 1  0  0 ] * [ x^2] = 0
[ 0  b2 b1 1  0 ]   [ x^3]
[ 0  0  b2 b1 1 ]   [ x^4]
Row space: the ideal (= union of ideals = multiply rows with powers of x). Null space: the variety (= intersection of null spaces). The corank of the Sylvester matrix = the number of common zeros; the null space = the intersection of the null spaces of the two Sylvester matrices; the common roots follow from realization theory in the null space; notice the double Toeplitz structure of the Sylvester matrix.
21 / 56

22 Two Univariate Polynomials
Sylvester Resultant. Consider two polynomials f(x) and g(x):
f(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
g(x) = x^2 - 5x + 6 = (x - 2)(x - 3)
Common roots exist iff the Sylvester matrix is singular: det S(f, g) = 0.
James Joseph Sylvester 22 / 56

23 Two Univariate Polynomials
The corank of the Sylvester matrix is 2! Sylvester's construction can be understood from
S(f, g) [v(x1) v(x2)] = 0, with v(x) = [1, x, x^2, x^3, x^4]^T:
each row of S(f, g) represents one of f(x), x f(x), g(x), x g(x), x^2 g(x), and each of these vanishes at x1 = 2 and x2 = 3, the common roots of f and g.
23 / 56

24 Two Univariate Polynomials
The vectors in the canonical kernel K obey a shift structure:
[1, x, x^2, x^3]^T * x = [x, x^2, x^3, x^4]^T
The canonical kernel K is not available directly; instead we compute a basis Z of the null space, for which Z V = K. We then have
S1 K D = S2 K  =>  S1 Z V D = S2 Z V
leading to the generalized eigenvalue problem (S2 Z) V = (S1 Z) V D.
24 / 56
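Slides 21-24 combined give a complete rooting procedure for the example f = (x-1)(x-2)(x-3), g = (x-2)(x-3). A minimal sketch (the pseudoinverse stands in for a row selection that makes S1 Z square; this is harmless here because S1 Z has full column rank):

    import numpy as np
    from scipy.linalg import null_space

    # Sylvester matrix of f = x^3 - 6x^2 + 11x - 6 and g = x^2 - 5x + 6;
    # rows are shifted coefficient vectors, columns the monomials [1, x, x^2, x^3, x^4].
    S = np.array([[-6., 11., -6.,  1.,  0.],    # f
                  [ 0., -6., 11., -6.,  1.],    # x*f
                  [ 6., -5.,  1.,  0.,  0.],    # g
                  [ 0.,  6., -5.,  1.,  0.],    # x*g
                  [ 0.,  0.,  6., -5.,  1.]])   # x^2*g

    Z = null_space(S)                           # kernel basis; corank = 2 common roots
    S1Z, S2Z = Z[:-1, :], Z[1:, :]              # row selections realizing the shift by x
    evals, V = np.linalg.eig(np.linalg.pinv(S1Z) @ S2Z)
    print(np.sort(evals.real))                  # the common roots: 2 and 3

    K = Z @ V                                   # canonical kernel, up to column scaling
    print((K / K[0, :]).real)                   # columns become [1, x, x^2, x^3, x^4]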

25 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 25 / 56

26 Null space based Root-finding
Consider
p(x, y) = x^2 + 3y^2 - 15 = 0
q(x, y) = y - 3x^3 - 2x - x^2 = 0
Fix a monomial order, e.g., 1 < x < y < x^2 < xy < y^2 < x^3 < x^2 y < ...
Construct M: write the system in matrix-vector notation, one row per polynomial (p, q, x p, y p) and one column per monomial:
         1    x    y   x^2  xy  y^2  x^3  x^2y  xy^2  y^3
p     [ -15   0    0    1    0   3    0    0     0     0 ]
q     [  0   -2    1   -1    0   0   -3    0     0     0 ]
x p   [  0  -15    0    0    0   0    1    0     3     0 ]
y p   [  0    0  -15    0    0   0    0    1     0     3 ]
26 / 56

27 Null space based Root-finding
Continue to enlarge M for p(x, y) = x^2 + 3y^2 - 15 = 0, q(x, y) = y - 3x^3 - 2x - x^2 = 0:
degree d = 3: rows p; x p, y p, q
degree d = 4: add rows x^2 p, xy p, y^2 p, x q, y q
degree d = 5: add rows x^3 p, x^2y p, xy^2 p, y^3 p, x^2 q, xy q, y^2 q
with columns running over all monomials 1, x, y, x^2, xy, y^2, ..., x^5, x^4 y, x^3 y^2, x^2 y^3, x y^4, y^5.
The number of rows grows faster than the number of columns: an overdetermined system, rank deficient by construction (a sketch of this growth follows below)!
27 / 56
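A sketch of this growth (with q as reconstructed above, so its exact signs are a transcription guess): build M degree by degree and report its shape and corank; the corank settles at the number of solutions.

    import numpy as np

    p = {(2, 0): 1., (0, 2): 3., (0, 0): -15.}            # x^2 + 3y^2 - 15
    q = {(0, 1): 1., (3, 0): -3., (1, 0): -2., (2, 0): -1.}  # y - 3x^3 - 2x - x^2

    def monomials(d):
        """Monomials of total degree <= d, ordered 1 < x < y < x^2 < xy < ..."""
        return [(t - j, j) for t in range(d + 1) for j in range(t + 1)]

    def macaulay(polys, d):
        """Rows: x^a y^b * f for every f and every shift keeping the degree <= d."""
        cols = {m: k for k, m in enumerate(monomials(d))}
        rows = []
        for f in polys:
            deg_f = max(i + j for i, j in f)
            for (a, b) in monomials(d - deg_f):
                row = np.zeros(len(cols))
                for (i, j), c in f.items():
                    row[cols[(i + a, j + b)]] = c
                rows.append(row)
        return np.array(rows)

    for d in range(3, 9):
        M = macaulay([p, q], d)
        r = np.linalg.matrix_rank(M)
        print(d, M.shape, 'corank =', M.shape[1] - r)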

28 Null space based Root-finding
Coefficient matrix M. Solutions generate vectors in the kernel of M: M k = 0. The number of solutions s follows from the corank. The canonical null space K is built from the s solutions (x_i, y_i): its i-th column is the monomial vector evaluated at the i-th root,
[1, x_i, y_i, x_i^2, x_i y_i, y_i^2, x_i^3, x_i^2 y_i, x_i y_i^2, y_i^3, x_i^4, ...]^T, for i = 1, ..., s.
28 / 56

29 Null space based Root-finding
Choose s linearly independent rows in K: S1 K. This corresponds to finding linearly dependent columns in M.
29 / 56

30 Null space based Root-finding
Shift property in the monomial basis:
[1, x, y, x^2, xy, y^2]^T * x = [x, x^2, xy, x^3, x^2 y, xy^2]^T
[1, x, y, x^2, xy, y^2]^T * y = [y, xy, y^2, x^2 y, xy^2, y^3]^T
Finding the x-roots: let D = diag(x1, x2, ..., xs); then S1 K D = S2 K, where S1 and S2 select rows from K according to the shift property. Reminiscent of realization theory.
30 / 56

31 Null space based Root-finding
Null space of M: find a basis for the null space of M using an SVD,
M = [X Y] [Σ1 0; 0 0] [W^T; Z^T]
Hence M Z = 0. We have S1 K D = S2 K; however, K is not known. Instead the basis Z is computed, with Z V = K, which leads to
(S2 Z) V = (S1 Z) V D
31 / 56

32 Null space based Root-finding
Algorithm (a sketch implementing these steps follows below):
1 Fix a monomial ordering scheme
2 Construct the coefficient matrix M
3 Compute a basis Z for the null space of M
4 Find s linearly independent rows in Z
5 Choose a shift function, e.g., x
6 Write down the shift relation in the monomial basis k for the chosen shift function, using row selection matrices S1 and S2
7 The above construction gives rise to a generalized eigenvalue problem (S2 Z) V = (S1 Z) V D, whose eigenvalues correspond to, e.g., the x-solutions of the system of polynomial equations
8 Reconstruct the canonical kernel K = Z V
32 / 56
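A compact sketch of the eight steps. To keep the numerics transparent it runs on a hypothetical stand-in system with four affine roots and no roots at infinity (p = x^2 + y^2 - 5, q = xy - 2, roots (1,2), (2,1), (-1,-2), (-2,-1)) rather than on the running example:

    import numpy as np
    from scipy.linalg import null_space

    p = {(2, 0): 1., (0, 2): 1., (0, 0): -5.}    # x^2 + y^2 - 5
    q = {(1, 1): 1., (0, 0): -2.}                # xy - 2

    def mons(d):                                 # step 1: monomials, 1 < x < y < x^2 < ...
        return [(t - j, j) for t in range(d + 1) for j in range(t + 1)]

    d = 3                                        # degree at which the corank stabilizes
    cols = {m: k for k, m in enumerate(mons(d))}
    rows = []                                    # step 2: coefficient matrix M
    for f in (p, q):
        deg_f = max(i + j for i, j in f)
        for (a, b) in mons(d - deg_f):
            row = np.zeros(len(cols))
            for (i, j), c in f.items():
                row[cols[(i + a, j + b)]] = c
            rows.append(row)
    M = np.array(rows)

    Z = null_space(M)                            # step 3: kernel basis, s = 4 columns
    S1 = [cols[m] for m in mons(d - 1)]          # steps 4-6: shifting by x stays in the basis
    S2 = [cols[(i + 1, j)] for (i, j) in mons(d - 1)]
    evals, V = np.linalg.eig(np.linalg.pinv(Z[S1]) @ Z[S2])   # step 7: x-roots

    K = Z @ V                                    # step 8: canonical kernel K = Z V
    K = (K / K[0, :]).real                       # scale so the row of monomial 1 equals 1
    for k in np.argsort(evals.real):
        print('x = %5.2f, y = %5.2f' % (K[cols[(1, 0)], k], K[cols[(0, 1)], k]))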

33 Data-driven Root-finding
The dual version of kernel-based root-finding: all operations are done on the coefficient matrix M itself. Find linearly dependent columns of M instead of linearly independent rows of K (corank). Write down the eigenvalue problem in terms of a partitioning of M. This allows a sparse representation of M, and a rank-revealing QR instead of an SVD.
33 / 56

34 Data-driven Root-finding
p(x, y) = x^2 + 3y^2 - 15 = 0
q(x, y) = y - 3x^3 - 2x - x^2 = 0
Finding linearly dependent columns of M: the rows p, x p, y p, q, x^2 p, xy p, y^2 p, x q, y q, x^3 p, x^2y p, xy^2 p, y^3 p, x^2 q, xy q, y^2 q are scanned over the columns 1, x, y, x^2, xy, y^2, x^3, x^2 y, xy^2, y^3, ...
34 / 56

35 Data-driven Root-finding
p(x, y) = x^2 + 3y^2 - 15 = 0
q(x, y) = y - 3x^3 - 2x - x^2 = 0
Writing down the eigenvalue problem in terms of a re-ordered partitioning of M: all linearly dependent columns of M corresponding to monomials of the lowest possible degree are grouped in M1,
M = [M1 M2],  [M1 M2] [K1; K2] = 0  =>  K2 = -M2^+ M1 K1  (^+: Moore-Penrose pseudoinverse)
35 / 56

36 Data-driven Root-finding
p(x, y) = x^2 + 3y^2 - 15 = 0
q(x, y) = y - 3x^3 - 2x - x^2 = 0
Writing down the eigenvalue problem in terms of a partitioning of M: with D = diag(x1, ..., xs),
K1 D = S_x [K1; K2] = S_x [I; -M2^+ M1] K1
an eigenvalue problem posed entirely in terms of the data in M, where S_x is the row selection matrix realizing the shift by x.
36 / 56
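For the rank-revealing QR step, a column-pivoted QR is one simple choice. A minimal sketch, reusing the Sylvester matrix of slide 22 as a stand-in for M:

    import numpy as np
    from scipy.linalg import qr

    S = np.array([[-6., 11., -6.,  1.,  0.],
                  [ 0., -6., 11., -6.,  1.],
                  [ 6., -5.,  1.,  0.,  0.],
                  [ 0.,  6., -5.,  1.,  0.],
                  [ 0.,  0.,  6., -5.,  1.]])

    Q, R, piv = qr(S, pivoting=True)             # column-pivoted (rank-revealing) QR
    tol = 1e-10 * abs(R[0, 0])
    r = int(np.sum(np.abs(np.diag(R)) > tol))    # numerical rank
    print('rank', r, '| independent columns', piv[:r], '| dependent columns', piv[r:])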

37 Complications
There are 3 kinds of roots: 1 roots in zero, 2 finite nonzero roots, 3 roots at infinity. Applying Grassmann's dimension theorem to the kernel allows the following partitioning:
[M1 M2] [X1 0 X2; 0 Y1 Y2] = 0
X1 corresponds to the roots in zero (multiplicities included!), Y1 to the roots at infinity (multiplicities included!), and [X2; Y2] to the finite nonzero roots (multiplicities included!).
37 / 56

38 Complications
Roots at infinity: univariate case.
0 x^2 + x - 2 = 0; transform x -> 1/X: X(1 - 2X) = 0
1 affine root x = 2 (X = 1/2), 1 root at infinity x = ∞ (X = 0).
Roots at infinity: multivariate case.
(x - 2) y = 0, y - 3 = 0; transform x -> X/T, y -> Y/T: XY - 2YT = 0, Y - 3T = 0
1 affine root (2, 3, 1) (T = 1), 1 root at infinity (1, 0, 0) (T = 0).
38 / 56

39 Multiplicities
General canonical null space K: multiplicities of roots show up in the multiplicity structure of the kernel K. Scaled partial derivative operators
(1 / (j1! j2! ... js!)) ∂^(j1+j2+...+js) / (∂x1^j1 ∂x2^j2 ... ∂xs^js)
are needed to describe the extra columns of K. Currently investigating the technicalities. Possibility of trading in multiplicities for extra equations (radical ideal).
39 / 56

40 Multiplicities
Univariate case: f(x) = (x - 1)^3 = 0 has a triple root in x = 1: f(1) = 0, f'(1) = 0 and f''(1) = 0. With the coefficient vector [f0 f1 f2 f3] of f, this reads
[f0 f1 f2 f3] * [1 0 0; x 1 0; x^2 2x 1; x^3 3x^2 3x] = 0 at x = 1
where the three columns of the right-hand matrix carry the monomials, their first derivatives, and their scaled second derivatives.
40 / 56

41 Multiplicities
Multivariate case: a polynomial system in 2 unknowns (x, y) with 1 affine root z1 = (x1, y1) of multiplicity 3, described by the kernel columns [∂00(z1), ∂10(z1), ∂01(z1)], and 1 root z2 = (x2, y2) at infinity, described by ∂00(z2); M is a matrix of degree 4. Here ∂jk denotes the scaled partial derivative of the monomial vector, j times with respect to x and k times with respect to y.
41 / 56

42 Multiplicities
Canonical kernel K = [∂00(z1)  ∂10(z1)  ∂01(z1)  ∂00(z2)]: the first column is the monomial vector 1, x, y, x^2, xy, y^2, ... evaluated at z1 = (x1, y1); the second and third columns are its scaled derivatives with respect to x (1 -> 0, x -> 1, x^2 -> 2x1, x^2 y -> 2x1 y1, ...) and with respect to y; the last column corresponds to the root z2 at infinity.
42 / 56

43 Polynomial Optimization
Polynomial optimization problems: if A1 b = x b and A2 b = y b, then (A1^2 + A2^2) b = (x^2 + y^2) b (choose any polynomial objective function as an eigenvalue!). Polynomial optimization problems with a polynomial objective function and polynomial constraints can therefore always be written as eigenvalue problems, in which we search for the minimal eigenvalue: a convexification of polynomial optimization problems.
43 / 56
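For the toy problem of slide 7 this pipeline collapses to univariate rooting: the constraint (signs reconstructed with minuses) gives y = (x - 1)^2, and the stationarity condition of x^2 + (x - 1)^4 is a cubic solved below with np.roots, i.e. as a companion-matrix eigenvalue problem:

    import numpy as np

    # min x^2 + y^2  s.t.  y - x^2 + 2x - 1 = 0, i.e. y = (x - 1)^2.
    fprime = [4., -12., 14., -4.]          # d/dx [x^2 + (x - 1)^4]
    crit = np.roots(fprime)                # companion-matrix eigenvalues
    crit = crit[np.abs(crit.imag) < 1e-9].real

    f = lambda x: x**2 + (x - 1)**4
    xs = crit[np.argmin(f(crit))]          # pick the minimizing stationary point
    print('x* = %.4f, y* = %.4f, min = %.4f' % (xs, (xs - 1)**2, f(xs)))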

44 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 44 / 56

45 System Identification: Prediction Error Methods
PEM system identification. Measured data {u_k, y_k}, k = 1, ..., N. Model structure (input u through G(q), noise e through H(q)):
y_k = G(q) u_k + H(q) e_k
Output prediction:
ŷ_k = H^(-1)(q) G(q) u_k + (1 - H^(-1)(q)) y_k
Model classes ARX, ARMAX, OE, BJ:
A(q) y_k = B(q)/F(q) u_k + C(q)/D(q) e_k
Class | Polynomials
ARX | A(q), B(q)
ARMAX | A(q), B(q), C(q)
OE | B(q), F(q)
BJ | B(q), C(q), D(q), F(q)
45 / 56

46 System Identification: Prediction Error Methods
Minimize the prediction errors y - ŷ, where ŷ_k = H^(-1)(q) G(q) u_k + (1 - H^(-1)(q)) y_k, subject to the model equations. Example ARMAX identification: G(q) = B(q)/A(q) and H(q) = C(q)/A(q), where A(q) = 1 + a q^(-1), B(q) = b q^(-1), C(q) = 1 + c q^(-1), N = 5:
min_{ŷ, a, b, c} (y_1 - ŷ_1)^2 + ... + (y_5 - ŷ_5)^2
s.t. ŷ_5 + c ŷ_4 - b u_4 - (c - a) y_4 = 0
     ŷ_4 + c ŷ_3 - b u_3 - (c - a) y_3 = 0
     ŷ_3 + c ŷ_2 - b u_2 - (c - a) y_2 = 0
     ŷ_2 + c ŷ_1 - b u_1 - (c - a) y_1 = 0
46 / 56

47 Structured Total Least Squares
Static linear modeling: rank deficiency minimization problem
min ||[ΔA Δb]||_F^2  s.t.  (A + ΔA) v = b + Δb,  v^T v = 1
Singular value decomposition: with M = [A b], find the (u, σ, v) that minimizes σ^2:
M v = u σ,  M^T u = v σ,  v^T v = 1,  u^T u = 1
Dynamical linear modeling: the same rank deficiency minimization problem, but with [ΔA Δb] structured.
Riemannian SVD: find the (u, τ, v) that minimizes τ^2:
M v = D_v u τ,  M^T u = D_u v τ,  v^T v = 1,  u^T D_v u = 1 (= v^T D_u v)
47 / 56

48 Structured Total Least Squares
min_v τ^2 = v^T M^T D_v^(-1) M v  s.t.  v^T v = 1
[Figure: STLS Hankel cost function over (θ, φ), showing the TLS/SVD solution, the STLS/RiSVD inverse-iteration steps and solution, and the STLS/RiSVD/EIG global minimum and extrema.]
method | TLS/SVD | STLS inv. it. | STLS eig
v | ... | ... | ...
τ | ... | ... | ...
global solution? | no | no | yes
48 / 56

49 Maximum Likelihood Estimation
CpG islands: genomic regions that contain a high frequency of sites where a cytosine (C) base is followed by a guanine (G). They are rare because of methylation of the C base; hence CpG islands indicate functionality. Given an observed sequence of DNA:
CTCACGTGATGAGAGCATTCTCAGA CCGTGACGCGTGTAGCAGCGGCTCA
Problem: decide whether the observed sequence came from a CpG island.
49 / 56

50 Maximum Likelihood Estimation
The model: a 4-dimensional state space [m] = {A, C, G, T}; a mixture model of 3 distributions on [m]: 1: CG rich DNA, 2: CG poor DNA, 3: CG neutral DNA. Each distribution is characterised by the probabilities of observing base A, C, G or T.
Table: probabilities for each of the distributions (Durbin; Pachter & Sturmfels)
DNA Type | A | C | G | T
CG rich | ... | ... | ... | ...
CG poor | ... | ... | ... | ...
CG neutral | ... | ... | ... | ...
50 / 56

51 Maximum Likelihood Estimation
The probabilities of observing each of the bases A to T are given by the mixtures
p(A) = 0.10 θ1 + ... θ2 + ... θ3
p(C) = ... θ1 + ... θ2 + ... θ3
p(G) = ... θ1 + ... θ2 + ... θ3
p(T) = 0.09 θ1 + ... θ2 + ... θ3
where θ_i is the probability of sampling from distribution i (θ1 + θ2 + θ3 = 1). Maximum likelihood estimate:
(θ̂1, θ̂2, θ̂3) = arg max_θ l(θ)
where the log-likelihood l(θ) is given by
l(θ) = 11 log p(A) + 14 log p(C) + 15 log p(G) + 10 log p(T)
We need to solve the following polynomial system:
∂l(θ)/∂θ1 = Σ_{i=1}^{4} (u_i / p(i)) ∂p(i)/∂θ1 = 0
∂l(θ)/∂θ2 = Σ_{i=1}^{4} (u_i / p(i)) ∂p(i)/∂θ2 = 0
51 / 56

52 Maximum Likelihood Estimation
Solving the polynomial system: corank(M) = 9. The reconstructed kernel K has rows 1, θ1, θ2, θ1^2, θ1 θ2, ... The θ_i are probabilities: 0 <= θ_i <= 1 (slack variables could have been introduced to impose this constraint!). The only solution that satisfies this constraint is θ̂ = (0.52, 0.22, 0.26).
52 / 56

53 And Many More
Applications are found in: polynomial optimization problems; structured total least squares; model order reduction; analyzing the identifiability of nonlinear model structures; robotics: kinematic problems; computational biology: conformation of molecules; algebraic statistics; signal processing; ...
53 / 56

54 Outline 1 Introduction 2 History 3 Linear Algebra 4 Multivariate Polynomials 5 Applications 6 Conclusions 54 / 56

55 Conclusions
Finding the roots of multivariate polynomials is linear algebra and realization theory! Finding the minimizing zero of a polynomial optimization problem is an extremal eigenvalue problem. This is the (numerical) linear algebra/systems theory version of results in algebraic geometry/symbolic algebra (Gröbner bases, resultants, rings, ideals, varieties, ...). These relations in principle convexify/linearize many problems in:
algebraic geometry
system identification (PEM)
numerical linear algebra (STLS, affine EVP Ax = xλ + a, etc.)
multilinear algebra (tensor least squares approximation problems)
algebraic statistics (HMM, Bayesian networks, discrete probabilities)
differential algebra (Glad/Ljung)
Convexification occurs by projecting up to a higher-dimensional vector space (difficult in a low number of dimensions; easy in a high number of dimensions: an eigenvalue problem). Many challenges remain: efficient construction of the eigenvalue problem, exploiting sparseness and structure; algorithms to find the minimizing solution directly (inverse power method).
55 / 56

56 Questions?
Bart De Moor, Kim Batselier, Philippe Dreesen
"At the end of the day, the only thing we really understand is linear algebra."
56 / 56
