Some preconditioners for systems of linear inequalities

Javier Peña (Tepper School of Business, Carnegie Mellon University, USA, jfp@andrew.cmu.edu)
Vera Roshchina (Collaborative Research Network, University of Ballarat, AUSTRALIA, vroshchina@ballarat.edu.au)
Negar Soheili (Tepper School of Business, Carnegie Mellon University, USA, nsoheili@andrew.cmu.edu)

June 20, 2013

Abstract

We show that a combination of two simple preprocessing steps would generally improve the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different but related notions of conditioning for linear inequalities.

1 Introduction

Condition numbers play an important role in numerical analysis. The condition number of a problem is a key parameter in the computational complexity of iterative algorithms as well as in issues of numerical stability. A related challenge of paramount importance is to precondition a given problem instance, that is, to perform some kind of data preprocessing that transforms a given problem instance into an equivalent one that is better conditioned. Preconditioning has been extensively studied in numerical linear algebra [10, 15] and is an integral part of the computational implementation of numerous algorithms for solving linear systems of equations. In the more general optimization context, the task of designing preconditioners has been studied by Epelman and Freund [6] as well as by Belloni and Freund [3].

In a similar spirit, we propose two simple preconditioning procedures for homogeneous systems of linear inequalities.

As we formalize in the sequel, these procedures lead to improvements in three types of condition numbers for homogeneous systems of linear inequalities, namely Renegar's [12], Goffin-Cheung-Cucker's [5, 9], and the Grassmann condition number [1, 4]. Both Renegar's and Goffin-Cheung-Cucker's condition numbers are key parameters in the analyses of algorithmic schemes and other numerical properties of constraint systems [7, 8, 14]. The more recently developed Grassmann condition number is especially well suited for probabilistic analysis [2]. We recall the definitions of these condition numbers in Section 2 below.

These condition numbers quantify certain properties associated to the homogeneous systems of inequalities (1) and (2) defined by a matrix A ∈ R^{m×n}, where m ≤ n. Renegar's condition number is defined in terms of the distance from A to a set of ill-posed instances in the space R^{m×n}. Goffin-Cheung-Cucker's condition number is defined in terms of the best conditioned solution to the system defined by A. Alternatively, it can be seen as the reciprocal of a measure of thickness of the cone of feasible solutions to the system defined by A. Goffin-Cheung-Cucker's condition number is intrinsic to certain geometry in the column space of A. The more recent Grassmann condition number, introduced in a special case by Belloni and Freund [4] and subsequently generalized by Amelunxen and Bürgisser [1], is based on the projection distance from the linear subspace spanned by the rows of A to a set of ill-posed subspaces in R^n. The Grassmann condition number is intrinsic to certain geometry in the row space of A.

The Goffin-Cheung-Cucker and the Grassmann condition numbers are invariant under column scaling and under elementary row operations on the matrix A, respectively. For suitable choices of norms, each of them is also always smaller than Renegar's condition number (see (4) and (5) below). We observe that these properties have an interesting parallel and naturally suggest two preconditioning procedures, namely column normalization and row balancing. Our main results, presented in Section 3, discuss some interesting properties of these two preconditioning procedures. In particular, we show that a combination of them would improve the values of the three condition numbers.

2 Condition numbers and their relations

Given a matrix A ∈ R^{m×n} with m ≤ n, consider the homogeneous feasibility problem

    Ax = 0, x ≥ 0, x ≠ 0,    (1)

and its alternative

    A^T y ≤ 0, y ≠ 0.    (2)

Let F_P and F_D denote the sets of matrices A such that (1) and (2) are respectively feasible. The sets F_P and F_D are closed and F_P ∪ F_D = R^{m×n}.
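Neither system involves an objective, so feasibility of (1) can be checked with any off-the-shelf LP solver once the trivial solution x = 0 is normalized away. The sketch below is our own illustration and not part of the paper; the function name and the normalization sum(x) = 1 are choices made here.

```python
import numpy as np
from scipy.optimize import linprog

def primal_feasible(A):
    """Return True if Ax = 0, x >= 0, x != 0 has a solution.

    Any nonzero solution can be rescaled so that its entries sum to one,
    so feasibility of (1) is equivalent to feasibility of the LP below.
    """
    m, n = A.shape
    A_eq = np.vstack([A, np.ones((1, n))])   # Ax = 0 together with sum(x) = 1
    b_eq = np.zeros(m + 1)
    b_eq[-1] = 1.0
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 2.0, -1.0]])
print(primal_feasible(A))   # True: x = (1, 1, 2) solves (1) up to scaling
```

Checking the alternative system (2) in the same spirit would require a different normalization (for instance, bounding the norm of y), since y is not sign-constrained.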

The set Σ := F_P ∩ F_D is the set of ill-posed matrices: for a given A ∈ Σ, arbitrarily small perturbations of A can change the feasibility status of (1) and (2).

Renegar's condition number is defined as

    C_R(A) := ‖A‖ / min{‖A − A′‖ : A′ ∈ Σ}.

Here ‖A‖ denotes the operator norm of A induced by a given choice of norms in R^n and R^m. When the Euclidean norm is used in both R^n and R^m, we shall write C_{R,2}(A) for C_R(A). On the other hand, when the one-norm is used in R^n and the Euclidean norm is used in R^m, we shall write C_{R,1}(A) for C_R(A). These are the two cases we will consider in the sequel.

Assume A = [a_1 ··· a_n] ∈ R^{m×n} with a_i ≠ 0 for i = 1, ..., n. Goffin-Cheung-Cucker's condition number is defined as

    C_GCC(A) := 1 / | min_{‖y‖=1} max_{i∈{1,...,n}} (a_i^T y) / ‖a_i‖ |.

Here ‖·‖ denotes the Euclidean norm in R^m. The quantity 1/C_GCC(A) is the Euclidean distance from the origin to the boundary of the convex hull of the set of normalized vectors { a_i/‖a_i‖ : i = 1, ..., n }. Furthermore, when A ∈ F_D, this distance coincides with the thickness of the cone of feasible solutions to (2).

Let Gr_{n,m} denote the set of m-dimensional linear subspaces of R^n, that is, the m-dimensional Grassmann manifold in R^n. Let

    P_m := {W ∈ Gr_{n,m} : W^⊥ ∩ R^n_+ ≠ {0}},
    D_m := {W ∈ Gr_{n,m} : W ∩ R^n_+ ≠ {0}},

and

    Σ_m := P_m ∩ D_m = { W ∈ Gr_{n,m} : W ∩ R^n_+ ≠ {0}, W ∩ int(R^n_+) = ∅ }.

The sets P_m, D_m, and Σ_m are analogous to the sets F_P, F_D, and Σ, respectively: if A ∈ R^{m×n} is full-rank, then A ∈ F_P if and only if span(A^T) ∈ P_m, and A ∈ F_D if and only if span(A^T) ∈ D_m. In particular, A ∈ Σ if and only if span(A^T) ∈ Σ_m.

The Grassmann condition number is defined as [1]

    C_Gr(A) := 1 / min{ d(span(A^T), W) : W ∈ Σ_m }.

Here d(W_1, W_2) := σ_max(Π_{W_1} − Π_{W_2}), where Π_{W_i} denotes the orthogonal projection onto W_i for i = 1, 2. In the above expression and in the sequel, σ_max(M) and σ_min(M) denote the largest and smallest singular values of the matrix M, respectively.
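The metric d(·, ·) in the definition of C_Gr is straightforward to evaluate for two given subspaces. The following sketch is our own illustration (the function name is ours); it computes the projection distance between the row spaces of two full-rank matrices from orthonormal bases obtained by QR factorization.

```python
import numpy as np

def projection_distance(A1, A2):
    """d(span(A1^T), span(A2^T)) = sigma_max(P1 - P2), where P1 and P2 are
    the orthogonal projectors onto the two row spaces (assumed full rank)."""
    Q1, _ = np.linalg.qr(A1.T)   # orthonormal basis of span(A1^T)
    Q2, _ = np.linalg.qr(A2.T)   # orthonormal basis of span(A2^T)
    P1 = Q1 @ Q1.T
    P2 = Q2 @ Q2.T
    return np.linalg.norm(P1 - P2, 2)   # spectral norm = largest singular value

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.1]])
print(projection_distance(A, B))   # small: the two row spaces nearly coincide
print(projection_distance(A, A))   # 0.0: identical subspaces
```

Evaluating C_Gr itself additionally requires minimizing this distance over the ill-posed set Σ_m, which the sketch does not attempt.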

We next recall some key properties of the condition numbers C_{R,1}, C_{R,2}, C_GCC, and C_Gr. Throughout the remaining part of the paper assume A = [a_1 ··· a_n] ∈ R^{m×n} is full-rank and a_i ≠ 0 for i = 1, ..., n.

From the relationship between the one-norm and the Euclidean norm, it readily follows that

    (1/√n) · C_{R,2}(A) ≤ C_{R,1}(A) ≤ √n · C_{R,2}(A).    (3)

The following property is a consequence of [5, Theorem 1]:

    C_GCC(A) ≤ C_{R,1}(A) ≤ (max_{i=1,...,n} ‖a_i‖ / min_{i=1,...,n} ‖a_i‖) · C_GCC(A).    (4)

The following property was established in [1, Theorem 1.4]:

    C_Gr(A) ≤ C_{R,2}(A) ≤ (σ_max(A)/σ_min(A)) · C_Gr(A).    (5)

The inequalities (4) and (5) reveal an interesting parallel between the pairs C_GCC(A), C_{R,1}(A) and C_Gr(A), C_{R,2}(A). This parallel becomes especially striking by observing that the fractions in the right-hand sides of (4) and (5) can be written respectively as

    max_{i=1,...,n} ‖a_i‖ / min_{i=1,...,n} ‖a_i‖ = max{‖Ax‖ : x = e_i, i = 1,...,n} / min{‖Ax‖ : x = e_i, i = 1,...,n}

and

    σ_max(A)/σ_min(A) = max{‖A^T y‖ : ‖y‖ = 1} / min{‖A^T y‖ : ‖y‖ = 1}.

3 Preconditioning via normalization and balancing

Inequalities (4) and (5) suggest two preconditioning procedures for improving Renegar's condition number C_R(A). The first one is to normalize, that is, to scale the columns of A so that the preconditioned matrix Ã satisfies max_{i=1,...,n} ‖ã_i‖ = min_{i=1,...,n} ‖ã_i‖. The second one is to balance, that is, to apply an orthogonalization procedure to the rows of A, such as Gram-Schmidt or QR-factorization, so that the preconditioned matrix Ã satisfies σ_max(Ã) = σ_min(Ã). Observe that if the initial matrix A is full rank and has non-zero columns, then these two properties are preserved by each of the above preconditioning procedures.
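In matrix terms, normalization is a right multiplication by the inverse of the diagonal matrix of column norms, and balancing is a left multiplication by R^{-T} coming from a QR factorization of A^T. The following minimal numpy sketch is our own implementation of the two procedures (the function names are ours).

```python
import numpy as np

def normalize_columns(A):
    """Return A_tilde with unit-norm columns, i.e. A_tilde = A D^{-1}
    where D = diag(||a_1||, ..., ||a_n||)."""
    return A / np.linalg.norm(A, axis=0)

def balance_rows(A):
    """Return A_tilde = Q^T where A^T = Q R is the reduced QR factorization,
    so that A_tilde has orthonormal rows and all its singular values equal 1."""
    Q, _ = np.linalg.qr(A.T)
    return Q.T

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 0.5, 1.0]])

print(np.linalg.norm(normalize_columns(A), axis=0))      # [1. 1. 1.]
print(np.linalg.svd(balance_rows(A), compute_uv=False))  # [1. 1.]
```

As noted above, each procedure keeps the matrix full rank and keeps its columns nonzero, so the two procedures can be composed in either order.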

The following proposition formally states some properties of these two procedures.

Proposition 1. Assume A = [a_1 ··· a_n] ∈ R^{m×n} is full-rank and a_i ≠ 0 for i = 1, ..., n.

(a) Let Ã be the matrix obtained after normalizing the columns of A, that is, ã_i = a_i/‖a_i‖ for i = 1, ..., n. Then

    C_{R,1}(Ã) = C_GCC(Ã) = C_GCC(A).

This transformation is optimal in the following sense:

    C_{R,1}(Ã) = min{ C_{R,1}(AD) : D is diagonal positive definite }.

(b) Let Ã be the matrix obtained after balancing the rows of A, that is, Ã = Q^T, where QR = A^T with Q ∈ R^{n×m} and R ∈ R^{m×m} is the QR-decomposition of A^T. Then

    C_{R,2}(Ã) = C_Gr(Ã) = C_Gr(A).

This transformation is optimal in the following sense:

    C_{R,2}(Ã) = min{ C_{R,2}(PA) : P ∈ R^{m×m} is non-singular }.

Proof: Parts (a) and (b) follow respectively from (4) and (5).

Observe that the normalization procedure transforms the solutions to the original problem (1) when this system is feasible. More precisely, observe that Ax = 0, x ≥ 0 if and only if ÃDx = 0, Dx ≥ 0, where D is the diagonal matrix whose diagonal entries are the norms of the columns of A. Thus a solution to the original problem (1) can be readily obtained from a solution to the column-normalized preconditioned problem: premultiply by D^{-1}. At the same time, observe that the solutions to (2) do not change when the columns of A are normalized.

Similarly, the balancing procedure transforms the solutions to the original problem (2). In this case A^T y ≤ 0, y ≠ 0 if and only if Ã^T(Ry) ≤ 0, Ry ≠ 0, where QR = A^T is the QR-decomposition of A^T. Hence a solution to the original problem (2) can be readily obtained from a solution to the row-balanced preconditioned problem: premultiply by R^{-1}. At the same time, observe that the solutions to (1) do not change when the rows of A are balanced.
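The change of variables behind these recovery maps is easy to verify numerically. The sketch below is our own check; the matrix and the test vectors are arbitrary choices and not examples from the paper. Here Dx plays the role of the normalized primal variable and Ry the role of the balanced dual variable.

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 2.0, -1.0]])

# Column normalization: A = A_tilde D with D = diag(||a_1||, ..., ||a_n||).
D = np.diag(np.linalg.norm(A, axis=0))
A_tilde = A @ np.linalg.inv(D)

x = np.array([1.0, 1.0, 2.0])                 # solves Ax = 0, x >= 0, x != 0
z = D @ x                                     # solves the normalized system
assert np.allclose(A_tilde @ z, 0) and np.all(z >= 0)
assert np.allclose(np.linalg.inv(D) @ z, x)   # premultiplying by D^{-1} recovers x

# Row balancing: A^T = Q R and A_bal = Q^T, hence A^T y = A_bal^T (R y) for all y.
Q, R = np.linalg.qr(A.T)
A_bal = Q.T

y = np.array([-1.0, 0.5])                     # arbitrary test vector
w = R @ y                                     # the balanced dual variable
assert np.allclose(A.T @ y, A_bal.T @ w)      # same values, so system (2) is preserved
assert np.allclose(np.linalg.solve(R, w), y)  # premultiplying by R^{-1} recovers y

print("recovery identities verified")
```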

We next present our main result concerning properties of the two combinations of the normalization and balancing procedures.

Theorem 1. Assume A = [a_1 ··· a_n] ∈ R^{m×n} is full-rank and a_i ≠ 0 for i = 1, ..., n.

(a) Let Â be the matrix obtained after normalizing the columns of A and then balancing the rows of the resulting matrix. Then

    C_GCC(Â)/√n ≤ C_{R,2}(Â) = C_Gr(Â) ≤ √n · C_GCC(A).    (6)

(b) Let Â be the matrix obtained after balancing the rows of A and then normalizing the columns of the resulting matrix. Then

    C_Gr(Â)/√n ≤ C_{R,1}(Â) = C_GCC(Â) ≤ √n · C_Gr(A).    (7)

Proof:

(a) Let Ã be the matrix obtained by normalizing the columns of A. Proposition 1(a) yields

    C_{R,1}(Ã) = C_GCC(Ã) = C_GCC(A).    (8)

Since Â is obtained by balancing the rows of Ã, Proposition 1(b) yields

    C_{R,2}(Â) = C_Gr(Â) = C_Gr(Ã) ≤ C_{R,2}(Ã).    (9)

Inequality (6) now follows by combining (3), (4), (8), and (9).

(b) The proof of this part is essentially identical to that of part (a) after using the parallel roles of the pairs C_GCC, C_{R,1} and C_Gr, C_{R,2} apparent from (4), (5), and Proposition 1.

Observe that both combined preconditioners in Theorem 1 transform A into a matrix Â = PAD, where D is a diagonal matrix and P is a square non-singular matrix. The particular P and D depend on which procedure is applied first. Hence each of the combined preconditioners transforms both sets of solutions to the original pair of problems (1) and (2). In this case, solutions to the original problems can be readily obtained from solutions to the preconditioned problems via premultiplication by D for the solutions to (1) and by P^T for the solutions to (2).

As discussed in [1, Section 5], there is no general relationship between C_Gr and C_GCC, meaning that either one can be much larger than the other in a dimension-independent fashion. Theorem 1(a) shows that when C_GCC(A) ≤ C_Gr(A), column normalization followed by row balancing would reduce both C_Gr and C_{R,2} while not increasing C_GCC beyond a factor of n. Likewise, Theorem 1(b) shows that when C_Gr(A) ≤ C_GCC(A), row balancing followed by column normalization would reduce both C_GCC and C_{R,1} while not increasing C_Gr beyond a factor of n. In other words, one of the two combinations of column normalization and row balancing will always improve the values of all three condition numbers C_GCC(A), C_Gr(A), and C_R(A) modulo a factor of n.
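The following sketch is our own illustration, with an arbitrary test matrix. It applies the two combined preconditioners of Theorem 1 to the same matrix and reports the two ratios that appear on the right-hand sides of (4) and (5). Each ordering drives one of the two ratios to one; the other ratio is only controlled up to the factors appearing in (6) and (7).

```python
import numpy as np

def normalize_columns(A):
    return A / np.linalg.norm(A, axis=0)

def balance_rows(A):
    Q, _ = np.linalg.qr(A.T)
    return Q.T

def ratios(A):
    """Column-norm ratio from (4) and singular-value ratio from (5)."""
    col = np.linalg.norm(A, axis=0)
    sv = np.linalg.svd(A, compute_uv=False)
    return col.max() / col.min(), sv[0] / sv[-1]

A = np.array([[5.0, 1.0, 0.0, 2.0],
              [0.0, 0.1, 1.0, 1.0]])

A_nb = balance_rows(normalize_columns(A))    # Theorem 1(a): normalize, then balance
A_bn = normalize_columns(balance_rows(A))    # Theorem 1(b): balance, then normalize

for name, M in (("original", A),
                ("normalize then balance", A_nb),
                ("balance then normalize", A_bn)):
    c, s = ratios(M)
    print(f"{name:24s} column-norm ratio = {c:7.3f}   sigma ratio = {s:7.3f}")
```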

The following two questions concerning a potential strengthening of Theorem 1 naturally arise:

(i) Does the order of row balancing and column normalization matter in each part of Theorem 1?

(ii) Are the leftmost and rightmost expressions in (6) and (7) within a small power of n? In other words, do the combined preconditioners make all of C_GCC(A), C_Gr(A), and C_R(A) the same modulo a small power of n?

The examples below, adapted from [1, Section 5], show that the answers to these questions are yes and no, respectively. Hence without further assumptions, the statement of Theorem 1 cannot be strengthened along these lines.

Example 1. Assume ε > 0 and let

    A = [ /ε  0 ].

It is easy to see that C_GCC(A) = . After balancing the rows of A and normalizing the columns of the resulting matrix we get

    Â = ( + ε ) [ ( + ε ) ε ε 0 + ε + ε ].

It is easy to show that C_GCC(Â) = ( + ε ) ε. Thus for ε > 0 sufficiently small, C_GCC(Â) can be arbitrarily larger than C_GCC(A). Therefore (6) does not hold for A, Â.

Example 2. Assume 0 < ε < 1/2 and let

    A = [ ε  0 ].

Using [1, Proposition 1.6], it can be shown that C_Gr(A) = + ( + ε) + ε. After normalizing the columns of A and balancing the rows of the resulting matrix, we obtain

    Â = [ θδ θε ( + ε) θε + ( + ε) δ 0 + ( + ε) ( + ε) ],

where δ = + 3 ( + ε) and θ = δ+ε. Using [1, Proposition 1.6] again, it can be shown that C_Gr(Â) = + δ ε. Therefore, C_Gr(Â) can be arbitrarily larger than C_Gr(A). Hence (7) does not hold for A, Â.
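For small instances such as those in the examples (m = 2), the quantity defining C_GCC can be approximated directly by sampling the unit circle, which gives a quick sanity check on how the condition number behaves as ε varies. This is a crude grid search of our own, not a procedure from the paper, and the test matrix below is an arbitrary placeholder rather than a matrix from the examples above.

```python
import numpy as np

def gcc_estimate(A, num_angles=400_000):
    """Approximate C_GCC(A) for a 2 x n matrix by evaluating
    min over unit y of max over i of a_i^T y / ||a_i|| on a fine angular grid."""
    A_hat = A / np.linalg.norm(A, axis=0)           # normalized columns
    theta = np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False)
    Y = np.vstack([np.cos(theta), np.sin(theta)])   # candidate unit vectors y
    rho = np.min(np.max(A_hat.T @ Y, axis=0))
    return 1.0 / abs(rho)                           # large for nearly ill-posed A

for eps in (0.5, 0.1, 0.01):
    # The two columns become nearly opposed as eps -> 0, so C_GCC grows.
    A = np.array([[1.0, -1.0],
                  [0.0,  eps]])
    print(eps, gcc_estimate(A))
```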

Example 3. Assume ε > 0 and let

    A = [ + ε + ε + ε  +(+ε) ].

In this case C_GCC(A) = ε. After normalizing the columns of A and balancing the rows of the resulting matrix, we get

    Â = [ β β γ, γ + β γ γ β +(+ε) + ( ε) ],

where β = and γ = . It is easy to see that C_GCC(Â) = . Therefore, C_GCC(Â) can be arbitrarily smaller than C_GCC(A) in (6).

Example 4. Assume 0 < ε < 1/2 and let

    A = [ ε  0 ].

In this case C_Gr(A) = + ε. After balancing the rows of A and normalizing the columns of the resulting matrix we get

    Â = [ ( + ε ) ( + ε ) 0 + ε + ε ].

Using [1, Proposition 1.6], it can be shown that C_Gr(Â) = + ε. Thus C_Gr(Â) can be arbitrarily smaller than C_Gr(A) in (7).

References

[1] D. Amelunxen and P. Bürgisser. A coordinate-free condition number for convex programming. SIAM Journal on Optimization, 22(3):1029–1041, 2012.

[2] D. Amelunxen and P. Bürgisser. Probabilistic analysis of the Grassmann condition number. Technical report, University of Paderborn, Germany, 2013.

[3] A. Belloni and R. Freund. A geometric analysis of Renegar's condition number, and its interplay with conic curvature. Math. Program., 119:95–107, 2009.

[4] A. Belloni and R. Freund. Projective re-normalization for improving the behavior of a homogeneous conic linear system. Math. Program., 118:279–299, 2009.

[5] D. Cheung and F. Cucker. A new condition number for linear programming. Mathematical Programming, 91(1):163–174, 2001.

[6] M. Epelman and R. Freund. A new condition measure, preconditioners, and relations between different measures of conditioning for conic linear systems. SIAM J. on Optim., 12:627–655, 2002.

[7] M. Epelman and R. M. Freund. Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system. Math. Program., 88(3):451–485, 2000.

[8] R. Freund and J. Vera. Condition-based complexity of convex optimization in conic linear form via the ellipsoid algorithm. SIAM J. on Optim., 10:155–176, 1999.

[9] J. L. Goffin. The relaxation method for solving systems of linear inequalities. Mathematics of Operations Research, 5(3):388–414, 1980.

[10] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, third edition, 1996.

[11] J. Peña and J. Renegar. Computing approximate solutions for convex conic systems of constraints. Math. Program., 87:351–383, 2000.

[12] J. Renegar. Incorporating condition measures into the complexity theory of linear programming. SIAM Journal on Optimization, 5(3):506–524, 1995.

[13] J. Renegar. Linear programming, complexity theory and elementary functional analysis. Mathematical Programming, 70(1-3):279–351, 1995.

[14] N. Soheili and J. Peña. A smooth perceptron algorithm. SIAM Journal on Optimization, 22(2):728–737, 2012.

[15] L. Trefethen and D. Bau. Numerical Linear Algebra. SIAM, 1997.
