On the Arithmetic and Shift Complexities of Inverting of Difference Operator Matrices


On the Arithmetic and Shift Complexities of Inverting of Difference Operator Matrices

S. Abramov
Moscow, Comp. Centre of RAS, MSU

Let $K$ be a difference field of characteristic 0 with an automorphism (a shift) $\sigma$. A scalar linear difference operator $f$ is an element of $K[\sigma, \sigma^{-1}]$. For a non-zero scalar operator $f = \sum a_i \sigma^i$ we define its upper and lower orders
$$\overline{\operatorname{ord}} f = \max\{i \mid a_i \neq 0\}, \qquad \underline{\operatorname{ord}} f = \min\{i \mid a_i \neq 0\},$$
and its order
$$\operatorname{ord} f = \overline{\operatorname{ord}} f - \underline{\operatorname{ord}} f.$$
Conventionally, $\operatorname{ord} 0 = \overline{\operatorname{ord}}\, 0 = -\infty$ and $\underline{\operatorname{ord}}\, 0 = +\infty$.

For a finite set $F$ of scalar operators (e.g., for a vector, a matrix, a row of a matrix) we define $\overline{\operatorname{ord}} F$ as the maximum of the upper orders of its components, $\underline{\operatorname{ord}} F$ as the minimum of the lower orders of the components, and $\operatorname{ord} F$ as $\overline{\operatorname{ord}} F - \underline{\operatorname{ord}} F$.
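For instance (an illustrative example, not from the original slides): for the scalar operator
$$f = x\,\sigma^{2} + 3\,\sigma^{-1} \in \mathbb{Q}(x)[\sigma, \sigma^{-1}]$$
one has $\overline{\operatorname{ord}} f = 2$, $\underline{\operatorname{ord}} f = -1$, and $\operatorname{ord} f = 2 - (-1) = 3$.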

A difference operator $n \times n$-matrix is a matrix belonging to $\operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$. In the sequel we will associate with a difference operator matrix some matrices belonging to $\operatorname{Mat}_n(K)$. To avoid terminological confusion we will call operator matrices simply operators; the case of scalar operators will be discussed separately.

An operator is of full rank if its rows are linearly independent over $K[\sigma, \sigma^{-1}]$.

Any non-zero operator $L \in \operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$ can be represented as
$$L = A_l \sigma^l + A_{l-1} \sigma^{l-1} + \dots + A_t \sigma^t, \qquad (1)$$
where $A_l, A_{l-1}, \dots, A_t \in \operatorname{Mat}_n(K)$ and the matrices $A_l$, $A_t$ (the leading and the trailing matrices of $L$) are non-zero.
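As a small illustration of the representation (1) (not part of the original slides), for $K = \mathbb{Q}(x)$ the operator
$$L = \begin{pmatrix} \sigma & 1 \\ 0 & x\,\sigma^{-1} \end{pmatrix}
    = \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}}_{A_1}\sigma
    + \underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}_{A_0}
    + \underbrace{\begin{pmatrix} 0 & 0 \\ 0 & x \end{pmatrix}}_{A_{-1}}\sigma^{-1}$$
has $l = 1$, $t = -1$, with leading matrix $A_1$ and trailing matrix $A_{-1}$, both non-zero.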

An operator $L$ is invertible in $\operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$, and $M \in \operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$ is its inverse, if $LM = ML = I_n$, where $I_n$ is the unit $n \times n$-matrix. We write $L^{-1}$ for the matrix $M$. Invertible operators are also called unimodular operators.
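A simple illustration (not from the original slides): the operator
$$L = \begin{pmatrix} \sigma & 0 \\ x & 1 \end{pmatrix}$$
is unimodular, since
$$M = \begin{pmatrix} \sigma^{-1} & 0 \\ -x\,\sigma^{-1} & 1 \end{pmatrix}$$
satisfies $LM = ML = I_2$.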

Example 1 ($K = \mathbb{Q}(x)$, $\sigma f(x) = f(x+1)$): display (3) of the talk gives an explicit $2 \times 2$ unimodular operator over $\mathbb{Q}(x)[\sigma, \sigma^{-1}]$ together with its inverse.

In the sequel, we will give a complexity analysis of the unimodularity testing for an operator matrix and of constructing the inverse matrix if it exists.

Let $K$ be a difference field with an automorphism $\sigma$. We will say that a ring $\Lambda$ which is a difference extension of the field $K$ is an adequate difference extension of $K$ if $\operatorname{Const}(\Lambda)$ is a field and, for an arbitrary system $\sigma y = A y$, $y = (y_1, \dots, y_n)^T$, having a non-singular matrix $A \in \operatorname{Mat}_n(K)$, the dimension of the space of solutions belonging to $\Lambda^n$, regarded as a linear space over $\operatorname{Const}(\Lambda)$, is equal to $n$.

For an arbitrary $\Lambda$ the equality $\operatorname{Const}(\Lambda) = \operatorname{Const}(K)$ is not guaranteed; in the general case $\operatorname{Const}(K)$ is a proper subfield of $\operatorname{Const}(\Lambda)$.

Below, we denote by $\Lambda$ a fixed adequate difference extension of the difference field $K$.

Besides the complexity measured as the number of arithmetic operations in the worst case (the arithmetic complexity), one can consider the number of shifts, i.e., of applications of $\sigma^{\pm 1}$ to elements of $K$, in the worst case (the shift complexity). Thus we will consider two complexities. This is similar to the situation with sorting algorithms, where one considers separately the complexity as the number of comparisons and, respectively, the number of swaps in the worst case. We also introduce the full algebraic complexity as the total number of all operations in the worst case.
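For instance (an illustrative count under the above reading of "shift", not from the original slides): multiplying two scalar operators of order $d$ with all coefficients non-zero,
$$\Bigl(\sum_{i=0}^{d} a_i \sigma^i\Bigr)\Bigl(\sum_{j=0}^{d} b_j \sigma^j\Bigr) = \sum_{i,j} a_i\,\sigma^i(b_j)\,\sigma^{i+j},$$
requires $O(d^2)$ shifts (to compute the elements $\sigma^i(b_j)$) and $O(d^2)$ arithmetic operations (the products $a_i\,\sigma^i(b_j)$ and the additions collecting like powers of $\sigma$), so the two measures are counted independently.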

The set of unimodular $n \times n$-operators will be denoted by $\Upsilon_n$. We use the notation $V_L$ for the space of solutions of the system $L(y) = 0$ belonging to $\Lambda^n$. We will also refer to $V_L$ as the space of solutions of the operator $L$.

Proposition 1. Let an operator $L \in \operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$ be of full rank. Then $L \in \Upsilon_n \iff V_L = 0$.
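As a quick sanity check (an illustration, not from the original slides): for $n = 1$ and $L = (\sigma - 1)$, the solution space $V_L$ consists of the constants of $\Lambda$ and is therefore non-zero; accordingly, $L \notin \Upsilon_1$, in agreement with the fact that $\sigma - 1$ has no inverse in $K[\sigma, \sigma^{-1}]$.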

New algorithm

Theorem 1.
(i) For an operator $L \in \operatorname{Mat}_n(K[\sigma, \sigma^{-1}])$ of full rank there exists $U \in \Upsilon_n$ such that the operator $\tilde{L} = UL$ has the following properties:
(a) the lower orders of all rows of $\tilde{L}$ are equal to zero;
(b) $\dim V_L$ ($= \dim V_{\tilde{L}}$, since $U$ is unimodular) is equal to the sum of the upper orders of the rows of $\tilde{L}$;
(c) $\operatorname{ord} U = O(nd)$, where $d = \operatorname{ord} L$.
(ii) There exists an algorithm which checks whether a given operator $L$ is of full rank and, if it is, constructs an operator $U$ as described in (i). The arithmetic complexity of this algorithm is $O(n^4 d^2)$, and its shift complexity is $O(n^2 d^2)$.
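A small illustration of properties (a) and (b) (not from the original slides): for
$$\tilde{L} = \begin{pmatrix} \sigma - 1 & 0 \\ 0 & 1 \end{pmatrix},$$
both rows have lower order 0, the upper orders of the rows are 1 and 0, and indeed $\dim V_{\tilde{L}} = 1$: the solutions of $\tilde{L}(y) = 0$ are the vectors $(c, 0)^T$ with $c \in \operatorname{Const}(\Lambda)$.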

As we know, $L \in \Upsilon_n$ iff $\dim V_L = 0$ (Proposition 1) and, by (a), (b), iff $\tilde{L} \in \operatorname{Mat}_n(K)$. To check whether an operator $L$ is of full rank and $\tilde{L} \in \operatorname{Mat}_n(K)$ (without constructing the operators $U$ and $\tilde{L}$) we have an algorithm whose arithmetic complexity is $O(n^3 d^2)$ and whose shift complexity is $O(n^2 d^2)$. This gives an algorithm for unimodularity checking.

If $U$ and $\tilde{L}$ are constructed by the algorithm mentioned in Theorem 1 (ii), and if $\tilde{L} \in \operatorname{Mat}_n(K)$, then we compute $L^{-1} = \tilde{L}^{-1} U$. By (c), computing the product $\tilde{L}^{-1} U$ does not change the estimates $O(n^4 d^2)$, $O(n^2 d^2)$ for the arithmetic and shift complexities.

To construct the operators $U$, $\tilde{L}$ an extended version of EG-eliminations is used. Our previous algorithm was organized similarly, but the operators $U$, $\tilde{L}$ were constructed by other algorithms having a larger shift complexity.
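To make the last step concrete, here is a minimal Python/SymPy sketch (an illustration only; the operator representation, the names compose_inverse, Lt, U, and the toy data are hypothetical and not taken from the talk). It shows that once $\tilde{L} \in \operatorname{Mat}_n(K)$ and $U = \sum_i U_i \sigma^i$ are available, forming $L^{-1} = \tilde{L}^{-1} U$ amounts to one matrix inversion over $K$ and left-multiplications of the coefficient matrices $U_i$; no shifts of field elements are needed, since $\tilde{L}^{-1}$ is free of $\sigma$.

    # A difference operator matrix is represented as a dict {i: M_i}
    # standing for sum_i M_i * sigma^i, with each M_i a SymPy matrix over K = Q(x).
    import sympy as sp

    x = sp.symbols('x')

    def compose_inverse(Lt, U):
        # Lt: an invertible matrix over K (the operator L~, free of sigma).
        # U:  dict {i: matrix} for the unimodular operator sum_i U_i sigma^i.
        # Returns the dict for L^{-1} = Lt^{-1} * U; only arithmetic over K is used.
        Lt_inv = Lt.inv()
        return {i: sp.simplify(Lt_inv * Ui) for i, Ui in U.items()}

    # toy data, purely for illustration
    Lt = sp.Matrix([[x, 0], [0, 1]])
    U = {0: sp.eye(2), 1: sp.Matrix([[0, x], [0, 0]])}
    L_inv = compose_inverse(Lt, U)   # {0: Lt^{-1}, 1: Lt^{-1} * U_1}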

In addition, there is a question related to the complexity of matrix inversion in the case when the matrix entries are scalar differential operators over a differential field $K$: it is not clear whether this problem can be reduced to the matrix multiplication problem, as it can be in the commutative case.

One more question: does there exist an algorithm for inverting such $n \times n$-matrices with complexity $O(n^a d^b)$, where $d$ is the maximal order of the differential operators appearing as the matrix entries, and $a < 3$?

I think that it is possible to prove, in the usual way, that matrix multiplication can be reduced to the problem of matrix inversion (I have in mind difference matrices). However, it is not so easy to prove that the problem of matrix inversion can be reduced to the problem of matrix multiplication (I even think that this is not true, but I cannot prove it).