An Inverse Problem for the Matrix Schrödinger Equation


Journal of Mathematical Analysis and Applications 267, 564-575 (2002)
doi:10.1006/jmaa.2001.7792, available online at http://www.idealibrary.com

Robert Carlson
Department of Mathematics, University of Colorado, Colorado Springs, Colorado 80933
E-mail: carlson@math.uccs.edu

Submitted by Fritz Gesztesy
Received September 19, 2001

By modifying and generalizing some old techniques of N. Levinson, a uniqueness theorem is established for an inverse problem related to periodic and Sturm-Liouville boundary value problems for the matrix Schrödinger equation. © 2002 Elsevier Science (USA)

Key Words: inverse Sturm-Liouville problems; inverse problems with periodic potentials.

1. INTRODUCTION

Classical inverse eigenvalue problems ask for the determination of the real-valued potential $q(x)$ from the eigenvalues of one or more boundary value problems of the form

$$-y'' + q(x)y = \lambda y, \qquad ay(0) + by'(0) = 0, \quad cy(1) + dy'(1) = 0, \tag{1}$$

or, alternatively, with the periodic boundary conditions $y(0) = y(1)$, $y'(0) = y'(1)$. These inverse eigenvalue problems found unanticipated application in the analysis of the Korteweg-de Vries equation (KdV) and related soliton equations, particularly in the periodic case. Some references for inverse eigenvalue problems are [10, 13, 14].

The Sturm-Liouville problems (1), their periodic cousins, and KdV all have matrix versions. The eigenvalue equation becomes

$$-Y'' + Q(x)Y = \lambda Y, \qquad Y \in \mathbb{C}^K, \quad 0 \le x \le 1, \tag{2}$$

where $Q(x)$ is a $K \times K$ self-adjoint matrix. Although some aspects of the spectral theory related to (2) have been studied (see [1, 3-7, 15]), appropriate generalizations of the inverse eigenvalue problems have been missing. This work proposes such a generalization and provides a uniqueness theorem for the inverse problem.

Define $K \times K$ matrix solutions $C(x,\lambda)$ and $S(x,\lambda)$ of (2) by specifying the matrix version of the usual initial conditions

$$C(0,\lambda) = I_K, \quad C'(0,\lambda) = 0_K, \qquad S(0,\lambda) = 0_K, \quad S'(0,\lambda) = I_K.$$

Our spectral data will consist of the matrix functions $C(1,\lambda)$ and $S(1,\lambda)$. Recall that when (2) is viewed as a periodic problem on the line with potential of period 1, the Floquet matrix representing translation by 1 on the vector space of solutions of (2) is

$$\begin{pmatrix} C(1,\lambda) & S(1,\lambda) \\ C'(1,\lambda) & S'(1,\lambda) \end{pmatrix}, \tag{3}$$

so these data arise quite naturally in the question of whether the Floquet matrix determines the potential.

The functions also arise naturally when considering the Sturm-Liouville boundary conditions

$$Y(0,\lambda) = 0_K, \qquad Y(1,\lambda) = 0_K, \tag{4}$$

and

$$Y'(0,\lambda) = 0_K, \qquad Y(1,\lambda) = 0_K. \tag{5}$$

Equation (2) with boundary conditions (4) (respectively (5)) has an eigenvalue at $\lambda$ if and only if $\det S(1,\lambda) = 0$ (respectively $\det C(1,\lambda) = 0$). In the scalar case there is a tighter linkage between the eigenvalues and the entire functions $C(1,\lambda)$ and $S(1,\lambda)$, since the Hadamard factorization theorem allows us to recover the functions, up to a scalar multiple, from their roots.

The main result is the following uniqueness theorem.

Theorem 1.1. If $Q_1(x)$ and $Q_2(x)$ are integrable self-adjoint $K \times K$ matrix functions defined on $[0,1]$, and if $C(1,\lambda,Q_1) = C(1,\lambda,Q_2)$ and $S(1,\lambda,Q_1) = S(1,\lambda,Q_2)$ for all $\lambda$, then $Q_1(x) = Q_2(x)$ almost everywhere.
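As an illustration of the objects just defined (not part of the original paper), the short numerical sketch below integrates (2) for a sample $2 \times 2$ self-adjoint potential, assembles the Floquet matrix (3) from $C(1,\lambda)$, $S(1,\lambda)$ and their derivatives, and evaluates $\det S(1,\lambda)$, whose zeros are the eigenvalues of the Dirichlet problem (4). The potential, tolerances, and the value of $\lambda$ are arbitrary choices made for the example.

```python
# Hedged numerical sketch: the spectral data C(1, lam), S(1, lam) of Eq. (2)
# for a sample self-adjoint potential Q(x); none of these choices come from
# the paper itself.
import numpy as np
from scipy.integrate import solve_ivp

K = 2

def Q(x):
    # sample integrable self-adjoint (real symmetric) potential
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

def matrix_solution(lam, Y0, Yp0):
    """Integrate -Y'' + Q(x) Y = lam Y on [0, 1]; return Y(1), Y'(1)."""
    def rhs(x, u):
        Y = u[:K * K].reshape(K, K)
        Yp = u[K * K:].reshape(K, K)
        Ypp = (Q(x) - lam * np.eye(K)) @ Y   # Y'' = (Q(x) - lam I) Y
        return np.concatenate([Yp.ravel(), Ypp.ravel()])
    u0 = np.concatenate([Y0.ravel(), Yp0.ravel()])
    sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-10, atol=1e-12)
    return sol.y[:K * K, -1].reshape(K, K), sol.y[K * K:, -1].reshape(K, K)

lam = 7.0
C1, C1p = matrix_solution(lam, np.eye(K), np.zeros((K, K)))   # C(1), C'(1)
S1, S1p = matrix_solution(lam, np.zeros((K, K)), np.eye(K))   # S(1), S'(1)

floquet = np.block([[C1, S1], [C1p, S1p]])                    # the matrix (3)
print("Floquet matrix:\n", floquet)
print("det S(1, lam) =", np.linalg.det(S1))  # zero iff lam is a Dirichlet eigenvalue
```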

The main ideas of the proof are borrowed from Levinson [9], who found an alternative method for establishing Borg's result [2] that the spectra of certain pairs of scalar Sturm-Liouville problems were enough to uniquely determine the real integrable potential. The techniques involve eigenfunction expansions computed using contour integration of the resolvent. Estimates for solutions of (2) allow the development of unusual expansion formulas when the hypotheses of Theorem 1.1 are satisfied. A comparison of these expansion formulas leads to the conclusion that $Q_1 = Q_2$.

Rather different techniques have been used in the scalar case to recover the potential $q(x)$ from a column of the Floquet matrix [11]. These techniques have been applied to first-order systems of equations in [12].

2. ESTIMATES AND IDENTITIES

We begin with some notational conventions. The function $Q(x)$ is $K \times K$ matrix valued, with integrable complex entries $Q_{jk}(x) \in L^1[0,1]$. For $\lambda \in \mathbb{C}$ let $\omega = \sqrt{\lambda}$, where the square root is chosen continuously for $-\pi < \arg\lambda \le \pi$ and positive for $\lambda > 0$, unless otherwise stated. Denote by $\operatorname{Im}\omega$ the imaginary part of $\omega$. A vector $Y = (y_1, \dots, y_K)^T \in \mathbb{C}^K$ is given the Euclidean norm

$$|Y| = \Bigl[\sum_{k=1}^K |y_k|^2\Bigr]^{1/2},$$

while a $K \times K$ matrix $Q$ is given the operator norm

$$|Q| = \sup_{|Y|=1} |QY|.$$

The $K \times K$ identity and zero matrices are $I_K$ and $0_K$, respectively.

This section extends some results related to (2) from the scalar case to the matrix case. Estimates on the growth of solutions are considered first. Then, after establishing the Wronskian identity, formulas for the Green's function for (2) with Dirichlet boundary conditions are considered.

2.1. Growth of Solutions

Solutions to the initial value problem for (2) may be estimated using techniques familiar from the scalar case. The model equation $-Y'' = \lambda Y$ has a basis of $2K$ solutions which are the columns of the $K \times K$ diagonal matrix-valued functions $\cos(\omega x)\, I_K$ and $\omega^{-1}\sin(\omega x)\, I_K$. By using the variation of parameters formula, a solution of (2) satisfying $Y(0,\lambda) = \alpha$, $Y'(0,\lambda) = \beta$, with $\alpha, \beta \in \mathbb{C}^K$, may be written as a solution of the integral equation

$$Y(x,\lambda) = \cos(\omega x)\,\alpha + \frac{\sin(\omega x)}{\omega}\,\beta + \int_0^x \frac{\sin(\omega(x-t))}{\omega}\, Q(t)\, Y(t,\lambda)\, dt. \tag{6}$$
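The integral equation (6) can be solved by straightforward Picard iteration, which is also how the growth estimates below are obtained. The following self-contained sketch (an illustration added here, with an arbitrary sample potential and data) iterates (6) on a grid and compares the result with a direct numerical integration of (2).

```python
# Hedged sketch: Picard iteration for the integral equation (6), checked
# against direct integration of -Y'' + Q(x) Y = lam Y.  Q, alpha, beta and
# the grid are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

K, lam = 2, 5.0
omega = np.sqrt(lam)
xs = np.linspace(0.0, 1.0, 401)

def Q(x):
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

alpha = np.array([1.0, 0.0])
beta = np.array([0.0, 1.0])

# inhomogeneous term cos(omega x) alpha + (sin(omega x)/omega) beta
Y0 = np.array([np.cos(omega * x) * alpha + np.sin(omega * x) / omega * beta
               for x in xs])

Y = Y0.copy()
for _ in range(20):                              # Picard iterates of (6)
    Ynew = Y0.copy()
    for i, x in enumerate(xs[1:], start=1):
        t = xs[:i + 1]
        kern = np.sin(omega * (x - t)) / omega   # sin(omega (x - t)) / omega
        integrand = np.array([kern[j] * (Q(t[j]) @ Y[j]) for j in range(i + 1)])
        Ynew[i] = Y0[i] + np.trapz(integrand, t, axis=0)
    Y = Ynew

# reference: integrate the differential equation directly
def rhs(x, u):
    return np.concatenate([u[K:], (Q(x) - lam * np.eye(K)) @ u[:K]])

sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([alpha, beta]),
                rtol=1e-10, atol=1e-12)
print("Picard  Y(1) =", Y[-1])
print("Direct  Y(1) =", sol.y[:K, -1])
```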

Begin with the estimates

$$|\sin(\omega x)|,\ |\cos(\omega x)| \le e^{|\operatorname{Im}\omega|\, x}, \qquad |\omega^{-1}\sin(\omega x)| = \Bigl|\int_0^x \cos(\omega t)\, dt\Bigr| \le x\, e^{|\operatorname{Im}\omega|\, x}.$$

In case $\beta = 0$ and $0 \le x \le 1$, the integral equation (6) gives

$$e^{-|\operatorname{Im}\omega|\, x}\, |Y(x,\lambda)| \le |\alpha| + \int_0^x |Q(t)|\, e^{-|\operatorname{Im}\omega|\, t}\, |Y(t,\lambda)|\, dt.$$

By Gronwall's inequality

$$e^{-|\operatorname{Im}\omega|\, x}\, |Y(x,\lambda)| \le |\alpha| \exp\Bigl(\int_0^x |Q(t)|\, dt\Bigr).$$

Thus (6) implies that

$$|Y(x,\lambda) - \cos(\omega x)\,\alpha| \le |\alpha|\, |\omega|^{-1} e^{|\operatorname{Im}\omega|\, x} \int_0^x |Q(t)| \exp\Bigl(\int_0^t |Q(s)|\, ds\Bigr) dt = |\alpha|\, |\omega|^{-1} e^{|\operatorname{Im}\omega|\, x}\Bigl[\exp\Bigl(\int_0^x |Q(t)|\, dt\Bigr) - 1\Bigr].$$

There is a similar inequality for $Y(x,\lambda)$ when instead $\alpha = 0$, and differentiation of (6) leads to inequalities for $Y'$. The following result [9, 14] expresses these inequalities for the matrix functions $C(x,\lambda)$ and $S(x,\lambda)$.

Lemma 2.1. Let

$$C_Q(x) = K^{1/2}\Bigl[\exp\Bigl(\int_0^x |Q(t)|\, dt\Bigr) - 1\Bigr].$$

For $0 \le x \le 1$ the $K \times K$ matrix solutions $C(x,\lambda)$ and $S(x,\lambda)$ of (2) satisfy

$$|C(x,\lambda) - \cos(\omega x)\, I_K| \le |\omega|^{-1} e^{|\operatorname{Im}\omega|\, x}\, C_Q(x),$$
$$|C'(x,\lambda) + \omega\sin(\omega x)\, I_K| \le e^{|\operatorname{Im}\omega|\, x}\, C_Q(x),$$
$$|S(x,\lambda) - \omega^{-1}\sin(\omega x)\, I_K| \le |\omega|^{-2} e^{|\operatorname{Im}\omega|\, x}\, C_Q(x),$$
$$|S'(x,\lambda) - \cos(\omega x)\, I_K| \le |\omega|^{-1} e^{|\operatorname{Im}\omega|\, x}\, C_Q(x).$$
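A quick numerical sanity check of the first inequality of Lemma 2.1 (again an illustration added here, not from the paper): for a sample potential and one complex value of $\lambda$, the sketch below compares $|C(1,\lambda) - \cos(\omega) I_K|$ with the bound $|\omega|^{-1} e^{|\operatorname{Im}\omega|} C_Q(1)$.

```python
# Hedged check of the first inequality of Lemma 2.1 at x = 1 for one sample
# potential and one complex lambda; the choices are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp, quad

K = 2

def Q(x):
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

lam = 30.0 + 10.0j
omega = np.sqrt(lam)            # branch with -pi < arg(lam) <= pi

def rhs(x, u):
    Y = u[:K * K].reshape(K, K)
    Yp = u[K * K:].reshape(K, K)
    return np.concatenate([Yp.ravel(), ((Q(x) - lam * np.eye(K)) @ Y).ravel()])

u0 = np.concatenate([np.eye(K).ravel(), np.zeros(K * K)]).astype(complex)
sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-10, atol=1e-12)
C1 = sol.y[:K * K, -1].reshape(K, K)                      # C(1, lam)

lhs = np.linalg.norm(C1 - np.cos(omega) * np.eye(K), 2)   # operator norm

intQ = quad(lambda t: np.linalg.norm(Q(t), 2), 0.0, 1.0)[0]
C_Q = np.sqrt(K) * (np.exp(intQ) - 1.0)                   # the constant C_Q(1)
bound = abs(omega) ** -1 * np.exp(abs(omega.imag)) * C_Q

print("|C(1,lam) - cos(omega) I| =", lhs)
print("Lemma 2.1 bound           =", bound)               # lhs should not exceed this
```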

2.2. Wronskian Identities

The well-known Wronskian identity for pairs of solutions to (2) in the scalar case has an extension to the matrix case. Given a $K \times K$ matrix function $Q(x)$, suppose that $Y_1$ and $Y_2$ are $K \times K$ matrix solutions of the equations

$$-Y_1'' + Q(x)Y_1 = \lambda Y_1, \qquad -Y_2'' + Y_2 Q(x) = \lambda Y_2.$$

Define the Wronskian to be

$$W(Y_1, Y_2) = Y_2 Y_1' - Y_2' Y_1.$$

Differentiation yields

$$W'(Y_1, Y_2) = Y_2 Y_1'' - Y_2'' Y_1 = Y_2 (Q - \lambda) Y_1 - Y_2 (Q - \lambda) Y_1 = 0,$$

and so $W(Y_1, Y_2)$ is a constant $K \times K$ matrix.

Lemma 2.2. When $Q^*(x) = Q(x)$ the following matrix identity holds, for all $0 \le x \le 1$ and all $\lambda$:

$$\begin{pmatrix} S^{*\prime}(x,\lambda) & -S^{*}(x,\lambda) \\ -C^{*\prime}(x,\lambda) & C^{*}(x,\lambda) \end{pmatrix}\begin{pmatrix} C(x,\lambda) & S(x,\lambda) \\ C'(x,\lambda) & S'(x,\lambda) \end{pmatrix} = \begin{pmatrix} I_K & 0_K \\ 0_K & I_K \end{pmatrix}.$$

Proof. For this application the role of $Y_1$ will be played by the $K \times K$ matrix functions $C(x,\lambda)$ and $S(x,\lambda)$. With $Q^*(x) = Q(x)$ their adjoints $C^*(x,\lambda)$ and $S^*(x,\lambda)$ satisfy $-Y_1^{*\prime\prime} + Y_1^* Q(x) = \lambda Y_1^*$, so the functions $C^*(x,\lambda)$ and $S^*(x,\lambda)$ may be used for $Y_2$. The $K \times K$ block entries of the product in the statement of the lemma are Wronskians, so they are constants equal to their values at $x = 0$.
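The identity of Lemma 2.2 is easy to confirm numerically. The sketch below (illustrative, with a sample real symmetric potential and a real value of $\lambda$) integrates $C$, $S$ and their derivatives to an interior point $x$ and forms the block product, which should return the $2K \times 2K$ identity to integration accuracy.

```python
# Hedged numerical check of the Wronskian identity of Lemma 2.2 at one point
# x, for an illustrative real symmetric Q and real lambda.
import numpy as np
from scipy.integrate import solve_ivp

K, lam, x_eval = 2, 11.0, 0.7

def Q(x):
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

def solve_pair(Y0, Yp0):
    """Return Y(x_eval), Y'(x_eval) for -Y'' + Q Y = lam Y with matrix data."""
    def rhs(x, u):
        Y, Yp = u[:K * K].reshape(K, K), u[K * K:].reshape(K, K)
        return np.concatenate([Yp.ravel(), ((Q(x) - lam * np.eye(K)) @ Y).ravel()])
    sol = solve_ivp(rhs, (0.0, x_eval),
                    np.concatenate([Y0.ravel(), Yp0.ravel()]),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:K * K, -1].reshape(K, K), sol.y[K * K:, -1].reshape(K, K)

C, Cp = solve_pair(np.eye(K), np.zeros((K, K)))
S, Sp = solve_pair(np.zeros((K, K)), np.eye(K))

left = np.block([[Sp.conj().T, -S.conj().T],
                 [-Cp.conj().T, C.conj().T]])
right = np.block([[C, S], [Cp, Sp]])
print(np.round(left @ right, 8))   # should print the 4x4 identity
```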

2.3. The Green's Function

The proof of Theorem 1.1 will involve a close examination of the Green's function for the eigenvalue problem (2) with the Dirichlet boundary conditions $Y(0) = 0 = Y(1)$. After casting the equation

$$-Y'' + [Q(x) - \lambda I_K]\, Y = -F(x), \qquad Y(x,\lambda),\ F(x) \in \mathbb{C}^K, \tag{7}$$

as the first-order system

$$\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}' = \begin{pmatrix} 0_K & I_K \\ Q(x) - \lambda I_K & 0_K \end{pmatrix}\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} + \begin{pmatrix} 0_K \\ F(x) \end{pmatrix},$$

variation of parameters gives

$$\begin{pmatrix} Y \\ Y' \end{pmatrix} = \Phi(x,\lambda)\begin{pmatrix} \alpha \\ \beta \end{pmatrix} + \Phi(x,\lambda)\int_0^x \Phi^{-1}(t,\lambda)\begin{pmatrix} 0_K \\ F(t) \end{pmatrix} dt, \qquad \alpha, \beta \in \mathbb{C}^K,$$

where

$$\Phi(x,\lambda) = \begin{pmatrix} C(x,\lambda) & S(x,\lambda) \\ C'(x,\lambda) & S'(x,\lambda) \end{pmatrix}$$

and, by Lemma 2.2,

$$\Phi^{-1}(x,\lambda) = \begin{pmatrix} S^{*\prime}(x,\lambda) & -S^{*}(x,\lambda) \\ -C^{*\prime}(x,\lambda) & C^{*}(x,\lambda) \end{pmatrix}.$$

This leads to the representation of the solution $Y(x,\lambda)$ of (7) satisfying $Y(0) = 0 = Y(1)$ by means of the Green's operator

$$Y(x,\lambda) = \int_0^1 R(x,t,\lambda)\, F(t)\, dt. \tag{8}$$

Using the abbreviations $C(\lambda) = C(1,\lambda)$, $S(\lambda) = S(1,\lambda)$, the Green's function has the form

$$R(x,t,\lambda) = \begin{cases} S(x,\lambda)S^{-1}(\lambda)C(\lambda)S^*(t,\lambda) - C(x,\lambda)S^*(t,\lambda), & t \le x, \\ S(x,\lambda)S^{-1}(\lambda)C(\lambda)S^*(t,\lambda) - S(x,\lambda)C^*(t,\lambda), & t \ge x. \end{cases} \tag{9}$$

The Green's function may be rewritten as

$$R(x,t,\lambda) = \begin{cases} [S(x,\lambda)S^{-1}(\lambda) - C(x,\lambda)C^{-1}(\lambda)]\, C(\lambda)S^*(t,\lambda), & t \le x, \\ S(x,\lambda)S^{-1}(\lambda)[C(\lambda)S^*(t,\lambda) - S(\lambda)C^*(t,\lambda)], & t \ge x. \end{cases}$$

The function $Z_1(x,\lambda) = S(x,\lambda)S^{-1}(\lambda) - C(x,\lambda)C^{-1}(\lambda)$ is a solution of (2) satisfying $Z_1(1,\lambda) = 0_K$. The function $Z_2(t,\lambda) = C(\lambda)S^*(t,\lambda) - S(\lambda)C^*(t,\lambda)$ is a solution of $-Y'' + YQ = \lambda Y$. Reversing the order of the matrix factors in Lemma 2.2 leads to the identity $Z_2(1,\lambda) = 0_K$. Introduce the matrix solutions $U(x,\lambda)$ and $V(x,\lambda)$ of (2) which satisfy

$$U(1,\lambda) = I_K, \quad U'(1,\lambda) = 0_K, \qquad V(1,\lambda) = 0_K, \quad V'(1,\lambda) = I_K.$$

Making use of these functions, there are then matrix functions $E_0(\lambda)$ and $E_1(\lambda)$ such that

$$R(x,t,\lambda) = \begin{cases} V(x,\lambda)\, E_0(\lambda)\, S^*(t,\lambda), & t \le x, \\ S(x,\lambda)\, E_1(\lambda)\, V^*(t,\lambda), & t \ge x. \end{cases}$$

The form of the Green's function (9) gives

$$E_0(\lambda) = S'(1,\lambda)S^{-1}(\lambda)C(\lambda) - C'(1,\lambda), \qquad E_1(\lambda) = S^{-1}(\lambda)C(\lambda)S^{*\prime}(1,\lambda) - C^{*\prime}(1,\lambda).$$

From Lemma 2.2 we obtain $V(x,\lambda) = S(x,\lambda)C^*(\lambda) - C(x,\lambda)S^*(\lambda)$, and $E_1(\lambda) = S^{-1}(\lambda)$. Multiplying $E_0(\lambda)$ by $S^*(\lambda)$ and using Lemma 2.2 again gives $E_0(\lambda) = S^{*-1}(\lambda)$. Thus

$$R(x,t,\lambda) = \begin{cases} V(x,\lambda)\, S^{*-1}(\lambda)\, S^*(t,\lambda), & t \le x, \\ S(x,\lambda)\, S^{-1}(\lambda)\, V^*(t,\lambda), & t \ge x. \end{cases} \tag{10}$$

The estimates of Lemma 2.1 show that $S(\lambda)$ is invertible except for the discrete set of eigenvalues $\lambda_n$ for (2) subject to $Y(0) = 0 = Y(1)$. From the formula (10) we may conclude that the Green's operator is a self-adjoint compact (Hilbert-Schmidt) operator on $\oplus^K L^2[0,1]$ as long as $\lambda$ is not an eigenvalue and, in fact, is the resolvent for the self-adjoint operator $-D^2 + Q$. More details concerning such operators may be found in [8, pp. 343-346].

3. PROOF OF THE THEOREM

Since the operator $L = -D^2 + Q$ with Dirichlet boundary conditions is self-adjoint with compact resolvent on $\oplus^K L^2[0,1]$, the function $\int_0^1 R(x,t,\lambda)F(t)\, dt$ is meromorphic in the whole plane. The poles are at the eigenvalues $\lambda_n$, and an expansion in eigenfunctions of $L$ may be developed by expressing a contour integral of the resolvent in terms of residues. In particular, since the Dirichlet eigenvalues lie close to the values $n^2\pi^2$,

$$F = \lim_{n\to\infty} \frac{-i}{2\pi}\oint_{\gamma_n}\int_0^1 R(x,t,\lambda)\, F(t)\, dt\, d\lambda, \tag{11}$$

where the convergence is in $\oplus^K L^2[0,1]$ and $\gamma_n$ is a simple counterclockwise circular contour centered at $0$ with radius $(n+1/2)^2\pi^2$.

At this point it is desirable to use subscripts to distinguish between solutions of (2) with coefficients $Q_1(x)$ and $Q_2(x)$. The Green's function already constructed is assumed to be associated with $Q_1(x)$. Define a second kernel function

$$\tilde R(x,t,\lambda) = \begin{cases} S_1(x,\lambda)S^{-1}(\lambda)C(\lambda)S_2^*(t,\lambda) - C_1(x,\lambda)S_2^*(t,\lambda), & t \le x, \\ S_1(x,\lambda)S^{-1}(\lambda)C(\lambda)S_2^*(t,\lambda) - S_1(x,\lambda)C_2^*(t,\lambda), & t \ge x. \end{cases} \tag{12}$$

As in the case of the Green's function, this kernel function may be written as

$$\tilde R(x,t,\lambda) = \begin{cases} V_1(x,\lambda)\, S^{*-1}(\lambda)\, S_2^*(t,\lambda), & t \le x, \\ S_1(x,\lambda)\, S^{-1}(\lambda)\, V_2^*(t,\lambda), & t \ge x. \end{cases} \tag{13}$$

As Levinson noted [9], the similarity of the growth estimates for the functions $S_1(x,\lambda)$ and $S_2(x,\lambda)$ leads to alternate forms of the eigenfunction expansion.
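Since the Dirichlet eigenvalues $\lambda_n$ are characterized by $\det S(1,\lambda_n) = 0$ and, as just noted, lie close to $n^2\pi^2$, they can be located numerically by scanning $\det S(1,\lambda)$ for sign changes and refining. The sketch below does this for the same illustrative potential used in the earlier sketches; the grid, brackets, and tolerances are arbitrary choices and not part of the paper.

```python
# Hedged sketch: locate the lowest Dirichlet eigenvalues of -D^2 + Q as zeros
# of det S(1, lam) and compare them with the unperturbed values n^2 pi^2.
# The potential and the search window are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

K = 2

def Q(x):
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

def det_S1(lam):
    """det S(1, lam) for the Dirichlet problem (4)."""
    def rhs(x, u):
        Y, Yp = u[:K * K].reshape(K, K), u[K * K:].reshape(K, K)
        return np.concatenate([Yp.ravel(), ((Q(x) - lam * np.eye(K)) @ Y).ravel()])
    u0 = np.concatenate([np.zeros(K * K), np.eye(K).ravel()])   # S(0)=0, S'(0)=I
    sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-10, atol=1e-12)
    return np.linalg.det(sol.y[:K * K, -1].reshape(K, K))

# scan for sign changes, then refine each bracket with brentq
grid = np.linspace(1.0, 100.0, 400)
vals = np.array([det_S1(g) for g in grid])
eigs = [brentq(det_S1, grid[i], grid[i + 1])
        for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

for lam_n in eigs:
    n = round(np.sqrt(lam_n) / np.pi)
    print(f"Dirichlet eigenvalue {lam_n:10.5f}   nearest n^2 pi^2 = {n**2 * np.pi**2:10.5f}")
```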

Lemma 3.1. Suppose that $F \in \oplus^K L^2[0,1]$. With the hypotheses of Theorem 1.1,

$$F = \lim_{n\to\infty}\frac{-i}{2\pi}\oint_{\gamma_n}\int_0^1 \tilde R(x,t,\lambda)\, F(t)\, dt\, d\lambda, \tag{14}$$

with convergence in $\oplus^K L^2[0,1]$.

Proof. The difference of the kernels has the form

$$(R - \tilde R)(x,t,\lambda) = V_1(x,\lambda)\, S^{*-1}(\lambda)\bigl[S_1^*(t,\lambda) - S_2^*(t,\lambda)\bigr], \qquad t \le x,$$

and

$$(R - \tilde R)(x,t,\lambda) = S_1(x,\lambda)\, S^{-1}(\lambda)\bigl[V_1^*(t,\lambda) - V_2^*(t,\lambda)\bigr], \qquad t \ge x.$$

There is a corresponding decomposition

$$\int_0^1 (R - \tilde R)(x,t,\lambda)F(t)\, dt = I_1(x,\lambda) + I_2(x,\lambda),$$

with

$$I_1(x,\lambda) = \int_0^x (R - \tilde R)(x,t,\lambda)F(t)\, dt, \qquad I_2(x,\lambda) = \int_x^1 (R - \tilde R)(x,t,\lambda)F(t)\, dt.$$

We will concentrate on $I_1$; the estimates for $I_2$ are similar. Lemma 2.1 gives

$$|S_1^*(t,\lambda) - S_2^*(t,\lambda)| \le C|\omega|^{-2} e^{|\operatorname{Im}\omega|\, t}, \qquad |V_1(x,\lambda)| \le C|\omega|^{-1} e^{|\operatorname{Im}\omega|\,(1-x)}. \tag{15}$$

On the circles $\gamma_n$ we also have [14, p. 27]

$$|\sin\omega| > \exp(|\operatorname{Im}\omega|)/4, \tag{16}$$

so that, for $n$ large enough,

$$|S^{*-1}(\lambda)| \le C\,\frac{|\omega|}{|\sin\omega|}, \qquad \lambda \in \gamma_n.$$

Again, for $n$ large enough, this gives

$$|I_1(x,\lambda)| \le C\,\frac{\exp(|\operatorname{Im}\omega|\,(1-x))}{|\omega|^2\, |\sin\omega|}\int_0^x \exp(|\operatorname{Im}\omega|\, t)\, |F(t)|\, dt, \qquad \lambda \in \gamma_n.$$

The Cauchy-Schwarz inequality gives

$$\int_0^x \exp(|\operatorname{Im}\omega|\, t)\, |F(t)|\, dt \le \|F\|_2\Bigl[\int_0^x \exp(2|\operatorname{Im}\omega|\, t)\, dt\Bigr]^{1/2} = \|F\|_2\Bigl[\frac{\exp(2|\operatorname{Im}\omega|\, x) - 1}{2|\operatorname{Im}\omega|}\Bigr]^{1/2} \le \frac{\exp(|\operatorname{Im}\omega|\, x)\, \|F\|_2}{|\operatorname{Im}\omega|^{1/2}}.$$

When $|\operatorname{Im}\omega|$ is small we will use the alternate estimate

$$\int_0^x \exp(|\operatorname{Im}\omega|\, t)\, |F(t)|\, dt \le \exp(|\operatorname{Im}\omega|\, x)\, \|F\|_2.$$

Using the polar representation $\lambda = r\exp(i\theta)$ for $-\pi < \theta \le \pi$ on the circle $\gamma_n$, we have $|\operatorname{Im}\omega| = (n+1/2)\pi\, |\sin(\theta/2)|$. If $|\sin(\theta/2)| \ge n^{-1/2}$, then $|\operatorname{Im}\omega| \ge n^{1/2}$. Thus on $\gamma_n$ we have

$$\int_0^x \exp(|\operatorname{Im}\omega|\, t)\, |F(t)|\, dt \le \begin{cases} \|F\|_2\, n^{-1/4}\exp(|\operatorname{Im}\omega|\, x), & |\sin(\theta/2)| \ge n^{-1/2}, \\ \|F\|_2\, \exp(|\operatorname{Im}\omega|\, x), & |\sin(\theta/2)| \le n^{-1/2}. \end{cases}$$

Combining these estimates with (16) shows that, for $n$ sufficiently large,

$$|I_1(x,\lambda)| \le \begin{cases} C\,(n+1/2)^{-2}\, n^{-1/4}\, \|F\|_2, & |\sin(\theta/2)| \ge n^{-1/2}, \\ C\,(n+1/2)^{-2}\, \|F\|_2, & |\sin(\theta/2)| \le n^{-1/2}, \end{cases} \qquad \lambda \in \gamma_n.$$

Since the radius of $\gamma_n$ is $(n+1/2)^2\pi^2$,

$$\lim_{n\to\infty}\frac{-i}{2\pi}\oint_{\gamma_n} I_1(x,\lambda)\, d\lambda = 0.$$

Together with similar estimates for $I_2(x,\lambda)$, this shows that

$$\lim_{n\to\infty}\frac{-i}{2\pi}\oint_{\gamma_n}\int_0^1 (R - \tilde R)(x,t,\lambda)\, F(t)\, dt\, d\lambda = 0,$$

completing the proof.

Since the matrix functions $S(\lambda)$ and $C(\lambda)$ have entire entries and $\det S(\lambda)$ is not identically $0$, the matrix function $S^{-1}(\lambda)C(\lambda)$ has meromorphic entries. We observed earlier that poles can only occur at the points $\lambda_n$, the Dirichlet eigenvalues.

Lemma 3.2. The entries of $S^{-1}(\lambda)C(\lambda)$ have poles of order at most 1 at each $\lambda_n$.

Proof. Pick an orthonormal basis $\psi_{n,k}$ of eigenfunctions with eigenvalue $\lambda_n$. Since $R(x,t,\lambda)$ is the kernel of a compact resolvent on $\oplus^K L^2[0,1]$, we may write

$$R(x,t,\lambda) = \sum_n \sum_k \frac{\psi_{n,k}(x)\,\psi_{n,k}^*(t)}{\lambda - \lambda_n}, \tag{17}$$

the series converging strongly as an operator on $\oplus^K L^2[0,1]$. If $F$ and $G$ are $K \times K$ matrix functions whose columns are in $\oplus^K L^2[0,1]$, then denote by $M_n(F,G)$ the $K \times K$ matrix

$$M_n(F,G) = \sum_k \int_0^1\!\!\int_0^1 F^*(x)\,\psi_{n,k}(x)\,\psi_{n,k}^*(t)\,G(t)\, dt\, dx.$$

The functions $C(x,\lambda)S^*(t,\lambda)$ and $S(x,\lambda)C^*(t,\lambda)$ appearing in (9) are entire, so they do not contribute to the residue of the resolvent at $\lambda_n$ in the expansion (17). Thus

$$M_n(F,G) = \lim_{\lambda\to\lambda_n}(\lambda - \lambda_n)\int_0^1\!\!\int_0^1 F^*(x)\, R(x,t,\lambda)\, G(t)\, dt\, dx = \lim_{\lambda\to\lambda_n}(\lambda - \lambda_n)\int_0^1\!\!\int_0^1 F^*(x)\, S_1(x,\lambda)\, S^{-1}(\lambda)C(\lambda)\, S_1^*(t,\lambda)\, G(t)\, dt\, dx.$$

For $\epsilon > 0$ take

$$F(x) = G(x) = \begin{cases} I_K, & 0 \le x \le \epsilon, \\ 0_K, & \text{otherwise.} \end{cases}$$

Since $S_1'(0,\lambda) = I_K$ we have $S_1(x,\lambda) = I_K\, x + o(x)$ uniformly for $\lambda$ in a neighborhood of $\lambda_n$. With these choices for $F$ and $G$, we find that

$$M_n(F,G) = \lim_{\lambda\to\lambda_n}(\lambda - \lambda_n)\int_0^\epsilon\!\!\int_0^\epsilon [x I_K + o(x)]\, S^{-1}(\lambda)C(\lambda)\, [t I_K + o(t)]\, dt\, dx = [\epsilon^2/2]^2\,[I_K + o(1)]\lim_{\lambda\to\lambda_n}(\lambda - \lambda_n)\, S^{-1}(\lambda)C(\lambda)$$

as $\epsilon \to 0^+$. Since $[\epsilon^2/2]^2\,[I_K + o(1)]$ is invertible, $\lim_{\lambda\to\lambda_n}(\lambda - \lambda_n)\, S^{-1}(\lambda)C(\lambda)$ exists.
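Lemma 3.2 can also be observed numerically: near a simple Dirichlet eigenvalue $\lambda_n$ the matrix $(\lambda - \lambda_n)S^{-1}(\lambda)C(\lambda)$ should stay bounded as $\lambda \to \lambda_n$. The sketch below is an illustration only; it reuses the sample potential from the earlier sketches and assumes the scanned interval contains at least one simple eigenvalue.

```python
# Hedged numerical illustration of Lemma 3.2: (lam - lam_n) S^{-1}(lam) C(lam)
# remains bounded as lam -> lam_n at a simple Dirichlet eigenvalue.
# Potential, bracket, and step sizes are assumptions for the example.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

K = 2

def Q(x):
    return np.array([[1.0, 0.3 * np.cos(np.pi * x)],
                     [0.3 * np.cos(np.pi * x), -0.5]])

def C_and_S(lam):
    """Return C(1, lam) and S(1, lam)."""
    def rhs(x, u):
        Y, Yp = u[:K * K].reshape(K, K), u[K * K:].reshape(K, K)
        return np.concatenate([Yp.ravel(), ((Q(x) - lam * np.eye(K)) @ Y).ravel()])
    out = []
    for Y0, Yp0 in [(np.eye(K), np.zeros((K, K))), (np.zeros((K, K)), np.eye(K))]:
        u0 = np.concatenate([Y0.ravel(), Yp0.ravel()])
        sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-11, atol=1e-13)
        out.append(sol.y[:K * K, -1].reshape(K, K))
    return out[0], out[1]

def det_S1(lam):
    return np.linalg.det(C_and_S(lam)[1])

# locate one simple eigenvalue by scanning for a sign change near pi^2
grid = np.linspace(5.0, 15.0, 200)
vals = [det_S1(g) for g in grid]
i = next(j for j in range(len(grid) - 1) if vals[j] * vals[j + 1] < 0)
lam_n = brentq(det_S1, grid[i], grid[i + 1])
print("Dirichlet eigenvalue lam_n =", lam_n)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    lam = lam_n + h
    C1, S1 = C_and_S(lam)
    res = h * np.linalg.solve(S1, C1)           # (lam - lam_n) S^{-1}(lam) C(lam)
    print(f"h = {h:.0e}   |(lam-lam_n) S^-1 C| = {np.linalg.norm(res, 2):.6f}")
```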

We now consider the evaluation of the contour integrals (11) of the resolvent and (14) of the modified resolvent by residues. The formulas (9) and (12) for the integral kernels show that the residues at $\lambda_n$ agree with those for the kernels

$$S_1(x,\lambda)\, S^{-1}(\lambda)C(\lambda)\, S_1^*(t,\lambda) \qquad \text{and} \qquad S_1(x,\lambda)\, S^{-1}(\lambda)C(\lambda)\, S_2^*(t,\lambda),$$

respectively. By the previous lemma, in a small neighborhood of $\lambda_n$ the matrix function $S^{-1}(\lambda)C(\lambda)$ may be written in the form

$$S^{-1}(\lambda)C(\lambda) = A\,(\lambda - \lambda_n)^{-1} + H(\lambda),$$

where the $K \times K$ matrix $A$ is constant and $H(\lambda)$ is analytic. For the singular part of the resolvent kernel at $\lambda_n$, we have the alternate form

$$\sum_k \frac{\psi_{n,k}(x)\,\psi_{n,k}^*(t)}{\lambda - \lambda_n},$$

where $k$ indexes a set of orthonormal eigenfunctions with eigenvalue $\lambda_n$. If $\sigma_n$ is a small simple counterclockwise contour about $\lambda_n$, the range of the projection operator

$$P_n F = \frac{-i}{2\pi}\oint_{\sigma_n}\int_0^1 R(x,t,\lambda)\, F(t)\, dt\, d\lambda$$

is precisely the span of the eigenfunctions at $\lambda_n$.

Since the functions $S_1^*(t,\lambda)$ and $S_2^*(t,\lambda)$ have, for each $\lambda$, rows which are linearly independent functions of $t$, there are $K \times K$ matrix functions $B_1(t)$, $B_2(t)$ such that

$$\int_0^1 S_1^*(t,\lambda)\, B_1(t)\, dt = I_K = \int_0^1 S_2^*(t,\lambda)\, B_2(t)\, dt.$$

This implies that the range of both operators

$$\frac{-i}{2\pi}\oint_{\sigma_n}\int_0^1 R(x,t,\lambda)\, F(t)\, dt\, d\lambda \qquad \text{and} \qquad \frac{-i}{2\pi}\oint_{\sigma_n}\int_0^1 \tilde R(x,t,\lambda)\, F(t)\, dt\, d\lambda$$

is the span of the columns of $S_1(x,\lambda_n)A$. This span is the same as the span of the eigenfunctions $\psi_{n,k}$ with eigenvalue $\lambda_n$.

Lemma 3.1 now implies that there are two eigenfunction expansions, coming from $R$ and $\tilde R$, of the form

$$F = \sum c_{n,k}\,\psi_{n,k} = \sum d_{n,k}\,\psi_{n,k}.$$

Since the eigenfunctions $\psi_{n,k}$ are a complete orthonormal set, the coefficients must be the same.

The residue calculation shows that the coefficients $d_{n,k}$ may be computed by integration against linear combinations of the rows of $S_2^*(t,\lambda)$. Since the computed coefficients agree for all $L^2$ functions, it follows that every eigenfunction $\psi_{n,k}$ of $-D^2 + Q_1$ is in the span of the solutions $S_2(t,\lambda_n)$ and, in fact, is also an eigenfunction for $-D^2 + Q_2$ (with Dirichlet boundary conditions). The formula (17) expressing the resolvent in terms of eigenfunctions now applies equally well for both $-D^2 + Q_2$ and $-D^2 + Q_1$, so these must be the same operators, and so $Q_1 = Q_2$ almost everywhere. This completes the proof of Theorem 1.1.

REFERENCES

1. Z. Agranovich and V. Marchenko, The Inverse Problem of Scattering Theory, Gordon & Breach, New York, 1963.
2. G. Borg, Eine Umkehrung der Sturm-Liouvilleschen Eigenwertaufgabe, Acta Math. 78 (1946), 1-96.
3. R. Carlson, Large eigenvalues and trace formulas for matrix Sturm-Liouville problems, SIAM J. Math. Anal. 30 (1999), 949-962.
4. R. Carlson, Compactness of Floquet isospectral sets for the matrix Hill's equation, Proc. Amer. Math. Soc. 128, No. 10 (2000), 2933-2941.
5. R. Carlson, Eigenvalue estimates and trace formulas for the matrix Hill's equation, J. Differential Equations 167 (2000), 211-244.
6. B. Després, The Borg theorem for the vectorial Hill's equation, Inverse Problems 11 (1995), 97-121.
7. F. Gesztesy and H. Holden, On trace formulas for Schrödinger-type operators, in "Multiparticle Quantum Scattering with Applications to Nuclear, Atomic and Molecular Physics" (D. G. Truhlar and B. Simon, Eds.), pp. 121-145, Springer-Verlag, Berlin/New York, 1997.
8. T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1995.
9. N. Levinson, The inverse Sturm-Liouville problem, Mat. Tidsskr. B (1949), 25-30.
10. B. Levitan, Inverse Sturm-Liouville Problems, VNU Science Press, Utrecht, 1987.
11. M. Malamud, Similarity of Volterra operators and related questions of the theory of differential equations of fractional order, Trans. Moscow Math. Soc. 55 (1994), 57-122.
12. M. Malamud, Uniqueness questions in inverse problems for systems of differential equations on a finite interval, Trans. Moscow Math. Soc. 60 (1999), 204-262.
13. V. Marchenko, Sturm-Liouville Operators and Applications, Birkhäuser, Basel, 1986.
14. J. Pöschel and E. Trubowitz, Inverse Spectral Theory, Academic Press, Orlando, 1987.
15. M. Wadati and T. Kamijo, On the extension of inverse scattering method, Progr. Theoret. Phys. 52 (1974), 397-414.