A general framework for solving Riemann–Hilbert problems numerically
A general framework for solving Riemann–Hilbert problems numerically

Sheehan Olver
Oxford University Mathematical Institute, St Giles', Oxford, UK

Abstract

A new numerical framework for the approximation of solutions to matrix-valued Riemann–Hilbert problems is developed, based on a recent method for the homogeneous Painlevé II Riemann–Hilbert problem. We demonstrate its effectiveness by computing solutions to other Painlevé transcendents. An implementation in Mathematica is made available online.

Keywords: Riemann–Hilbert problems, singular integral equations, collocation methods, Painlevé transcendents.

1. Introduction

The solutions of integrable, nonlinear differential equations can typically be written as Riemann–Hilbert (RH) problems. Examples include the Painlevé I–V transcendents [4], the Korteweg–de Vries (KdV) equation and the nonlinear Schrödinger (NLS) equation []. The importance of RH problems lies in the fact that they are often solvable asymptotically, which allows one to determine the asymptotics of solutions to the corresponding differential equations. This is accomplished using nonlinear steepest descent [9], a modification of a very familiar tool from asymptotic analysis: the method of steepest descent. In short, RH problems can be viewed, roughly, as generalizations of integral representations.

RH problems also play an increasingly crucial role in the theory of orthogonal polynomials and random matrix theory [8]. Just as in the theory of integrable systems, they are useful in determining the asymptotics of orthogonal polynomials. The behaviour of orthogonal polynomials is in turn directly related to the asymptotics of eigenvalues of large random matrices. This has led to new universality results: random matrices with vastly differing behaviour have eigenvalue distributions which become the same as the dimension of the matrix grows large.
The reduction of a problem (such as a differential equation) to an integral representation is traditionally viewed as solving the problem, precisely because asymptotics are then readily available. But there is another important reason why integral representations are
considered solutions: they can be used for numerics as well, through quadrature. The goal of this paper is to show that RH problems can also be used for numerics. Thus, the reduction of a problem to a RH problem can be viewed, in a concrete sense, as a solution of the problem.

Though much research exists on the numerical solution of the nonlinear RH problems used in conformal mapping [29, 28, 30, 2], the same is not true for the matrix-valued RH problems that are prevalent in modern applied analysis. The first use of matrix-valued RH problems as a numerical tool appears to be the thesis of Dienstfrey [0], which developed an approach for the computation of the RH problem connected with the sine kernel Fredholm determinant, of importance in random matrix theory. This approach was based on the reduction of the RH problem to a singular integral equation on (−1, 1) and the application of existing techniques [, 2, 6]. However, in this approach, an exponential amount of work at junction points of the domain was needed to achieve convergence, due to blow-up of the approximate solution at these points. This phenomenon was avoided in the development of an approach for the RH problem related to the homogeneous Painlevé II equation [24], by taking the behaviour at the junction point into account.

In general, the RH problems arising in applications are posed on complicated contours, including multiple junction points. Unfortunately, the discretization of the Cauchy transform used in [0] (as well as in other methods for singular integral equations, such as [8, 9, 2]) is inaccurate on such curves. Therefore, any method based on such a discretization must be restricted to RH problems whose jump contour is the unit interval. On the other hand, the discretization of the Cauchy transform in [24] is uniformly accurate in the complex plane, allowing it to be used for complicated jump contours (in that case, a contour consisting of six rays).
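To make the contrast with quadrature-based approaches concrete, here is a minimal illustration (our own Python sketch, not the method of [24] nor the paper's Mathematica implementation) of computing a Cauchy transform numerically in the simplest setting: for f ≡ 1 on [−1, 1], C f(z) = (1/(2iπ)) ∫ f(t)/(t − z) dt has the closed form (1/(2iπ)) log((z − 1)/(z + 1)), which quadrature reproduces away from the contour but which blows up logarithmically at the endpoints — the junction-point difficulty described above.

```python
import numpy as np

# Sanity-check sketch (not the paper's algorithm): for f = 1 on [-1, 1],
# C f(z) = (1/(2*pi*i)) * integral of 1/(t - z) dt
#        = (1/(2*pi*i)) * log((z - 1)/(z + 1)),
# and C f(z) -> 0 as z -> infinity.
nodes, weights = np.polynomial.legendre.leggauss(60)

def cauchy_one(z):
    """Gauss-Legendre approximation of (1/(2*pi*i)) * int_{-1}^{1} dt/(t - z)."""
    return np.sum(weights / (nodes - z)) / (2j * np.pi)

z = 2.0 + 1.5j
exact = np.log((z - 1) / (z + 1)) / (2j * np.pi)
assert abs(cauchy_one(z) - exact) < 1e-12        # quadrature matches off the contour
assert abs(cauchy_one(1e8 + 1e8j)) < 1e-7        # vanishes at infinity
# the closed form blows up logarithmically at the endpoint z = 1:
assert abs(np.log((1 + 1e-6 - 1) / (1 + 1e-6 + 1)) / (2j * np.pi)) > 2
```

Near a junction point the quadrature itself degrades, which is precisely why the framework below uses exact formulae plus finite parts there.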
We will generalize the approach of [24] to construct a framework applicable to a broad class of RH problems. We also extend the theory by proving conditions for convergence in Section 7. While convergence cannot be guaranteed a priori, it can, in practice, be verified a posteriori. We apply this framework to computing Painlevé III and Painlevé IV transcendents in Section 8. This approach could potentially provide the foundation for a toolbox for computing Painlevé transcendents reliably. Just as RH problems are the nonlinear analogue of integral representations, Painlevé transcendents can be viewed as nonlinear special functions. The computation of these functions is a major goal of the Painlevé project [4, 5, 6].

2. Overview of the approach

A RH problem is a boundary value problem of finding an analytic function which satisfies a given jump condition along an oriented contour on which analyticity is lost. More precisely:

Problem 2.1 [20] Given an oriented contour Γ ⊂ ℂ and a jump matrix G : Γ → ℂ^{2×2}, find
Figure 1: The jump contours for the first five Painlevé transcendents (panels I–V).

a function Φ : ℂ∖Γ → ℂ^{2×2} which is analytic everywhere except on Γ, such that

Φ⁺(z) = Φ⁻(z)G(z) for z ∈ Γ and Φ(∞) = I,

where Φ⁺ denotes the limit of Φ as z approaches Γ from the left, Φ⁻ denotes the limit of Φ as z approaches Γ from the right, and Φ(∞) = lim_{z→∞} Φ(z).

The jump contours Γ for the first five Painlevé transcendents are depicted in Figure 1, cf. [4]. Note that each jump contour consists of a union of pieces which have Möbius transformations to the unit interval: intervals, rays and arcs.

Consider the Cauchy transform (which is related to the Hilbert transform and the Stieltjes transform)

C_Γ f(z) = (1/(2iπ)) ∫_Γ f(t)/(t − z) dt.

Normally, Γ is implied by f, so we suppress the dependence and write simply C. This operator maps a Hölder continuous function f defined on Γ to a function which is analytic everywhere in the complex plane except on Γ, and which vanishes at ∞. Thus, as in [0, 24], we write Φ = CU + I, and Problem 2.1 becomes

C⁺U − (C⁻U)G = G − I on Γ,   (2.1)

where C± denote the left/right limits of the Cauchy transform. We can use this linear operator in the construction of a collocation method. Suppose we have a sequence of points x = (x₁, …, x_N) on Γ, and we wish to represent the solution by its values at these points, U = (U₁, …, U_N)ᵀ (the U_i are either matrices or row vectors, though what follows could be rewritten in terms of block matrices). Now if we have a scalar basis of functions (represented as a row vector) Ψ(z) = (ψ₁(z), …, ψ_N(z)) and a transform matrix
F from values at x to the coefficients of Ψ, then, at least conceptually, we can write the collocation system as

[C⁺Ψ(x) − rdiag(G) C⁻Ψ(x)] F U = (G₁ − I; …; G_N − I),   (2.2)

where G_i = G(x_i), the right-hand side stacks the blocks G₁ − I, …, G_N − I, and rdiag corresponds to matrix multiplication on the right:

rdiag(G₁, …, G_N) (A₁; …; A_N) = (A₁G₁; …; A_N G_N).

To be precise, C±Ψ(x) denotes the N × N matrix with (i, j) entry C±ψ_j(x_i). Note that the rows of U are typically row vectors or matrices, while C±Ψ(x) is a matrix whose entries are scalars. We treat these multiplications in the natural way: if the a_{ij} are scalars and the b_i are row vectors or matrices, then the ith row of the product is a_{i1}b₁ + ⋯ + a_{in}b_n.

Once we have computed the values U by solving (2.2), we obtain an approximation to Φ:

Φ(z) ≈ Φ̃(z) = I + CΨ(z)FU.

The key piece of our framework is a method to compute the Cauchy transforms C±Ψ(x), where x and Ψ are chosen appropriately. In [25] it was noted that the Cauchy transform can be computed numerically for Γ = I = [−1, 1] uniformly throughout the complex plane, as well as in the limits as z approaches Γ, using the fast discrete cosine transform (DCT) and the Chebyshev basis. This approach can be utilized for any curve with a Möbius transformation to the unit interval. Furthermore,

C_{Γ₁ ∪ Γ₂} = C_{Γ₁} + C_{Γ₂},

so we can also efficiently compute the Cauchy transform over any Γ whose individual pieces have Möbius transformations to the unit interval, such as all the curves in Figure 1. This motivates the choice of x and Ψ as mapped Chebyshev points and polynomials, as discussed in Section 3. In Section 4 we write down a closed-form expression for the Cauchy transform of Chebyshev polynomials, which allows the computation of the matrices C±Ψ(x). There is a critical snag: the points x contain junction points of the domain, where the Cauchy transform
can blow up, except under certain conditions on U. Fortunately, we can resolve this issue in a manner that guarantees boundedness of C±Ψ(x)U. It also means that (2.2) is not the actual system we solve, but rather

[C⁺ − rdiag(G) C⁻] U = (G₁ − I; …; G_N − I)   (2.3)

for (scalar) matrices C± constructed in Section 5. Indeed, the presence of the junction points in x is crucial: it allows us to impose conditions on U so that our approximation does not blow up. This, along with the use of exact formulæ for the Cauchy transform in place of quadrature, is the reason our approach avoids the difficulties seen in [0], where, by trying to avoid the junction point, exponential clustering near junction points was needed to simulate boundedness of the approximation.

In Section 6 we describe properties of the approximate system. The surprising fact is that (2.3) is sufficient to ensure boundedness of the solution, under a weak condition on the behaviour of the jump matrix G at the junction points. In the rare case where this weak condition fails, (2.3) is automatically not of full rank, and we extend the linear system to enforce boundedness of the approximate solution. In Section 7, we prove a bound on the error of the numerical method in terms of the norm of the inverse of the collocation matrix in (2.3). In Section 8 we provide numerical experiments which demonstrate the effectiveness of the approach. We compute solutions to the Painlevé III and Painlevé IV ODEs, as the jump contours of the associated RH problems are sufficiently complicated (see Figure 1) to demonstrate the flexibility of the new framework. A Mathematica implementation of the framework described in this paper is available online [23].

3. Representation of jump matrices

On the unit interval, we represent functions by their values at the Chebyshev points of the second kind,

x^I = x^{I,n} = (x₁^{I,n}, …, x_n^{I,n})ᵀ, where x_j^{I,n} = cos((n − j)π/(n − 1)),

so that x^I runs from −1 to 1.
As touched on before, by choosing points which include the endpoints, we gain the ability to enforce that our approximation to Φ is bounded. The natural basis is now the Chebyshev polynomials of the first kind:

Ψ_I(z) = (T₀(z), …, T_{n−1}(z)).
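As a concrete illustration of this representation (a self-contained Python sketch with our own naming; the paper applies the transform via the DCT in O(n log n), while this direct version is O(n²)): the values at the second-kind Chebyshev points determine the coefficients in the basis T₀, …, T_{n−1}.

```python
import numpy as np

# Sketch of the values-to-coefficients map (our direct O(n^2) version of the
# transform the paper applies via the DCT): values of a smooth function at
# the n second-kind Chebyshev points determine its Chebyshev coefficients.
n = 17                                    # number of points (illustrative)
j = np.arange(n)
x = np.cos(j * np.pi / (n - 1))           # second-kind points, from 1 down to -1

def cheb_coeffs(v):
    """Chebyshev coefficients of the degree n-1 interpolant of values v."""
    k = np.arange(n)
    C = np.cos(np.outer(k, j) * np.pi / (n - 1))
    w = np.ones(n); w[0] = w[-1] = 0.5    # halve the endpoint terms
    c = (2.0 / (n - 1)) * C @ (w * v)
    c[0] *= 0.5; c[-1] *= 0.5             # and the first/last coefficients
    return c

v = np.exp(x)                             # sample a smooth function
c = cheb_coeffs(v)
t = np.linspace(-1, 1, 201)
assert np.max(np.abs(np.polynomial.chebyshev.chebval(t, c) - np.exp(t))) < 1e-12
```

The same coefficients are produced by the DCT with appropriate scalings, which is what makes the transform fast.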
The DCT, under appropriate scalings, is an n × n matrix F_n = F such that

(Ǔ₀; …; Ǔ_{n−1}) = FU

are the (row vector/matrix valued) Chebyshev coefficients of the polynomial which takes the values U at x^I:

Ψ_I(x^I) FU = Σ_{k=0}^{n−1} Ǔ_k T_k(x^I) = U.

In other words, F = Ψ_I(x^I)⁻¹ (though both F and its inverse can be applied in O(n log n) time). The choice of Chebyshev polynomials is motivated by the fast transform and by the explicit formulæ for their Cauchy transform described in the next section.

We will represent functions on the components of Γ by transforming them to functions on the unit interval, via Möbius transformations. Example Möbius transformations are:

Interval [a, b]: M_{[a,b]}(z) = (a + b − 2z)/(a − b);
Stretched ray [a, e^{iθ}∞) with stretch L: M_{[a,e^{iθ}∞),L}(z) = (a + e^{iθ}L − z)/(a − e^{iθ}L − z);
Reverse stretched ray: M_{(e^{iθ}∞,a],L}(z) = −M_{[a,e^{iθ}∞),L}(z);
Arc a + re^{i[θ₁,θ₂]}: M_{a+re^{i[θ₁,θ₂]}}(z) = [(e^{iθ₁/2} + e^{iθ₂/2})(e^{i(θ₁+θ₂)/2} r − z + a)] / [(e^{iθ₁/2} − e^{iθ₂/2})(e^{i(θ₁+θ₂)/2} r + z − a)];
Stretched interval: M_{[a,b],L}(z) = (a(L − 1) − b(L + 1) + 2z)/(a(L − 1) + b(L + 1) − 2Lz).

These maps can be best understood by how they relate to the three points (−1, 0, 1) of the unit interval:

M_{[a,b]}⁻¹(−1, 0, 1) = (a, (b + a)/2, b),
M_{[a,e^{iθ}∞),L}⁻¹(−1, 0, 1) = (a, a + e^{iθ}L, ∞),
M_{(e^{iθ}∞,a],L}⁻¹(−1, 0, 1) = (∞, a + e^{iθ}L, a),
M_{a+re^{i[θ₁,θ₂]}}⁻¹(−1, 0, 1) = (a + re^{iθ₁}, a + re^{i(θ₁+θ₂)/2}, a + re^{iθ₂}),
M_{[a,b],L}⁻¹(−1, 0, 1) = (a, (b + a)/2 + L(b − a)/2, b).
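Two of these maps can be checked directly against their listed point triples; the following sketch (based on our transcription of the formulas above) verifies that the interval and stretched-ray maps send the listed triples to (−1, 0, 1):

```python
import cmath

# Consistency check of two Möbius maps (our transcription of the formulas):
# each should send the listed triple of curve points to (-1, 0, 1).
def M_interval(z, a, b):
    return (a + b - 2 * z) / (a - b)

def M_ray(z, a, theta, L):                # ray [a, e^{i*theta} * inf), stretch L
    e = L * cmath.exp(1j * theta)
    return (a + e - z) / (a - e - z)

a, b = 1.0, 3.0
assert abs(M_interval(a, a, b) + 1) < 1e-14
assert abs(M_interval((a + b) / 2, a, b)) < 1e-14
assert abs(M_interval(b, a, b) - 1) < 1e-14

theta, L = cmath.pi / 4, 1.0
assert abs(M_ray(a, a, theta, L) + 1) < 1e-14
assert abs(M_ray(a + L * cmath.exp(1j * theta), a, theta, L)) < 1e-14
# far along the ray, the map tends to +1 (the image of infinity):
assert abs(M_ray(a + 1e9 * cmath.exp(1j * theta), a, theta, L) - 1) < 1e-8
```

The remaining maps can be checked the same way against their triples.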
Remark: In the examples below, we let the parameter L = 1 in the stretched ray, and we will not use the stretched interval. However, in applying this approach to the RH problems arising from nonlinear steepest descent, this parameter is crucial for representing functions of the form e^{ωg(z)} as ω becomes large.

If Γ is a bounded smooth curve (such as an interval or arc) with a Möbius transformation M : Γ → I, then we can represent functions using the mapped Chebyshev basis

Ψ_Γ(z) = Ψ_I(M(z)) = (T₀^Γ(z), …, T_{n−1}^Γ(z)) = (T₀(M(z)), …, T_{n−1}(M(z)))

and the points x^Γ = M⁻¹(x^I). The transform from samples at x^Γ is simply F_Γ = F again:

Ψ_Γ(x^Γ) F_Γ U = Ψ_I(x^I) F U = U.

However, when Γ is unbounded (hence M(∞) is in I, and for the Möbius transformations we consider it is always either −1 or +1) and f decays at infinity, we will need an expansion in a basis which captures this fact; otherwise, the Cauchy transform of the basis is not well defined. This is straightforward: since f(∞) = 0,

f(z) = Σ_k f̌_k T_k(M(z)) = Σ_k f̌_k T_k(M(z)) − Σ_k f̌_k T_k(M(∞)) = Σ_k f̌_k [T_k(M(z)) − T_k(M(∞))] = Σ_k f̌_k T̄_k^Γ(z).

In other words, this alternative choice of basis does not change the expansion coefficients. Thus we define the (in this case (n + 1)-column) basis as

Ψ_Γ(z) = Ψ_I(M(z)) − Ψ_I(M(∞)) = (T̄₀^Γ(z), …, T̄_n^Γ(z)).

Since we assume the function decays at infinity, we need only the finite points: if M(∞) = −1, then

x^Γ = M⁻¹(x^{I,n+1}_{2:n+1}) = M⁻¹((x₂^{I,n+1}, …, x_{n+1}^{I,n+1})),

where we use the notation i : j to denote the ith through jth rows. The (n + 1) × n transform operator F_Γ is F with its first column removed or, equivalently,

F_Γ U = F^{n+1}_{1:n+1, 2:n+1} U = F (0; U),

where i : j, k : l denotes the ith through jth rows and kth through lth columns. If M(∞) = +1, then

x^Γ = M⁻¹(x^{I,n+1}_{1:n}) = M⁻¹((x₁^{I,n+1}, …, x_n^{I,n+1}))
and

F_Γ U = F^{n+1}_{1:n+1, 1:n} U = F (U; 0).

What about functions which do not decay at infinity? For example, the original jump matrix G must approach the identity. Fortunately, we never need to compute the Cauchy transform of any function which does not decay, nor its coefficients in a mapped Chebyshev basis; its values at the collocation points are sufficient.

The jump contour Γ resulting from a RH problem is often a union of curves which have Möbius transformations to the unit interval: for Möbius transformations M_{Γ₁}, …, M_{Γ_l}, we have

Γ = Γ₁ ∪ ⋯ ∪ Γ_l = M_{Γ₁}⁻¹(I) ∪ ⋯ ∪ M_{Γ_l}⁻¹(I).

We can thus divide our solution vector as U = (U_{Γ₁}; …; U_{Γ_l}) at the points x = (x^{Γ₁}; …; x^{Γ_l}). We take the lengths of these vectors to be n_{Γ₁}, …, n_{Γ_l}, so that n_{Γ₁} + ⋯ + n_{Γ_l} = N, though in what follows the length is usually left implicit. We only permit the constituent curves of Γ to overlap at their vertices, and these vertices are repeated in the vector x. For brevity of notation, we will sometimes use the index as a sub/superscript, so that

M_i = M_{Γ_i}, U_i = U_{Γ_i}, x^i = x^{Γ_i} and n_i = n_{Γ_i}.

4. Cauchy transforms over intervals, rays and arcs

Because we represent each of the constituent domains of Γ by a Möbius transformation to the unit interval, the initial goal is to compute the Cauchy transform of a function defined on the unit interval.

The Cauchy transform and Cauchy matrices over the unit interval

The Joukowski map

T(z) = ½ (z + 1/z)

maps the unit circle to the unit interval, with the interior and the exterior of the circle each conformally mapped to the complex plane off the unit interval. We will need the following four inverses:

Map to the interior: T₊(x) = x − √(x − 1)√(x + 1),
Map to the exterior: T₋(x) = x + √(x − 1)√(x + 1),
Map to the lower half circle: T_↓(x) = x − i√(1 − x)√(1 + x),
Map to the upper half circle: T_↑(x) = x + i√(1 − x)√(1 + x).

We know the exact formula for the Cauchy transform of the Chebyshev basis:

Theorem 4.1 [25, 24] Define

ψ₀(z) = −(2/(iπ)) arctanh z

and, for k > 0,

ψ_k(z) = z^k ψ₀(z) − (2/(iπ)) Σ_{j=1}^{⌊(k+1)/2⌋} z^{k−2j+1}/(2j − 1),

while for k < 0, ψ_k is given via the functions μ_k, which have closed forms in terms of arctanh and the hypergeometric function ₂F₁ [22]; in the constructions below only the values μ_k(±1) are needed, and these are generated by the recurrence of Algorithm 5.4. Then

CT_k(z) = ½ [ψ_k(T₊(z)) + ψ_{−k}(T₊(z))],

with the limiting behaviour

CT_k(x) ∼ −((−1)^k/(2iπ)) [log(x + 1) − log 2] + ((−1)^k/(iπ)) [μ_k(−1) + μ_{−k}(−1)]  as x → −1,
CT_k(x) ∼ (1/(2iπ)) [log(x − 1) − log 2] + (1/(iπ)) [μ_k(1) + μ_{−k}(1)]  as x → 1,

and, for x ∈ I,

C⁺T_k(x) = ½ [ψ_k(T_↓(x)) + ψ_{−k}(T_↓(x))] = −(2/(iπ)) T_k(x) arctanh T_↓(x) + (1/(iπ)) Σ_{j=1}^{⌊(k+1)/2⌋} (T_{k−2j+1}(x)/(2j − 1)) × {1 if k − 2j + 1 = 0; 2 otherwise},

C⁻T_k(x) = ½ [ψ_k(T_↑(x)) + ψ_{−k}(T_↑(x))] = −(2/(iπ)) T_k(x) arctanh T_↑(x) + (1/(iπ)) Σ_{j=1}^{⌊(k+1)/2⌋} (T_{k−2j+1}(x)/(2j − 1)) × {1 if k − 2j + 1 = 0; 2 otherwise}.

The Cauchy transform over curves other than the unit interval

On curves Γ other than the unit interval, we use a Möbius transformation M : Γ → I to compute the Cauchy transform. The following theorem is trivially proved by considering the
RH formulation of the Cauchy transform which arises in Plemelj's lemma (C⁺f − C⁻f = f and Cf(∞) = 0), cf. for example [25]. To simplify the statement of the theorem for unbounded curves, we use a slightly altered definition of Hölder continuity.

Definition 4.2 A function f is α-Hölder continuous on the unit interval if there exists a constant Λ so that, for any x and y in I,

|f(x) − f(y)| ≤ Λ |x − y|^α.

We say that f is α-Hölder continuous on Γ with a Möbius transformation M to the unit interval if f(M⁻¹(·)) is α-Hölder continuous (and f(∞) = 0 if Γ is unbounded). We say that f is Hölder continuous if there exists α > 0 so that f is α-Hölder continuous.

Theorem 4.3 Let M : Γ → I be a Möbius transformation, and let g(x) = f(M⁻¹(x)). If f is Hölder continuous, then

C_Γ f(z) = C_I g(M(z)) − C_I g(M(∞)).

In the case that Γ is bounded, we immediately obtain

CT_k^Γ(z) = CT_k(M(z)) − CT_k(M(∞)).   (4.1)

When Γ is unbounded we obtain (where M(∞) = ±1)

CT̄_k^Γ(z) = C[T_k − T_k(±1)](M(z)) − C[T_k − T_k(±1)](±1) = CT_k(M(z)) − (±1)^k CT₀(M(z)) − ((±1)^k/(iπ)) [μ_k(±1) + μ_{−k}(±1)].   (4.2)

Remark: Theorem 4.3, and the construction of the framework, can be applied to a much broader class of conformal maps than only Möbius transformations. In fact, the map need not even be conformal: suppose p : I → Γ is a degree-d polynomial with inverse branches p₁, …, p_d; then, for g(x) = f(p(x)), we have (provable using Plemelj's lemma)

C_Γ f(z) = Σ_{k=1}^{d} C_I g(p_k(z)).

For simplicity, however, we restrict our attention to Möbius transformations.

Singularity data and the behaviour at endpoints

When we stitch the curve Γ back together, we need to know accurately the behaviour of the Cauchy transform at the endpoints, where it typically blows up. We represent this using left and right singularity data

S_L f, S_R f ∈ ℂ² × {z ∈ ℂ : |z| = 1}.
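Theorem 4.3 is easy to sanity-check numerically. The sketch below uses the stretched-interval map as we have transcribed it, with all concrete choices ours (Γ = [0, 2], L = 1/2, f(t) = e^t, Gauss–Legendre quadrature for both Cauchy transforms), and confirms C_Γ f(z) = C_I g(M(z)) − C_I g(M(∞)) away from the contour:

```python
import numpy as np

# Numerical check of Theorem 4.3 for the stretched-interval map as we have
# transcribed it (illustrative choices: Gamma = [0, 2], L = 1/2, f = exp).
a, b, L = 0.0, 2.0, 0.5
nodes, weights = np.polynomial.legendre.leggauss(120)

def M(z):       # maps Gamma = [a, b] onto [-1, 1]
    return (a*(L - 1) - b*(L + 1) + 2*z) / (a*(L - 1) + b*(L + 1) - 2*L*z)

def M_inv(x):   # inverse map, [-1, 1] back onto [a, b]
    return (x*(a*(L - 1) + b*(L + 1)) - a*(L - 1) + b*(L + 1)) / (2 + 2*L*x)

def cauchy_I(g, w):
    """(1/(2*pi*i)) * int_{-1}^{1} g(x)/(x - w) dx by Gauss-Legendre."""
    return np.sum(weights * g(nodes) / (nodes - w)) / (2j * np.pi)

f = np.exp
g = lambda x: f(M_inv(x))
M_inf = -1.0 / L                           # M(z) -> -1/L as z -> infinity

z = 1.0 + 1.0j
t = a + (b - a) * (nodes + 1) / 2          # quadrature points on Gamma = [0, 2]
lhs = np.sum(weights * (b - a) / 2 * f(t) / (t - z)) / (2j * np.pi)
rhs = cauchy_I(g, M(z)) - cauchy_I(g, M_inf)
assert abs(lhs - rhs) < 1e-8               # the identity of Theorem 4.3
assert abs(M_inv(-1) - a) < 1e-13 and abs(M_inv(1) - b) < 1e-13
```

Note that for this map M(∞) = −1/L is finite, so the correction term C_I g(M(∞)) is genuinely needed; for the plain interval map M(∞) = ∞ and it vanishes.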
We will use L/R in conjunction with ±, in which case L is identified with −1 and R with +1. If z_{L/R} = M⁻¹(±1) is the left/right endpoint of Γ and {a, r, s} = S_{L/R} f, then the Cauchy transform of f behaves like

Cf(z) ∼ a + r log(s(z − z_{L/R}))  as z → z_{L/R}.

From Theorem 4.1 we immediately see that

S_L T_k = {a_k^L, r_k^L, 1} = { (−1)^k log 2/(2iπ) + ((−1)^k/(iπ))[μ_k(−1) + μ_{−k}(−1)], (−1)^{k+1}/(2iπ), 1 },
S_R T_k = {a_k^R, r_k^R, −1} = { −log 2/(2iπ) + (1/(iπ))[μ_k(1) + μ_{−k}(1)], 1/(2iπ), −1 }.

For general Γ, we again use the Möbius transformation. If Γ is bounded then, since M(z) + 1 ∼ M′(z_L)(z − z_L), we have

CT_k^Γ(z) = CT_k(M(z)) − CT_k(M(∞)) ∼ a_k^L − CT_k(M(∞)) + r_k^L log[M′(z_L)(z − z_L)]
 = a_k^L − CT_k(M(∞)) + r_k^L log|M′(z_L)| + r_k^L log[e^{i arg M′(z_L)}(z − z_L)].

With a similar expression for the behaviour at z_R, we find that

S_{L/R} T_k^Γ = { a_k^{L/R} − CT_k(M(∞)) + r_k^{L/R} log|M′(z_{L/R})|, r_k^{L/R}, ±e^{i arg M′(z_{L/R})} }.

If Γ is unbounded with M(∞) = +1, then, as z → z_L,

CT̄_k^Γ(z) = C[T_k − T₀](M(z)) − C[T_k − T₀](+1)
 ∼ a_k^L − a₀^L − (1/(iπ))[μ_k(1) + μ_{−k}(1)] + r_k^L log[M′(z_L)(z − z_L)].

Therefore

S_L T̄_k^Γ = { a_k^L − a₀^L − (1/(iπ))[μ_k(1) + μ_{−k}(1)] + r_k^L log|M′(z_L)|, r_k^L, e^{i arg M′(z_L)} },   (4.3)

and we leave the right singularity data (which corresponds to the behaviour at infinity) undefined. Likewise, if M(∞) = −1, then

S_R T̄_k^Γ = { a_k^R − (−1)^k a₀^R − ((−1)^k/(iπ))[μ_k(−1) + μ_{−k}(−1)] + r_k^R log|M′(z_R)|, r_k^R, −e^{i arg M′(z_R)} }
and the left singularity data is undefined. We will use later that r_k^{L/R} is independent of the curve Γ, and that s_k^{L/R} is always e^{iθ}, where θ is the angle at which Γ leaves z_{L/R}. Moreover, since

r_k^{L/R} = (±1)^{k+1}/(2iπ) = ±T_k(±1)/(2iπ),

we have (here m is the number of rows of F_Γ: m = n_Γ if Γ is bounded, m = n_Γ + 1 otherwise)

(r₀^{L/R}, …, r_{m−1}^{L/R}) F_Γ U_Γ = ±(1/(2iπ)) e_{L/R} U_Γ,   (4.4)

i.e., ±(1/(2iπ)) times the value at the corresponding endpoint. We use the notation e_L = e₁ for the basis (row) vector of length n_Γ corresponding to z_L, and e_R = e_{n_Γ} for the basis vector corresponding to z_R.

For z = z_{L/R} + pe^{iθ}, p > 0, and {a, r, s} = S_{L/R} f, we have, as z → z_{L/R},

Cf(z) ∼ a + r log(spe^{iθ}) = a + ir arg(se^{iθ}) + r log p.

We define the finite part along angle θ as this expression with the logarithmic term thrown out:

FP_θ^{L/R} f = a + ir arg(se^{iθ}).

As θ approaches the cut of the logarithm, we have both a left and a right limit. Thus we further define

FP_±^L f = a ∓ riπ and FP_±^R f = a ± riπ.

5. Constructing the Cauchy matrices

We have established our representation of U as U, a vector of its values at the collocation points x. To construct the collocation system, we have to evaluate the Cauchy transform of U, again at the points x. Thus we need to construct matrices C± which act blockwise:

C± U = (C±[Γ₁, Γ₁] ⋯ C[Γ₁, Γ_l]; ⋮; C[Γ_l, Γ₁] ⋯ C±[Γ_l, Γ_l]) (U₁; …; U_l).

Since C⁺ − C⁻ = I for the operators, we require that C⁺ − C⁻ = I for the matrices; in particular, we define C⁻[Γ_i, Γ_i] as C⁺[Γ_i, Γ_i] − I. In other words, we only need to construct C⁺. We will sometimes simply refer to C±[Γ_i, Γ_j], which is C±[Γ_i, Γ_i] when i = j and C[Γ_i, Γ_j] otherwise. The goal of this section is to compute each C±[Γ_i, Γ_j]: an n_{Γ_j} × n_{Γ_i} matrix which maps the values U_i at the points x^i to the corresponding contribution to the Cauchy transform at the points x^j.
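The blockwise action just described can be illustrated with a small sketch (random data and arbitrary block sizes, purely for orientation): scalar matrices act row-wise on the stack of 2 × 2 unknowns, rdiag(G) right-multiplies each row, and C⁻ differs from C⁺ only by the identity:

```python
import numpy as np

# Illustrative sketch only (random stand-in data, arbitrary block sizes):
# how scalar collocation matrices act on a stacked vector of 2x2 unknowns,
# and how C^- is obtained from C^+ by subtracting the identity.
sizes = [3, 4]                               # n_{Gamma_1}, n_{Gamma_2}
N = sum(sizes)
rng = np.random.default_rng(0)
C_plus = rng.standard_normal((N, N))         # stand-in for the matrix C^+
U = rng.standard_normal((N, 2, 2))           # U_1, ..., U_N as 2x2 blocks
G = rng.standard_normal((N, 2, 2))           # jump matrix at the points x

# scalar matrix acting row-wise on the stack: (C U)_i = sum_j c_ij U_j
CU = np.einsum('ij,jkl->ikl', C_plus, U)
assert np.allclose(CU[0], sum(C_plus[0, j] * U[j] for j in range(N)))

# rdiag(G): right-multiply each row by its jump matrix, (rdiag(G) U)_i = U_i G_i
rdiagG_U = np.einsum('ikl,ilm->ikm', U, G)
assert np.allclose(rdiagG_U[2], U[2] @ G[2])

# C^- = C^+ - I: only the diagonal blocks change, and C^+ - C^- = I
C_minus = C_plus - np.eye(N)
assert np.allclose(C_plus - C_minus, np.eye(N))
assert np.allclose(C_minus[:sizes[0], sizes[0]:], C_plus[:sizes[0], sizes[0]:])
```

This is why only the diagonal blocks C⁺[Γ_i, Γ_i] need special treatment below; the off-diagonal blocks are shared by C⁺ and C⁻.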
Matrix — Definition

For all Γ:
 C⁻[Γ, Γ] — C⁺[Γ, Γ] − I

For Γ = I:
 C⁺[I, I] — Definition 5.1
 C[I, Ω] with Ω disconnected from I — Definition 5.3
 C[I, Ω] with Ω connected to I — Definition 5.5

For Γ bounded with M_Γ(∞) = ∞:
 C⁺[Γ, Γ] — Definition 5.6
 C[Γ, Ω] with Ω disconnected from Γ — Definition 5.7
 C[Γ, Ω] with Ω connected to Γ — Definition 5.8

For Γ bounded with M_Γ(∞) ≠ ∞:
 C⁺[Γ, Γ] — Definition 5.9
 C[Γ, Ω] with Ω disconnected from Γ — Definition 5.10
 C[Γ, Ω] with Ω connected to Γ — Definition 5.11

For Γ unbounded:
 C⁺[Γ, Γ] — Definition 5.12
 C[Γ, Ω] with Ω disconnected from Γ — Definition 5.13
 C[Γ, Ω] with Ω connected to Γ — Definition 5.14

Table 1: Definitions for the Cauchy matrices.

In the remainder of this section, we speak in terms of general domains Γ and Ω. We assume attached to each domain is a vector of points x^Γ and x^Ω, with lengths n_Γ and n_Ω. Because these lengths are not necessarily the same, the matrix C[Γ, Ω] is rectangular, of dimensions n_Ω × n_Γ. Note that we combine these rectangular matrices to construct the square matrix C⁺. We have several different situations to consider; see Table 1 for a guide to which definition is used to construct C⁺[Γ, Ω].

Cauchy matrices for the unit interval

Our first concern is to construct C⁺[I, I]: essentially, computing the limit of the Cauchy transform of a function defined on I back onto I itself. Recall from Theorem 4.1
that

C⁺T_k(x) = −(2/(iπ)) T_k(x) arctanh T_↓(x) + (1/(iπ)) Σ_{j=1}^{⌊(k+1)/2⌋} (T_{k−2j+1}(x)/(2j − 1)) × {1 if k − 2j + 1 = 0; 2 otherwise}.

The bounded term is a polynomial, and the map from the Chebyshev coefficients of T_k to those of this polynomial is realized by an n × n almost-Toeplitz matrix P built from the coefficients 1/(2j − 1) above; the version for the other parity of n is the same Toeplitz structure with its first row and column removed.

Now for the unbounded term, we note that we can simply evaluate the closed-form expression for the Cauchy transform at x^I with its endpoints removed: x̃^I = x^I_{2:n−1}. All that remains is the value at the endpoints. We use the finite part of the singularity data for this. From the definition of singularity data and (4.4), we find that

FP₊^L Ψ_I(z) F = (FP₊^L T₀, …, FP₊^L T_{n−1}) F = (a₀^L − iπ r₀^L, …, a_{n−1}^L − iπ r_{n−1}^L) F
 = −[log 2 + iπ] (r₀^L, …, r_{n−1}^L) F + (1/(iπ)) ((−1)^k [μ_k(−1) + μ_{−k}(−1)])_{k=0}^{n−1} F
 = ((log 2 + iπ)/(2iπ)) e_L + (1/(iπ)) e_L P F.
Likewise,

FP₊^R Ψ_I(z) F = ((iπ − log 2)/(2iπ)) e_R + (1/(iπ)) e_R P F.

We thus obtain

Definition 5.1

C⁺[I, I] = diag( (log 2 + iπ)/(2iπ), −(2/(iπ)) arctanh T_↓(x₂^I), …, −(2/(iπ)) arctanh T_↓(x_{n−1}^I), (iπ − log 2)/(2iπ) ) + (1/(iπ)) F⁻¹ P F.

Remark: If, instead of constructing a matrix, we were concerned with evaluating the limit of the Cauchy transform from the left or right (which together give the finite Hilbert transform), an important observation that was only briefly mentioned in [25] is that P is a Toeplitz matrix plus a correction that depends only on its first row. In other words, we can evaluate the finite Hilbert transform of a function defined at the n Chebyshev points x^I, at those same points, in O(n log n) time.

We now consider the Cauchy matrix C[I, Γ], for Γ not connected to I. If n = n_I is the degree of the Chebyshev interpolation, we require ψ_{−n}(T₊(x^Γ)), …, ψ_n(T₊(x^Γ)). Since computing hypergeometric functions is slow, we want to avoid their computation as much as possible. Fortunately, from the first definition of ψ_k we have a simple recurrence relationship:

ψ_k(z) = zψ_{k−1}(z) − { 2/(iπk) for odd k; 0 otherwise }.

For k > 0, we can first compute ψ₀(z) via the arctanh function, and in turn compute ψ₁(z), ψ₂(z), …. For k < 0, we cannot start with the arctanh function and run the recurrence in reverse: each step subtracts out the Taylor series of the arctanh function, hence round-off error accumulates. Instead, we compute ψ_{−n}(z) using the hypergeometric function representation, followed by the recurrence relationship, again in the forward direction. This leads to the following algorithm:

Algorithm 5.2 (Recurrence relationship)
Given z ∈ ℂ^m and n, compute the m × (n + 1) matrix Ψ_n^z as follows:

 ψ_{−n} = ψ_{−n}(z), computed via the hypergeometric representation;
 ψ_k = zψ_{k−1} − { 2/(iπk) for odd k; 0 otherwise }  for k = 1 − n, …, −1;
 ψ₀ = ψ₀(z);
 ψ_k = zψ_{k−1} − { 2/(iπk) for odd k; 0 otherwise }  for k = 1, …, n;
 Ψ_n^z = ½ (2ψ₀, ψ₁ + ψ_{−1}, …, ψ_n + ψ_{−n}).

We can now construct the necessary matrix:

Definition 5.3 If Γ is not connected to I then, for z = T₊(x^Γ) and n = n_I, define

C[I, Γ] = Ψ_n^z F.

Now suppose the curve Γ is connected to the unit interval at one of its endpoints; for example, the left endpoint of Γ is ±1, i.e. M_Γ⁻¹(−1) = ±1. We can determine Ψ_n^z as before, but now only at the points of x^Γ away from the junction: z = T₊(x^Γ_{2:n_Γ}) = T₊((x₂^Γ, …, x_{n_Γ}^Γ)). We then take the finite part for the remaining point, using the fact that the angle at which Γ leaves ±1 is determined by M_Γ. To compute this finite part, we will need the values

μ₀(±1), ±[μ₀(±1) + μ₁(±1)], …, (±1)^n [μ_{n−1}(±1) + μ_n(±1)].

These are evaluated by simply summing the Taylor series term by term:

Algorithm 5.4 (Taylor series)

Given n, compute μ_n^{L/R} (again relating L with −1 and R with +1) as follows:

 μ₀ = 0;
 μ_k = μ_{k−1} + { (±1)^k/k for odd k; 0 otherwise }  for k = 1, …, n;
 μ_n^{L/R} = (μ₀, ±[μ₀ + μ₁], …, (±1)^n [μ_{n−1} + μ_n]).

Definition 5.5 If M_Γ⁻¹(−1) = ±1 then, for θ the angle at which Γ leaves ±1,

r^{L/R} = (r₀^{L/R}, …, r_n^{L/R}) = ((±1)/(2iπ), …, (±1)^{n+1}/(2iπ)),

z = T₊(x^Γ_{2:n_Γ}) and n = n_I,
define

C[I, Γ] = ( μ_n^{L/R} + [i arg(±e^{iθ}) − log 2] r^{L/R} ; Ψ_n^z ) F.

If M_Γ⁻¹(+1) = ±1 then, for θ the angle at which Γ leaves ±1, z = T₊(x^Γ_{1:n_Γ−1}) and n = n_I, define

C[I, Γ] = ( Ψ_n^z ; μ_n^{L/R} + [i arg(±e^{iθ}) − log 2] r^{L/R} ) F.

The construction in the case where both endpoints are connected to I is also clear, though omitted.

Curves Γ whose Möbius transformation leaves ∞ at ∞

We now consider the matrices C⁺[Γ, Ω] such that M_Γ(∞) = ∞. This naturally precludes the case where Γ is unbounded. From Theorem 4.3, we know that the Cauchy matrix is essentially unchanged, except for alterations of the singularity data. Thus we obtain the following constructions:

Definition 5.6 If M_Γ(∞) = ∞ then, for z_L = M_Γ⁻¹(−1) and z_R = M_Γ⁻¹(+1), define

C⁺[Γ, Γ] = C⁺[I, I] + (1/(2iπ)) diag( −log M_Γ′(z_L), 0, …, 0, log M_Γ′(z_R) ).

Definition 5.7 If M_Γ(∞) = ∞ and Γ and Ω are disconnected, then define

C[Γ, Ω] = C[I, M_Γ(Ω)],

where M_Γ(Ω) is the domain resulting from conformally mapping Ω using M_Γ.

Definition 5.8 Suppose M_Γ(∞) = ∞, and let z_L = M_Γ⁻¹(−1) and z_R = M_Γ⁻¹(+1). If z_L = M_Ω⁻¹(−1), then define

C[Γ, Ω] = C[I, M_Γ(Ω)] − (log M_Γ′(z_L)/(2iπ)) E_{1,1},

where E_{1,1} denotes the n_Ω × n_Γ matrix with a one in the (1, 1) entry and zeros elsewhere (similar notation is used below). If z_L = M_Ω⁻¹(+1), then define

C[Γ, Ω] = C[I, M_Γ(Ω)] − (log M_Γ′(z_L)/(2iπ)) E_{n_Ω,1}.
If z_R = M_Ω⁻¹(−1), then define

C[Γ, Ω] = C[I, M_Γ(Ω)] + (log M_Γ′(z_R)/(2iπ)) E_{1,n_Γ}.

If z_R = M_Ω⁻¹(+1), then define

C[Γ, Ω] = C[I, M_Γ(Ω)] + (log M_Γ′(z_R)/(2iπ)) E_{n_Ω,n_Γ}.

Bounded curves

In this section we assume that M_Γ(∞) ≠ ∞, but Γ is bounded. From (4.1), we know we must subtract out the behaviour of the Cauchy transform at the point to which ∞ is mapped under M_Γ, and alter the finite part at the endpoints.

Definition 5.9 For z = T₊(M_Γ(∞)) and n = n_Γ, define

C⁺[Γ, Γ] = {Definition 5.6} − 1_{n×n} diag(Ψ_n^z) F,

where 1_{k×l} is the k × l matrix of all ones, and in this case Ψ_n^z is simply a row vector.

Definition 5.10 If Γ and Ω are disconnected then, for z = T₊(M_Γ(∞)) and n = n_Γ, define

C[Γ, Ω] = C[I, M_Γ(Ω)] − 1_{n_Ω×n} diag(Ψ_n^z) F.

Definition 5.11 If Γ and Ω are connected then, for z = T₊(M_Γ(∞)) and n = n_Γ, define

C[Γ, Ω] = {Definition 5.8} − 1_{n_Ω×n} diag(Ψ_n^z) F.

Unbounded curves

We now consider the case where Γ is unbounded. In this case M_Γ(∞) is ±1, hence Algorithm 5.2 cannot be used directly, since the arctanh function blows up there. But we know the basis vanishes at ∞, thus we simply take the finite part.

Definition 5.12 If M_Γ(∞) = −1 then, for z_R = M_Γ⁻¹(+1), n = n_Γ and m = n_I = n + 1, define

C⁺[Γ, Γ] = C⁺[I, I]_{2:m,2:m} − 1_{n×m} diag(μ_m^L) F_Γ + (log M_Γ′(z_R)/(2iπ)) E_{n,n}.
If M_Γ(∞) = +1 then, for z_L = M_Γ⁻¹(−1), n = n_Γ and m = n_I = n + 1, define

C⁺[Γ, Γ] = C⁺[I, I]_{1:n,1:n} − 1_{n×m} diag(μ_m^R) F_Γ − (log M_Γ′(z_L)/(2iπ)) E_{1,1}.

Definition 5.13 Suppose Γ and Ω are disconnected, and let n = n_Γ and m = n_I = n + 1. If M_Γ(∞) = −1, then define

C[Γ, Ω] = C[I, M_Γ(Ω)]_{1:n_Ω,2:m} − 1_{n_Ω×m} diag(μ_m^L) F_Γ.

If M_Γ(∞) = +1, then define

C[Γ, Ω] = C[I, M_Γ(Ω)]_{1:n_Ω,1:n} − 1_{n_Ω×m} diag(μ_m^R) F_Γ.

Definition 5.14 Suppose Γ and Ω are connected, and let n = n_Γ and m = n_I = n + 1. If M_Γ(∞) = −1, then define

C[Γ, Ω] = {Definition 5.8}_{1:n_Ω,2:m} − 1_{n_Ω×m} diag(μ_m^L) F_Γ.

If M_Γ(∞) = +1, then define

C[Γ, Ω] = {Definition 5.8}_{1:n_Ω,1:n} − 1_{n_Ω×m} diag(μ_m^R) F_Γ.

6. Properties of the approximate solution

We have described how we construct the matrices C±; thus we have everything necessary to construct the collocation system (2.3). Assuming it is nonsingular (which is satisfied, at least, for G sufficiently close to the identity matrix), we can solve this system to obtain U using a dense linear algebra package (in our case, Mathematica's built-in LinearSolve, which is based on LAPACK). We thus obtain

Φ̃(z) = I + CΨ(z)FU = I + Σ_{k=1}^{l} C Ψ_{Γ_k}(z) F_{Γ_k} U_k,

and each of these Cauchy transforms is computable, as described in Section 4.

We must justify why Φ̃ can be considered an approximation to Φ. One possible issue is that Φ̃ is, as far as we know, unbounded. Another issue is that the values we assigned to C± at the junction points of Γ are, a priori, unrelated to the value of the Cauchy transform of our basis, which blows up. The key to justifying the approximation is the following condition which, when satisfied, resolves both issues:
Definition 6.1 Given ξ ∈ ℂ, let {Ω₁, …, Ω_L} be the subset of {Γ₁, …, Γ_l} whose elements contain ξ as an endpoint. Define p_i = M_{Ω_i}(ξ) = ±1, which determines whether ξ is the left or right endpoint, and U_i = e_{p_i} U_{Ω_i}, which is the value of the computed jump function at ξ (using the convention that e_{−1} = e₁ and e_{+1} = e_{n_i}). The zero sum condition at ξ is satisfied if

Σ_{i=1}^{L} p_i U_i = 0.

The zero sum condition is satisfied if the zero sum condition at every endpoint of Γ₁, …, Γ_l is satisfied.

Lemma 6.2 Suppose that the zero sum condition is satisfied. Then Φ̃(z) = I + CΨ(z)FU is bounded for all z, and the ith block of I + C±U equals the limit of I + C±Ψ(z)FU as z approaches x_i from the appropriate side of the corresponding curve; i.e., applying C± is equivalent to computing the limits of the Cauchy transform of our approximation at the points x.

Proof: The only possible blow-up of the Cauchy transform is at an endpoint ξ of the curves which make up Γ. We use the notation of Definition 6.1, and let θ_i be the angle at which Ω_i leaves ξ. We have

Φ̃(z) = I + CΨ(z)FU = I + Σ_{k=1}^{l} CΨ_k(z) F_k U_k.

As z approaches ξ, we note that the Cauchy transform over any curve not connected to ξ must be bounded at ξ, hence

lim_{ε→0} Φ̃(ξ + ε) = D + lim_{ε→0} Σ_{i=1}^{L} CΨ_{Ω_i}(ξ + ε) F_{Ω_i} U_{Ω_i}.

Now each of these terms blows up at ξ; however, we know the singularity data precisely. Thus, for constants D^{(1)}, D^{(2)}, D^{(3)} depending only, possibly, on arg ε, we have

CΨ_{Ω_i}(ξ + ε) F_{Ω_i} U_{Ω_i} ∼ D^{(1)} + log(p_i e^{iθ_i} ε) (r₀^{p_i}, …, r_{m_i−1}^{p_i}) F_{Ω_i} U_{Ω_i}
 = D^{(2)} + [log|ε| + i arg(p_i e^{iθ_i} ε)] p_i U_i/(2iπ)  (due to (4.4))
 = D^{(3)} + log|ε| · p_i U_i/(2iπ).
Thus we have

    Φ(ξ + ε) ∼ D^{(4)}(arg ε) + (log |ε| / (2πi)) Σ_{i=1}^{L} p_i U_i = D^{(4)}(arg ε),

which proves that it is bounded at ξ.

The value we assigned to the Cauchy matrices C_±[Γ_i, Γ_j] at points which are not junction points was, by definition, the Cauchy transform itself. The value we chose at each junction point was the limit of the Cauchy transform, ignoring the unbounded term which grows like log z. We have shown that each of these unbounded terms is cancelled; therefore, the second part of the lemma follows. Q.E.D.

This lemma makes clear that the zero sum condition must be satisfied in order for our approximation to Φ to be bounded, and hence a reasonable approximation to the actual Φ. We have not (as of yet) imposed the zero sum condition; however, the following condition on the jump matrix G (hence completely independent of the discretization), which should almost always be satisfied, automatically implies that the zero sum condition is satisfied:

Definition 6.3 Let {Ω_1, …, Ω_L} denote the subset of {Γ_1, …, Γ_l} whose elements contain ξ as an endpoint, θ_i the angle at which Ω_i leaves ξ, p_i = M_{Ω_i}(ξ) = ±1, and let G_i be the limit of G(z) as z approaches ξ along Ω_i. We assume that {Ω_1, …, Ω_L} is sorted so that θ_i is strictly increasing. The nonsingular junction condition at ξ is satisfied if the RH problem is well-posed at that point, i.e.,

    G_1 ⋯ G_L = I,

and

    (θ_1 + 2π − θ_L) I + Σ_{i=2}^{L} (θ_i − θ_{i−1}) G_i ⋯ G_L

is nonsingular. The nonsingular junction condition is satisfied if the nonsingular junction condition at every endpoint of Γ_1, …, Γ_l is satisfied.

Theorem 6.4 Suppose that the collocation system has a solution and the nonsingular junction condition is satisfied. Then the computed vector U satisfies the zero sum condition and Lemma 6.2 applies.

Proof: This theorem is a generalization of Theorem 4.2 from [24]. We reuse the notation of Lemma 6.2.
For simplicity of notation, we assume every curve Ω_1, …, Ω_L has ξ as a left endpoint; the generalization to the other cases is straightforward.
22 Note that since θ i < θ j for i < j) the finite part of each U Ωi along Ω j is FP L/R θ j U Ωi = a i + ir i arg e iθ j θ i ) ) = a i + ir i { θj θ i π if i < j θ j θ i + π if i > j, where a i and r i are some constants. We thus define π θ θ 2 + π θ θ L + π θ θ L + π Θ ± θ 2 θ π π θ 2 θ 3 + π θ 2 θ L + π = θ L θ π θ L θ L 2 π π θ L θ L + π θ L θ π θ L θ 2 π θ L θ L π π. Let U i = e L U Ω i denote the value of U Ωi at ξ, and we note that r i = U i 2πi. Therefore, if we can show that L S = is zero, the zero sum condition at ξ follows. r i i= Let o i denote the index of the collocation point in x corresponding to ξ along Ω i ; in other words, o i is chosen so that e o i U = U i. For the vector r = r,..., r L ), We can write the L computed values corresponding to the limit of the Cauchy transform along each Ω i as Φ ± Φ ± =. Φ ± L = Ị. I + e o. e o L C± U = Ḍ. D + Θ ± r. The constant D consists of contributions which are independent of the angle at which ξ is approached: the contributions of U from curves not in {Ω,..., Ω L }, as well as the constants a i. Since e i+θ = e i Θ + + θ i+ θ i, for i =,..., L e Θ = e LΘ + + θ + 2π θ L we have Φ i+ = Φ+ i + θ i+ θ i )S, for i =,..., L Φ = Φ+ L + θ + 2π θ L )S. Furthermore, the collocation system implies that Φ + i = Φ i G i. Thus we get: Φ + L = Φ L G L = Φ + L G L + S [θ L θ L )G L ] 6.) 22
    = Φ⁻_{L−1} G_{L−1} G_L + S (θ_L − θ_{L−1}) G_L
    = Φ⁺_{L−2} G_{L−1} G_L + S [(θ_{L−1} − θ_{L−2}) G_{L−1} G_L + (θ_L − θ_{L−1}) G_L]
    ⋮
    = Φ⁻_1 G_1 ⋯ G_L + S Σ_{i=2}^{L} (θ_i − θ_{i−1}) G_i ⋯ G_L
    = Φ⁺_L G_1 ⋯ G_L + S [ (θ_1 + 2π − θ_L) G_1 ⋯ G_L + Σ_{i=2}^{L} (θ_i − θ_{i−1}) G_i ⋯ G_L ].

The well-posedness of the RH problem at ξ implies that

    G_1 ⋯ G_L = I.

Hence we obtain

    0 = S [ (θ_1 + 2π − θ_L) I + Σ_{i=2}^{L} (θ_i − θ_{i−1}) G_i ⋯ G_L ].

By assumption the term in brackets is nonsingular, which implies that S = 0, and the zero sum condition at ξ follows. Applying this argument at each junction point ξ proves the theorem. Q.E.D.

This theorem could have consequences for the analytical theory: it states that, generically, we need not impose that the solution to the RH problem is bounded at the junction points, since boundedness follows from the nonsingular junction condition.

The nonsingular junction condition is independent of the well-posedness of the RH problem, so if we are unfortunate we can find ourselves in a situation where it is not satisfied. Indeed, in [24], Stokes multipliers for the Painlevé II transcendent were chosen precisely so that this condition fails. What was observed in that case was that the collocation system itself became singular: it had multiple solutions. The zero sum condition could therefore be appended to the linear system, to enforce that the solution chosen satisfies it. The following corollary states that this phenomenon holds in general:

Corollary 6.5 Suppose the nonsingular junction condition is not satisfied at a junction ξ. Using the notation of the previous theorem, consider the collocation system with the condition Φ⁺_L = Φ⁻_L G_L replaced by

    Σ_{i=1}^{L} p_i U_i = 0.

If the resulting system is nonsingular, then Φ⁺_L = Φ⁻_L G_L is still satisfied. Since the computed solution U satisfies the zero sum condition, Lemma 6.2 applies.
24 Proof : Consider 6.). We cannot begin with Φ + L ; we do not know a priori that Φ+ L = Φ L G L. However, the remaining equalities still hold, and we have Φ L G L = Φ + L + S θ + 2π θ L )I + L θ i θ i )G i G L = Φ + L, i=2 since S = 2πi Ui = 0. Q.E.D. 7. Convergence In this section, we present a bound on the error of the numerical method in terms of the norm of the inverse of the collocation method. Recall our non-standard definition for Hölder continuous, Definition 4.2. We take a similar approach for defining the necessary norms and spaces. Definition 7. We denote the supremum norm of a function f as f. For f defined on I, we denote the first Sobolev norm of f as f = f + f. For an operator L between two normed spaces, L denotes the standard operator norm. For a matrix L, L denotes the supremum matrix norm. Definition 7.2 Define H as the space of function which are Hölder continuous on each component of Γ. The norm attached to H is. Definition 7.3 Assume Γ = Γ Γ l where M i : Γ i I is a Möbius transformation. Define the space Z as the space of functions f on Γ which satisfy the zero sum condition, vanish at and where f i Mi is differentiable for i =,..., l, denoting the restriction of f to Γ i by f i. The norm attached to Z is sup i f i Mi. Lemma 7.4 The operators C ± map Z into H, and for these spaces, C ± is bounded. Proof : Assume that z M [, 0]). We define g i = f i Mi. Then for every Γ i not containing z L = M ), Hölder continuity of C Γ i fz) follows from its analyticity. Theorem 4.3 states that C Γi f i z) = C I g i M i z)) C I g i M i )) Note that for ζ away from I we have C I g i ζ) g i x) = 2π I x ζ dx π g i 24 max ζ M i M [,0])) ζ. 7.)
25 If Γ i is bounded, boundedness of C Γi fz) follows, since M i z) and M i ) are bounded away from I. If Γ i is unbounded, assume that M i ) =. For some ξ x,z x, z), we have C I g i M i )) g i x) = 2π I x + dx = 2π I g iξ x, ) dx g i. π Now let Γ,..., Γ L denote the pieces of Γ which contain z L, and for simplicity, assume z L is the left endpoint of each Γ i, i.e., M i z L ) =. We have C Γ fz) = C I [g ) g M z))]m z)) + g M z))c I M z)) C I g M )), C Γi fz) = C I [g i ) g i )]M i z)) + g i )C I M i z)) C I g i M i )) for i = 2,..., L. The boundedness of C I g i M i )) follows from 7.). Hölder continuity of the first terms follows from [20; p. 40 4]. Furthermore, we have C I [g ) g M z))]m z)) = g x) g M z)) dx 2π I x M z) π C I [g i ) g i )]M i z)) g i x) g i ) x + = 2π I x + x M i z) dx g i + max π M i z). The only remaining task is to show that z M [,0]) g, g M z))c I M z)) + g 2 )C I M 2 z)) + + g L )C I M L z)) is bounded and Hölder continuous for z M [, 0])). We write g M z)) = g ) + g M z)) g )). Since ζ + )C I ζ) is bounded for ζ = M z)), we have g ζ) g ))C I ζ) = g ξ 0,ζ )ζ + )C Γ ζ) g max ζ + )C Γ ζ). ζ [,0] This leaves us g )C I M z)) + g 2 )C I M 2 z)) + + g L )C I M L z)) 7.2) By the same logic as Lemma 6.2, the logarithmic terms in each C Γi M i z)) cancel, and hence this too is boundable. Hölder continuity of 7.2) follows from its smoothness. On the other hand, Hölder continuity of g ζ) g ))C Γ ζ) away from is obvious. At, we have g ζ) g ))C Γ z) g 2π )ζ + ) logζ + ) g ) 2π for any α > 0, showing Hölder continuity there. The same arguments hold for z M 0, ]) and for z on other Γ i. 25 ζ + α Q.E.D.
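In an implementation, both conditions of Section 6 — the zero sum condition (Definition 6.1) and the nonsingular junction condition (Definition 6.3) — are cheap to verify per junction. A sketch in Python with hypothetical data (the paper's own implementation is in Mathematica):

```python
import numpy as np

def zero_sum_satisfied(p, U_end, tol=1e-12):
    """Zero sum condition of Definition 6.1 at one junction:
    sum_i p_i U_i = 0, with p_i = +/-1 flagging right/left endpoints."""
    S = sum(pi * Ui for pi, Ui in zip(p, U_end))
    return float(np.max(np.abs(S))) < tol

def junction_ok(thetas, Gs, tol=1e-10):
    """Nonsingular junction condition of Definition 6.3: well-posedness
    G_1 ... G_L = I, plus nonsingularity of
    (theta_1 + 2 pi - theta_L) I + sum_{i=2}^L (theta_i - theta_{i-1}) G_i ... G_L.
    thetas must be sorted strictly increasing."""
    dim = Gs[0].shape[0]
    prod = np.eye(dim)
    for G in Gs:
        prod = prod @ G
    if not np.allclose(prod, np.eye(dim)):
        return False
    A = (thetas[0] + 2 * np.pi - thetas[-1]) * np.eye(dim)
    for i in range(1, len(Gs)):
        tail = np.eye(dim)
        for G in Gs[i:]:            # the product G_i ... G_L
            tail = tail @ G
        A += (thetas[i] - thetas[i - 1]) * tail
    return abs(np.linalg.det(A)) > tol

# Hypothetical junction: two curves leave xi at angles 0 and pi,
# with jump matrices chosen so that G_1 G_2 = I
G1 = np.array([[1.0, 2.0], [0.0, 1.0]])
print(junction_ok([0.0, np.pi], [G1, np.linalg.inv(G1)]))   # True

# Endpoint values chosen by hand so that the signed sum cancels
p = [+1, -1, +1]
U_end = [np.array([0.3, -0.1]), np.array([0.5, 0.2]), np.array([0.2, 0.3])]
print(zero_sum_satisfied(p, U_end))                          # True
```

Neither check requires the discretization itself, mirroring the fact that Definition 6.3 depends only on the jump matrix G and the junction geometry.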
Theorem 7.5 Define L : Z → H by

    LU = C⁺U − (C⁻U)G,

which we have discretized as

    L̄ = C_+ − rdiag(G) C_−.

Let Ū = U(x), where U is the exact solution to LU = G − I, and let 𝐔 denote the computed solution of the collocation system. Then

    ‖U − Ψ𝐔‖ ≤ [1 + (1 + (4/π²) log(2M + 1)) ‖L̄⁻¹‖ ‖L‖] ‖U − ΨŪ‖,

where M = max{n_{Γ_1}, …, n_{Γ_l}}.

Proof: Define the operator corresponding to evaluation at the collocation points by PU = U(x), and recall that Ψ(x)𝐔 denotes interpolation at the collocation points by piecewise mapped Chebyshev polynomials. We also write the interpolation operator as Ψ, so that Ψ𝐔 is the function Ψ(·)𝐔.

Because U satisfies the zero sum condition and ΨŪ interpolates U at all junction points, ΨŪ also satisfies the zero sum condition. By construction of L̄, we therefore have

    PLΨŪ = L̄Ū,  and hence  Ū = L̄⁻¹PLΨŪ.

On the other hand, LU = G − I implies that

    𝐔 = L̄⁻¹PLU.

We thus obtain

    𝐔 − Ū = L̄⁻¹PL(U − ΨŪ).

Therefore,

    ‖U − Ψ𝐔‖ = ‖U − ΨŪ + Ψ(Ū − 𝐔)‖ ≤ (1 + ‖Ψ‖ ‖L̄⁻¹‖ ‖L‖) ‖U − ΨŪ‖,

since ‖P‖ = 1. On the other hand, Ψ is simply interpolation by piecewise Chebyshev polynomials, and so its norm is precisely the maximum Lebesgue constant over the pieces [5]:

    ‖Ψ‖ ≤ 1 + (4/π²) log(2M + 1).    Q.E.D.
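The interpolation-operator bound at the end of the proof can be observed directly: the Lebesgue constant of Chebyshev interpolation grows only logarithmically in the degree. A brute-force sketch (Python, not part of the paper's Mathematica implementation):

```python
import numpy as np

def lebesgue_constant(n, n_eval=4000):
    """Estimate the Lebesgue constant of polynomial interpolation at the
    n+1 Chebyshev points cos(j*pi/n) on [-1, 1], by sampling the Lebesgue
    function via the barycentric formula."""
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)
    w = (-1.0) ** j                    # barycentric weights for these points
    w[0] /= 2
    w[-1] /= 2
    t = np.linspace(-1 + 1e-7, 1 - 1e-7, n_eval)   # sample points, off the nodes
    d = t[:, None] - x[None, :]
    card = (w / d) / np.sum(w / d, axis=1, keepdims=True)  # cardinal functions
    return float(np.abs(card).sum(axis=1).max())

# Doubling the degree adds a roughly constant increment: logarithmic growth
for M in (10, 20, 40, 80):
    print(M, round(lebesgue_constant(M), 3))
```

The near-constant increments under doubling are consistent with the O(log M) behaviour used in the bound above.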
The solutions U we are interested in are smooth. In other words, ‖U − ΨŪ‖ tends to zero spectrally (faster than any inverse polynomial) as m = min{n_{Γ_1}, …, n_{Γ_l}} tends to infinity. This motivates the following corollary:

Corollary 7.6 Suppose max{n_{Γ_1}, …, n_{Γ_l}} is proportional to min{n_{Γ_1}, …, n_{Γ_l}} as N = Σ_i n_{Γ_i} tends to infinity. Then ‖U − Ψ𝐔‖ (with 𝐔 the computed solution of the collocation system) tends to zero spectrally if ‖L̄⁻¹‖ grows at most algebraically.

In other words, while convergence cannot be guaranteed a priori, it can be verified by examining the growth of ‖L̄⁻¹‖. In the examples below (and elsewhere), we remark that ‖L̄⁻¹‖ appears to grow like O(log N) (cf. Figure 4), in which case convergence is guaranteed.

Remark: In this section, the differentiability of functions in Z could be relaxed to α-Hölder continuity, with the norm depending on the Hölder constant rather than the first derivative. We assumed differentiability, however, as the solutions U we are interested in are, in fact, smooth.

8. Examples

We apply this numerical approach to two examples, taken directly from [4], though we make the construction of the RH problems explicit. We have also fixed several typos from [4]. In a way this demonstrates another important benefit of a numerical approach: it helps confirm (and, in this case, correct) analytic formulæ.

Painlevé III

Our first example is the Painlevé III RH problem, for solving the Painlevé III differential equation

    d²u/dx² = (1/u)(du/dx)² − (1/x)(du/dx) + (4Θ_0 u² + 4(1 − Θ_1))/x + 4u³ − 4/u.

Related to u is the variable y, which satisfies

    dy/dx = (Θ_1/x) y + 2yu.

What follows is the construction of a RH problem Φ⁺(x; z) = Φ⁻(x; z)G(x; z) such that y (and thence u) is determined from Φ. We restrict our attention to the case where x is positive and real, for simplicity: the contour Γ of the RH problem depends on the phase of x.
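The first-order relation between y and u, and the inversion formula u = (x y_x − Θ_1 y)/(2xy) used later in this example, can be checked against each other symbolically (both formulas as reconstructed here; a SymPy sketch, not part of the paper):

```python
import sympy as sp

x, y, yx, Theta1 = sp.symbols('x y y_x Theta_1')

# Inversion formula for u in terms of y and y_x (as reconstructed):
u = (x * yx - Theta1 * y) / (2 * x * y)

# Substituting it back into dy/dx = (Theta_1/x) y + 2 y u must return y_x:
residual = sp.simplify((Theta1 / x) * y + 2 * y * u - yx)
print(residual)   # 0
```

The vanishing residual confirms that the two formulas are algebraically consistent with one another.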
28 s 2 Rather than specifying initial conditions, we specify Stokes multipliers s 0, s0 2, s, which satisfy the equation 2 cos πθ 0 + s 0 s 0 2e iπθ 0 = 2 cos πθ + s s 2 e iπθ. and In [4; p ], two conflicting versions of this equation are stated. The equation presented here is indeed the correct one.) This equation ensures that the RH problem that is constructed is well-posed. Often in physical applications it is the Stokes multipliers which are known. What follows can be considered as a numerical map from Stokes multipliers to initial conditions. Moreover, this map could potentially be inverted to map initial conditions to Stokes multipliers. Since the Stokes multipliers determine the asymptotic behaviour of the solution u, this could be used to connect asymptotic behaviour to initial conditions. In this example, we choose arbitrarily) Θ 0 = 3.43, Θ =.23 s 0 =, s 0 2 = 2, s = 3, s 2 = ) ).57 + cos.23π sin.07π ) 3 = i. To construct the jump matrices, we need a matrix E that satisfies [4; p. 98 Eq )] + s s 2 s s 2 ) e iπθ e iπθ ) = E + s 0 s 0 2 s 0 s 0 2 ) e iπθ 0 e iπθ 0 ) E. Thus E can be viewed as a generalized eigenvector matrix. One choice found symbolically) is E = ) i i. We will also need to use a power function with a nonstandard branch cut. We define the power function with branch cut along angle t as z y,t = e i π t) z ) y e i π t)y. We can now construct the RH problem [4; p ]. The left graph of Figure 2 depicts the jump contour Γ, along which we define the jump matrices G i,j : for the function 28
29 Painlevé III G,3 G i, Painlevé IV G 3, G,0 G i, G i, G, G 4, G 2,0 G 3,2 G i, G, G i, G,4 G 4,2 G i, Figure 2: The jump contours for the Painlevé III with junction points at 0, e 5iπ/4, e iπ/4, e iπ/4, e 5iπ/4 and ) and Painlevé IV with junction points at, i,, i and ) RH problems. θx; z) = ix 2 z z ) we have ) ) G 3,2 = e θ z Θ 0/2 e θ z Θ E z Θ /2 e θ 0/2 z Θ /2eθ, G,3 = s e 2θ ) z Θ, G 0 4,0 = s 0 e 2θ z Θ ) 0, 0 e G,4 = θ z Θ 0/2, π/2 s 0 eθ z Θ ) 0/2, π/2 z Θ /2, π/2 e e θ z Θ E θ s z Θ /2, π/2 e θ ) 0/2, π/2 z Θ /2, π/2 e θ, ) ) G,2 = e θ z Θ /2 e θzθ /2 E z Θ0/2,0 e iπθ 0 θ s 0 2 z Θ0/2,0 e iπθ 0 θ z Θ0/2,0 e θ iπθ, 0 e G 3,4 = θ z Θ /2 s eθ z Θ /2 ) ) e θ z Θ /2 E z Θ0/2 e θ z Θ0/2eθ, ) ) G 2,0 = s 0 2 e 2θ z Θ 0,0 and G, = s 2 e 2θ z Θ, π/2. Along each curve, we need to decide at how many points to sample. We choose this adaptively: we take sufficiently many points so that the omitted terms of the Chebyshev expansion are each less than a given tolerance. We can now apply our numerical approach to determine an approximation Φ. However, we are really interested in the solution u to the Painlevé III ODE. We have the following 29
definition:

    y(x) = −ix lim_{z→∞} z Φ_{12}(x; z) = −ix lim_{z→∞} z [CU(x; z)]_{12},

where [·]_{12} denotes the (1, 2) entry of the matrix. (In [4; p. 200], this was mistakenly given as a definition for u(x).) But the limit of the Cauchy transform is simply an integral of the function:

    lim_{z→∞} z CU(z) = lim_{z→∞} (1/(2πi)) ∫_Γ (z/(t − z)) U(t) dt = −(1/(2πi)) ∫_Γ U(t) dt.

Moreover, we can evaluate this integral by transforming each piece to the unit interval:

    ∫_Γ U(t) dt = Σ_i ∫_{Γ_i} U_i(t) dt = Σ_i ∫_I U_i(M_i⁻¹(p)) / M_i′(M_i⁻¹(p)) dp.

Each of these integrals can be evaluated with Clenshaw–Curtis quadrature [7]. This quadrature routine computes an integral over I in O(n log n) time, using the values of the integrand at the nodes x^I. In our case, these values are simply U_i — with a zero appended in the case that Γ_i is unbounded — multiplied entrywise by 1/M_i′(M_i⁻¹(x^I)). We define this approximation as Q_{Γ_i} U_i. We thus obtain the approximation

    ∫_Γ U(t) dt ≈ QU = Σ_i Q_{Γ_i} U_i,  and hence  y(x) ≈ (x/(2π)) [QU]_{12}.

Now to find u, we use the definition [4; p. 195]

    u = (x y_x − Θ_1 y)/(2xy).

However, to obtain u we need to compute y_x at x. Fortunately, derivatives commute with the limit, hence

    y_x(x) = −i lim_{z→∞} z Φ_{12}(x; z) − ix lim_{z→∞} z Φ_{x,12}(x; z).

We have already computed Φ_{12}, but we now also need the derivative of Φ with respect to x. This is straightforward, by simply differentiating the RH problem:

    Φ_x⁺ − Φ_x⁻ G = Φ⁻ G_x  and  Φ_x(∞) = 0.

If we write Φ_x = ΨΦ and plug this into the formula above, Plemelj's lemma [20] implies that

    Ψ⁺ − Ψ⁻ = Φ⁻ G_x (Φ⁺)⁻¹  and  Ψ(∞) = 0,

so that

    Ψ = C[Φ⁻ G_x (Φ⁺)⁻¹].
Figure 3: The real (solid line) and imaginary (dotted line) parts of a solution to Painlevé III on the left, with its relative error in residual on the right.

Thus we obtain

    u(x) ≈ ( x [Q Φ⁻ G_x (Φ⁺)⁻¹]_{12} + (1 − Θ_1) [QU]_{12} ) / ( 2x [QU]_{12} ),

where the inverse operation in (Φ⁺)⁻¹ is taken to be entry-wise, Φ_± = I + C_± U and G_x = G_x(x).

In Figure 3, we plot the computed solution, and demonstrate that it does satisfy the Painlevé III ODE by finding the error in residual, computed by evaluating the approximation at 85 mapped Chebyshev points between 3 and 5. In this case, seven digits of accuracy are about as many as can be expected: the last Chebyshev coefficient is on the order of 10⁻¹⁰, and taking two derivatives should multiply the error by a few orders of magnitude.

Painlevé IV

We now consider the Painlevé IV RH problem, for computing solutions to the Painlevé IV ODE

    d²u/dx² = (1/(2u))(du/dx)² + (3/2)u³ + 4xu + 2(x² + 1)u − 4Θ_∞ u − 8Θ_0²/u,

where Θ_0 and Θ_∞ are constants. Solutions are specified by the four Stokes multipliers s_1, s_2, s_3 and s_4, satisfying the relationship [4; p. 182, Eq. (5.1.9)]

    (1 + s_2 s_3) e^{2iπΘ_∞} + [s_1(1 + s_3 s_4) + (1 + s_1 s_2)] e^{−2iπΘ_∞} = 2 cos 2πΘ_0.

Again, we choose these constants arbitrarily:

    Θ_0 = 3.43, Θ_∞ = 1.23, s_1 = 1, s_2 = 2, s_3 = 3, s_4 = i.
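The Painlevé IV jump matrices below again use powers z^{Θ,t} with the branch cut rotated to the ray arg z = t. A direct implementation is possible, assuming the convention z^{y,t} = (e^{i(π−t)} z)^y e^{−i(π−t)y} as reconstructed from the definition in the Painlevé III example (a Python sketch; the rotation convention is an assumption):

```python
import numpy as np

def power_cut(z, y, t):
    """z**y with the branch cut along the ray arg(z) = t, normalized so
    that power_cut(1, y, t) == 1.  Implements the (reconstructed)
    definition z^{y,t} = (e^{i(pi-t)} z)^y * e^{-i(pi-t) y}."""
    rot = np.exp(1j * (np.pi - t))
    return (rot * z) ** y * np.exp(-1j * (np.pi - t) * y)

# Agrees exactly with the principal power when the cut is the standard one:
z = 2.0 + 1.5j
print(np.isclose(power_cut(z, 0.3, np.pi), z ** 0.3))       # True

# With the cut rotated to the positive imaginary axis (t = pi/2),
# the function is continuous across the negative real axis:
a = power_cut(-1 + 1e-8j, 0.3, np.pi / 2)
b = power_cut(-1 - 1e-8j, 0.3, np.pi / 2)
print(np.isclose(a, b, atol=1e-6))                           # True
```

Rotating z so its cut ray lands on the standard cut, applying the principal power, and undoing the rotation's phase keeps the function single-valued off the chosen ray.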
32 ÈÈL- ÈÈ Relative Error N N Figure 4: On the left, a plot of the absolute value of a solution to Painleve IV in the complex plane. In the middle, the relative error of the approximation for x = 0 solid), dashed), 2 dotted) and 3 dash and dotted). On the right, the growth of L for the same choices of x. We also need a matrix E, but in this case it is simply an eigenvector matrix where we happen to know the eigenvalues) [ 4; p. 82 Eq. 5..)] s s2 s3 s4 e2iπθ e2iπθ =E e 2iπΘ e2iπθ E. The right graph of Figure 2 depicts the jump contour Γ, along which we define the jump matrices Gi,j for θx; z) = G i, = G i, = Gi, = E e θ z Θ,0 eθ z Θ eθ z Θ + xz [ 4; p ]: e θ z Θ E e θ z Θ G, = z Θ e θ s z Θ e θ,!,, z Θ eθ z Θ eθ s2 eθ z Θ,0 e θ z Θ,0 and G,i =! E, z Θ,0 e θ z Θ,0 eθ s4 e2θ z 2Θ,0, s2 e2θ z 2Θ G, i = s2 z Θ,0 eθ + s s2 )z Θ,0 eθ s3 e 2θ z 2Θ,0 s e 2θ z 2Θ + s2 s3 )z Θ,0 e θ s + s3 + s s2 s3 )z Θ,0 e θ Θ θ E z e + s s2 )eθ z Θ,0 s e θ z0θ Gi, = G, = eθ z Θ,0 z2 2,. Again, we have everything in place for our numerical approach, choosing the number of collocation points on each curve adaptively. The relation between Φ and u is ux) = 2x z lim x log Φ2 x; z) = 2x z lim 32 Φx,2 x; z). Φ2 x; z)
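Both examples reduce the final extraction of the solution to integrating U over each mapped piece of Γ with Clenshaw–Curtis quadrature. A minimal sketch of that rule (Python; for clarity this solves densely for the Chebyshev coefficients, whereas a DCT gives the O(n log n) cost quoted in the text):

```python
import numpy as np

def clenshaw_curtis(fvals):
    """Integrate over [-1, 1] given values at the n+1 Chebyshev points
    cos(j*pi/n), by integrating the Chebyshev interpolant exactly."""
    n = len(fvals) - 1
    j = np.arange(n + 1)
    V = np.cos(np.outer(j, j) * np.pi / n)     # T_k(x_j) = cos(k*j*pi/n)
    a = np.linalg.solve(V, fvals)              # Chebyshev coefficients
    moments = np.zeros(n + 1)
    k = np.arange(0, n + 1, 2)
    moments[::2] = 2.0 / (1.0 - k ** 2)        # integral of T_k, even k; odd vanish
    return float(a @ moments)

x = np.cos(np.arange(17) * np.pi / 16)
print(clenshaw_curtis(np.exp(x)))   # ≈ e - 1/e = 2.3504...
```

Because the rule integrates the interpolant exactly, it is spectrally accurate for smooth integrands — the same values U_i used in the collocation step are reused directly as quadrature data.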
More informationChapter 30 MSMYP1 Further Complex Variable Theory
Chapter 30 MSMYP Further Complex Variable Theory (30.) Multifunctions A multifunction is a function that may take many values at the same point. Clearly such functions are problematic for an analytic study,
More informationComplex Analysis. Travis Dirle. December 4, 2016
Complex Analysis 2 Complex Analysis Travis Dirle December 4, 2016 2 Contents 1 Complex Numbers and Functions 1 2 Power Series 3 3 Analytic Functions 7 4 Logarithms and Branches 13 5 Complex Integration
More informationOn rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro
On rational approximation of algebraic functions http://arxiv.org/abs/math.ca/0409353 Julius Borcea joint work with Rikard Bøgvad & Boris Shapiro 1. Padé approximation: short overview 2. A scheme of rational
More informationAnalysis Qualifying Exam
Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationMultivariable Calculus
2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)
More informationLectures 9-10: Polynomial and piecewise polynomial interpolation
Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationHilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.
Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,
More informationFinite Difference Methods for Boundary Value Problems
Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point
More information7. Baker-Campbell-Hausdorff formula
7. Baker-Campbell-Hausdorff formula 7.1. Formulation. Let G GL(n,R) be a matrix Lie group and let g = Lie(G). The exponential map is an analytic diffeomorphim of a neighborhood of 0 in g with a neighborhood
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationA proof for the full Fourier series on [ π, π] is given here.
niform convergence of Fourier series A smooth function on an interval [a, b] may be represented by a full, sine, or cosine Fourier series, and pointwise convergence can be achieved, except possibly at
More informationPartial Differential Equations
Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,
More informationNotes on Complex Analysis
Michael Papadimitrakis Notes on Complex Analysis Department of Mathematics University of Crete Contents The complex plane.. The complex plane...................................2 Argument and polar representation.........................
More informationMATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES
MATH 353 LECTURE NOTES: WEEK 1 FIRST ORDER ODES J. WONG (FALL 2017) What did we cover this week? Basic definitions: DEs, linear operators, homogeneous (linear) ODEs. Solution techniques for some classes
More informationNumerical computation of the finite-genus solutions of the Korteweg-de Vries equation via Riemann Hilbert problems
Numerical computation of the finite-genus solutions of the Korteweg-de Vries equation via Riemann Hilbert problems Thomas Trogdon 1 and Bernard Deconinck Department of Applied Mathematics University of
More informationMATH 220: INNER PRODUCT SPACES, SYMMETRIC OPERATORS, ORTHOGONALITY
MATH 22: INNER PRODUCT SPACES, SYMMETRIC OPERATORS, ORTHOGONALITY When discussing separation of variables, we noted that at the last step we need to express the inhomogeneous initial or boundary data as
More informationINVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS. Quanlei Fang and Jingbo Xia
INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS Quanlei Fang and Jingbo Xia Abstract. Suppose that {e k } is an orthonormal basis for a separable, infinite-dimensional Hilbert
More informationEuler Equations: local existence
Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u
More informationHere are brief notes about topics covered in class on complex numbers, focusing on what is not covered in the textbook.
Phys374, Spring 2008, Prof. Ted Jacobson Department of Physics, University of Maryland Complex numbers version 5/21/08 Here are brief notes about topics covered in class on complex numbers, focusing on
More informationSobolev Spaces. Chapter 10
Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p
More informationTOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C :
TOEPLITZ OPERATORS EFTON PARK 1. Introduction to Toeplitz Operators Otto Toeplitz lived from 1881-1940 in Goettingen, and it was pretty rough there, so he eventually went to Palestine and eventually contracted
More information1 Solutions to selected problems
Solutions to selected problems Section., #a,c,d. a. p x = n for i = n : 0 p x = xp x + i end b. z = x, y = x for i = : n y = y + x i z = zy end c. y = (t x ), p t = a for i = : n y = y(t x i ) p t = p
More informationTaylor and Laurent Series
Chapter 4 Taylor and Laurent Series 4.. Taylor Series 4... Taylor Series for Holomorphic Functions. In Real Analysis, the Taylor series of a given function f : R R is given by: f (x + f (x (x x + f (x
More informationPICARD S THEOREM STEFAN FRIEDL
PICARD S THEOREM STEFAN FRIEDL Abstract. We give a summary for the proof of Picard s Theorem. The proof is for the most part an excerpt of [F]. 1. Introduction Definition. Let U C be an open subset. A
More informationTHE RESIDUE THEOREM. f(z) dz = 2πi res z=z0 f(z). C
THE RESIDUE THEOREM ontents 1. The Residue Formula 1 2. Applications and corollaries of the residue formula 2 3. ontour integration over more general curves 5 4. Defining the logarithm 7 Now that we have
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationLast Update: April 7, 201 0
M ath E S W inter Last Update: April 7, Introduction to Partial Differential Equations Disclaimer: his lecture note tries to provide an alternative approach to the material in Sections.. 5 in the textbook.
More informationNumerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018
Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start
More informationConsidering our result for the sum and product of analytic functions, this means that for (a 0, a 1,..., a N ) C N+1, the polynomial.
Lecture 3 Usual complex functions MATH-GA 245.00 Complex Variables Polynomials. Construction f : z z is analytic on all of C since its real and imaginary parts satisfy the Cauchy-Riemann relations and
More informationLet X be a topological space. We want it to look locally like C. So we make the following definition.
February 17, 2010 1 Riemann surfaces 1.1 Definitions and examples Let X be a topological space. We want it to look locally like C. So we make the following definition. Definition 1. A complex chart on
More information7.2 Conformal mappings
7.2 Conformal mappings Let f be an analytic function. At points where f (z) 0 such a map has the remarkable property that it is conformal. This means that angle is preserved (in the sense that any 2 smooth
More informationMathematical Foundations
Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for
More information4 Divergence theorem and its consequences
Tel Aviv University, 205/6 Analysis-IV 65 4 Divergence theorem and its consequences 4a Divergence and flux................. 65 4b Piecewise smooth case............... 67 4c Divergence of gradient: Laplacian........
More informationChapter 7. Extremal Problems. 7.1 Extrema and Local Extrema
Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced
More informationMS 3011 Exercises. December 11, 2013
MS 3011 Exercises December 11, 2013 The exercises are divided into (A) easy (B) medium and (C) hard. If you are particularly interested I also have some projects at the end which will deepen your understanding
More informationON THE ENERGY DECAY OF TWO COUPLED STRINGS THROUGH A JOINT DAMPER
Journal of Sound and Vibration (997) 203(3), 447 455 ON THE ENERGY DECAY OF TWO COUPLED STRINGS THROUGH A JOINT DAMPER Department of Mechanical and Automation Engineering, The Chinese University of Hong
More informationHamiltonian partial differential equations and Painlevé transcendents
The 6th TIMS-OCAMI-WASEDA Joint International Workshop on Integrable Systems and Mathematical Physics March 22-26, 2014 Hamiltonian partial differential equations and Painlevé transcendents Boris DUBROVIN
More informationAnalysis II: The Implicit and Inverse Function Theorems
Analysis II: The Implicit and Inverse Function Theorems Jesse Ratzkin November 17, 2009 Let f : R n R m be C 1. When is the zero set Z = {x R n : f(x) = 0} the graph of another function? When is Z nicely
More informationJUHA KINNUNEN. Harmonic Analysis
JUHA KINNUNEN Harmonic Analysis Department of Mathematics and Systems Analysis, Aalto University 27 Contents Calderón-Zygmund decomposition. Dyadic subcubes of a cube.........................2 Dyadic cubes
More informationDepartment of Mathematics, University of California, Berkeley. GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016
Department of Mathematics, University of California, Berkeley YOUR 1 OR 2 DIGIT EXAM NUMBER GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016 1. Please write your 1- or 2-digit exam number on
More informationComplex Analysis MATH 6300 Fall 2013 Homework 4
Complex Analysis MATH 6300 Fall 2013 Homework 4 Due Wednesday, December 11 at 5 PM Note that to get full credit on any problem in this class, you must solve the problems in an efficient and elegant manner,
More information