Adaptive Application of Green's Functions


1 Adaptive Application of Green's Functions: Fast algorithms for integral transforms, with a bit of QM. Fernando Pérez, with Gregory Beylkin (Colorado) and Martin Mohlenkamp (Ohio University). Department of Applied Mathematics, University of Colorado, Boulder. University of Michigan at Ann Arbor, April 12, 2007.

2 Outline 1 Context and motivation 2 Multiresolution analysis in d = 1 3 Multidimensional problems 4 d ≫ 1: beyond separation of variables 5 Quantum Mechanics F. P. (App. Math - CU Boulder) Adaptive Green's Functions UMich 2 / 49

8 Physical problems formulated as PDEs The Laplace/Poisson equations: Δu = f. The Schrödinger equation (for stationary states): (−½Δ + V)ψ = Eψ. The modified Stokes equation (time-stepping schemes for Navier-Stokes): αv − µΔv + ∇p = f, ∇·v = 0. A good fraction of the world's (scientific) computing time is devoted to the solution of this type of problem.

9 Numerical approaches Broadly and loosely speaking, there are two main options; each has its own set of difficulties. Discretize the differential operator and invert: sparse matrices (fast solvers)... that represent unbounded operators... and hence are ill-conditioned and require preconditioning. Write an integral formulation and apply the integral operator (Green's functions): well-conditioned objects... that lead to dense matrices... and don't easily generalize to multiple dimensions.

14 What are we after? Immediate goals: Numerical algorithms with finite but controlled precision. Multiscale, fully adaptive algorithms. Our approximations are a cousin of the Fast Multipole Method (FMM), but easier to generalize in dimension and kernel. Green's functions: G(r − r′). Many fundamental physical processes are 2-body interactions (Nature cooperates). Overall program: The "curse of dimensionality": the exponential rise in complexity of many algorithms with the underlying physical dimension. The multiparticle Schrödinger equation, Navier-Stokes. A toolbox of reliable algorithms for the efficient application of integral transforms in multiple dimensions.

16 Multiresolution algorithms in multiple dimensions Key mathematical ideas: 1 Multiresolution analysis (wavelets): sparse matrix representations for a large class of kernels. 2 Separated representations: reduction of the cost of dimensionality. Group effort over many years (1988-today): Gregory Beylkin, Lucas Monzón, Christopher Kurcz (CU Boulder); Martin Mohlenkamp (Ohio University); Robert Harrison, George Fann, Takeshi Yanai, Zhengting Gan (ORNL); Vani Cheruvu (now at NCAR); Robert Cramer (now at Raytheon).


19 Multiresolution analysis, formally Let V_j ⊂ L²(ℝ³) denote the subspaces of the multiresolution analysis corresponding to a multiwavelet basis of order p (p is the number of nodes per interval). We have V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯ ⊂ V_j ⊂ ⋯ or, for each V_n, V_n = V_0 ⊕ W_0 ⊕ W_1 ⊕ ⋯ ⊕ W_{n−1}, where V_j ⊕ W_j = V_{j+1}. The subspace V_0 is spanned by products of orthogonal polynomials up to degree p − 1 in each variable, localized within cubes shifted by n ∈ ℤ³.

20 Multiresolution analysis, intuitively Imagine a simple signal f(t) you want to study. At each scale n, divide the unit interval [0,1] into 2^n binary subintervals.

22 Multiresolution basics (2) And compute: Average (s) values of the function at level n: space V_n. Differences (d) between successive levels: space W_n = V_{n+1} ⊖ V_n. f(t) can then be studied (compressed, denoised, ...) from {V_0, W_0, W_1, ...}. (Figure: panels V2, V4, V9-full, W2, W4, W...) The d coefficients are small and localized around changes. We'll use multiwavelets: p coefficients per subinterval (a "high order Haar").
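The average/difference splitting above can be sketched numerically. Below is a minimal NumPy illustration for the plain Haar case (p = 1; the talk's order-p multiwavelets generalize this); the helper names `haar_decompose` and `haar_reconstruct` are ours, not code from the talk.

```python
import numpy as np

def haar_decompose(s):
    """One level: fine-scale averages s -> coarse averages (V) and differences (W)."""
    return (s[0::2] + s[1::2]) / np.sqrt(2.0), (s[0::2] - s[1::2]) / np.sqrt(2.0)

def haar_reconstruct(coarse, detail):
    """Invert haar_decompose: merge averages and differences back to the finer scale."""
    s = np.empty(2 * coarse.size)
    s[0::2] = (coarse + detail) / np.sqrt(2.0)
    s[1::2] = (coarse - detail) / np.sqrt(2.0)
    return s

# A smooth signal sampled on 2^9 subintervals of [0, 1].
t = (np.arange(512) + 0.5) / 512
f = np.exp(-50.0 * (t - 0.5) ** 2)

coarse, details = f, []
for _ in range(9):
    coarse, d = haar_decompose(coarse)
    details.append(d)                 # finest differences first

# Perfect reconstruction from {V_0, W_0, W_1, ...} ...
g = coarse
for d in reversed(details):
    g = haar_reconstruct(g, d)
assert np.allclose(g, f)

# ... and for a smooth signal the difference (d) coefficients are small.
assert np.max(np.abs(details[0])) < 0.02
```

The same two assertions are exactly the compression story of the slide: nothing is lost, and almost all of the signal's energy sits in the averages.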

24 Adaptive subdivision of the unit interval in ℝ^d Simple recursive subdivision produces a d-binary tree on the unit interval, with p^d coefficient blocks on the leaves. (Figure: tree with nodes labeled (0,0) through (5,1).)

25 Functions: adaptive, controlled-accuracy decompositions (Figures: two adaptive decompositions, with Nnod = 12, ε = 1.0e−10, Nblocks = 21 and Nnod = 10, ε = 5.0e−11, Nblocks = 634.)

27 Applying an operator to a function: g = Tf Definitions (projections onto various subspaces): For the subspaces V_j, P_j : L²([0,1]^d) → V_j. For the subspaces W_j, the orthogonal projector Q_j : L²([0,1]^d) → W_j is defined as Q_j = P_{j+1} − P_j. For an operator T : L²([0,1]^d) → L²([0,1]^d), we define its projection T_j : V_j → V_j as T_j = P_j T P_j. For a function f, f_j = P_j f. We can think of f_j as a vector with 2^j p entries. Goal: build an efficient algorithm to compute g_n = T f_n for a given input function f, with n chosen adaptively and with controlled accuracy.

29 Operators (d = 1): sparse representations Again, project the operator on each scale and use differences: T_n = T_0 + (T_1 − T_0) + (T_2 − T_1) + ⋯ = T_0 + Σ_{j=1}^n D_j. (Figures: T_j and D_j for j = 3 and j = 7.)
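The telescopic sum can be checked with a small matrix sketch: project a discretized kernel onto each scale by block averaging (the piecewise-constant analogue of P_j T P_j) and confirm both the telescoping identity and the concentration of the differences D_j near the diagonal. The log kernel, the regularizer, and the helper names are illustrative choices, not the talk's implementation.

```python
import numpy as np

def project(T):
    """P_{j-1} T P_{j-1} for a piecewise-constant basis: average 2x2 blocks."""
    return 0.25 * (T[0::2, 0::2] + T[0::2, 1::2] + T[1::2, 0::2] + T[1::2, 1::2])

def lift(T):
    """Lift a coarse-scale matrix to the finer scale (constant on 2x2 blocks)."""
    return np.kron(T, np.ones((2, 2)))

n = 7                                                # finest scale: 2^7 intervals
x = (np.arange(2 ** n) + 0.5) / 2 ** n
K = np.log(np.abs(x[:, None] - x[None, :]) + 1e-9)   # kernel, smooth off the diagonal

Ts = [K]                                             # T_j on every scale, by projection
for _ in range(n):
    Ts.append(project(Ts[-1]))
Ts = Ts[::-1]                                        # Ts[j] now lives on scale j

# Telescoping: lifting T_0 and adding the differences D_j recovers T_n exactly.
acc = Ts[0]
for j in range(1, n + 1):
    acc = lift(acc) + (Ts[j] - lift(Ts[j - 1]))      # add D_j = T_j - lifted T_{j-1}
assert np.allclose(acc, Ts[n])

# The difference on the finest scale is large near the diagonal, tiny far away.
D = Ts[n] - lift(Ts[n - 1])
assert abs(D[0, 1]) > 1.0 and abs(D[0, -1]) < 0.05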

31 Operator application in d = 1: the non-standard form We wish to compute g = Tf using the telescopic expansion ĝ_0 = T_0 f_0, ĝ_1 = [T_1 − (T_0)] f_1, ĝ_2 = [T_2 − (T_1)] f_2, ..., ĝ_j = [T_j − (T_{j−1})] f_j, where (T_{j−1}) denotes T_{j−1} lifted to scale j. For this, we must construct a redundant tree out of our adaptive one. (Figure: Nnod = 8, ε = 1.0e−04, Nblocks = 17 becomes Nblocks = 33 on the redundant tree.)

33 Natural-scale application: the modified NSF Observations: We do not have to apply the blocks of the lifted operator (T_{j−1}) on the finer scale j. We first truncate the bands on scale j for the difference T_j − (T_{j−1}); this defines a banded operator on scale j, T_j^{b_j}, with band b_j. We then bring back T_{j−1} to its natural scale, j − 1. Algorithmic implications: All operations are done on indexing tables. These tables are then used to drive the application of the operator.

35 The modified NSF A graphical illustration of what we gain for T_j (j = 5 shown).

37 Natural-scale application Mathematically, this means that ĝ_0 = [T_0 − T_0^{b_1/2}] f_0, ĝ_1 = [T_1 − T_1^{b_2/2}] f_1, ĝ_2 = [T_2 − T_2^{b_3/2}] f_2, ..., ĝ_j = T_j^{b_j} f_j, and the fully assembled result for any given scale n is given by g_n = T_n^{b_n} f_n + [(T_{n−1}^{b_{n−1}} − T_{n−1}^{b_n/2}) f_{n−1} + [(T_{n−2}^{b_{n−2}} − T_{n−2}^{b_{n−1}/2}) f_{n−2} + ⋯ + [(T_0 − T_0^{b_1/2}) f_0] ⋯ ]]. Observation: Inter-scale interactions are dealt with by computationally inexpensive projection steps, not at operator application time.

39 Adaptive natural-scale application: a graphical representation.

40 Numerical example in 1D Consider the periodic analogue of the Hilbert transform, (Cf)(y) = p.v. ∫_0^1 cot(π(y − x)) f(x) dx, applied to the periodic function f(x) = Σ_{k∈ℤ} e^{−a(x+k−1/2)²}, for which (Cf)(y) = −i √(π/a) Σ_{n∈ℤ} sign(n) (−1)^n e^{−n²π²/a} e^{2πiny}.
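This example can be cross-checked by direct FFT (rather than the adaptive multiwavelet algorithm of the talk): the p.v. cot kernel acts as the Fourier multiplier −i·sign(n), which is trivial to apply on a uniform grid. The helper name `conjugate_function` and the value a = 100 are our choices.

```python
import numpy as np

def conjugate_function(f):
    """(Cf)(y) = p.v. integral_0^1 cot(pi (y - x)) f(x) dx, applied via its
    Fourier multiplier: the n-th Fourier coefficient is scaled by -i sign(n)."""
    n = np.fft.fftfreq(f.size, d=1.0 / f.size)      # integer frequencies
    return np.real(np.fft.ifft(-1j * np.sign(n) * np.fft.fft(f)))

N = 256
x = np.arange(N) / N

# Sanity check of the multiplier: C maps cos(2 pi x) to sin(2 pi x).
out = conjugate_function(np.cos(2 * np.pi * x))
assert np.max(np.abs(out - np.sin(2 * np.pi * x))) < 1e-12

# The periodized Gaussian of the slide, against its closed-form image under C.
a = 100.0
f = sum(np.exp(-a * (x + k - 0.5) ** 2) for k in range(-3, 4))
ns = np.arange(1, 61)
series = 2 * np.sqrt(np.pi / a) * (
    ((-1.0) ** ns * np.exp(-np.pi ** 2 * ns ** 2 / a))
    @ np.sin(2 * np.pi * np.outer(ns, x)))
assert np.max(np.abs(conjugate_function(f) - series)) < 1e-10
```

Of course, this spectral shortcut is only available because the example is a convolution on a uniform grid; the point of the talk's algorithm is that it stays fast for adaptive, non-uniform refinement.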

42 d = 1, lessons learned The Good: 1 Sparse representations of operators via multiwavelets lead to fast algorithms. 2 Accuracy is guaranteed by construction. 3 We can efficiently handle multi-scale interactions. 4 While the example was a convolution (fast via FFT), we have an automatically adaptive algorithm. The Bad: this approach does not directly extend to d > 1.


45 The curse of dimensionality A simple observation: numerical algorithms (C = AB, y = Ax) in d physical dimensions scale exponentially with d in complexity. Not good. A typical example: Poisson's equation (electromagnetics, gravity, ...): ∇²φ(r) = −ρ(r). A Green's function solution (free space, d = 3, ignoring constants): φ(r) = ∫ G(r − r′) ρ(r′) d³r′ = ∫ ρ(r′)/|r − r′| d³r′. If we discretize using a global basis, this becomes φ_{ijk} = Σ_{i′j′k′=1}^N G_{ii′,jj′,kk′} ρ_{i′j′k′}. Applying an integral kernel is a matrix-vector multiplication.

48 Can we do this efficiently for d > 1? What if we could write G_{ii′,jj′,kk′} = Σ_{m=1}^M w_m F_{ii′}^m F_{jj′}^m F_{kk′}^m? We could then separate the different dimensions: φ_{ijk} = Σ_{m=1}^M w_m Σ_{i′} F_{ii′}^m [ Σ_{j′} F_{jj′}^m ( Σ_{k′} F_{kk′}^m ρ_{i′j′k′} ) ]. The problem partially factorizes. And if we can construct sparse representations of the F_{ii′}^m, we may have a fast, multidimensional algorithm.
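The factorized application can be demonstrated on random data; the sizes N and M and all array names below are arbitrary choices for the sketch, which compares the dense O(N^6) apply with the direction-by-direction O(M N^4) apply.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4
w = rng.standard_normal(M)
F = rng.standard_normal((M, 3, N, N))     # F[m, axis] plays the role of F^m
rho = rng.standard_normal((N, N, N))

# Dense kernel: G[i,j,k,i',j',k'] = sum_m w_m F1^m[i,i'] F2^m[j,j'] F3^m[k,k'].
G = np.einsum('m,mab,mcd,mef->acebdf', w, F[:, 0], F[:, 1], F[:, 2])
phi_dense = np.einsum('acebdf,bdf->ace', G, rho)     # O(N^6) work

# Separated application: contract one direction at a time, O(M N^4) work.
phi_sep = np.zeros((N, N, N))
for m in range(M):
    t = np.tensordot(F[m, 2], rho, axes=([1], [2]))  # sum over k' -> axes (k, i', j')
    t = np.tensordot(F[m, 1], t, axes=([1], [2]))    # sum over j' -> axes (j, k, i')
    t = np.tensordot(F[m, 0], t, axes=([1], [2]))    # sum over i' -> axes (i, j, k)
    phi_sep += w[m] * t

assert np.allclose(phi_dense, phi_sep)
```

The two results agree to machine precision; only the operation count differs, which is the entire point of the factorization.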

51 Operators (d > 1): Gaussians to the rescue We can approximate a wide class of kernels as sums of Gaussians: 1/|r − r′| ≈ Σ_{m=1}^M w_m e^{−τ_m |r − r′|²}, with controlled accuracy ε over a large dynamic range, with M = O(log(1/ε)). This gives us the factorization we wanted for our kernel: G_{ii′,jj′,kk′} = Σ_{m=1}^M w_m F_{ii′}^m F_{jj′}^m F_{kk′}^m.
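One standard route to such an expansion (a sketch in the spirit of the integral-representation constructions; the step size and truncation limits below are our choices, tuned for r ∈ [10⁻², 1]) is to discretize an exponential substitution of the Gaussian integral for 1/r with the trapezoidal rule:

```python
import numpy as np

# Substituting t = e^s into 1/r = (2/sqrt(pi)) int_0^inf exp(-r^2 t^2) dt gives
# 1/r = (2/sqrt(pi)) int_{-inf}^{inf} exp(-r^2 e^{2s} + s) ds; its trapezoidal
# discretization directly yields weights w_m and exponents tau_m.
h = 0.1                                   # step size: accuracy improves like exp(-c/h)
s = np.arange(-25.0, 10.0 + h / 2, h)     # truncation chosen for r in [1e-2, 1]
tau = np.exp(2.0 * s)
w = (2.0 / np.sqrt(np.pi)) * h * np.exp(s)

r = np.logspace(-2, 0, 200)
approx = np.exp(-np.outer(r ** 2, tau)) @ w
assert np.max(np.abs(approx - 1.0 / r) * r) < 1e-9   # relative error over the range
```

Coarsening h and tightening the truncation trades accuracy for fewer terms, consistent with M = O(log(1/ε)); the optimized expansions referenced in the talk achieve far smaller M than this naive trapezoidal sketch.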

52 Sparsity in d > 1 Problem: how do we construct sparse multidimensional operators? Solution: use the separated structure of the modified ns-form: T_j^l − (T_{j−1}^l) = Σ_{m=1}^M w_m F_j^{m;l_1} ⊗ F_j^{m;l_2} ⊗ F_j^{m;l_3} − Σ_{m=1}^M w_m (F_{j−1}^{m;l_1}) ⊗ (F_{j−1}^{m;l_2}) ⊗ (F_{j−1}^{m;l_3}), where parentheses denote factors lifted from the coarser scale.

54 Norm considerations for sparsity in d > 1 Solution... The norm of T_j^l − (T_{j−1}^l) decays rapidly with l = (l_1, l_2, l_3). We can define N_dif^{j;m;l} = ‖F_j^{m;l} − (F_{j−1}^{m;l})‖, N_F^{j;m;l} = ‖F_j^{m;l}‖, Ñ_F^{j;m;l} = ‖(F_{j−1}^{m;l})‖. With these, we can estimate ‖T_j^l − (T_{j−1}^l)‖ ≤ (1/6) Σ_{m=1}^M w_m sym[ N_dif^{j;m;l_1} N_F^{j;m;l_2} N_F^{j;m;l_3} + N_F^{j;m;l_1} N_dif^{j;m;l_2} N_F^{j;m;l_3} + N_F^{j;m;l_1} N_F^{j;m;l_2} N_dif^{j;m;l_3} ]. Note: this estimate is based strictly on one-dimensional quantities.

56 Sparsity also in the m direction? The Gaussian expansion gave us separation of directions... at the cost of a new internal degree of freedom, the separation index m. Do we really need all these terms?

60 The 2-scale differences cancel most terms

61 The multidimensional algorithm Use the Gaussian approximation to express the kernel in separable form. Construct all the one-dimensional sparse structures as for d = 1. Compute the norm estimates for the separation series → effective separation range. Observe that different values of l have different effective ranges; this defines various regions (based on how much the singularity of the kernel contributes). Build all the necessary indexing tables (within blocks, in separation index, per region). Apply the lot, much like in the d = 1 case.

67 Poisson's equation: an example Let's solve Poisson's equation for a simple case with a known solution, a sum of Gaussians: ρ(r; α) = Σ_{i=1}^3 (6α − 4α²r_i²) e^{−αr_i²} ⟹ φ(r; α) = Σ_{i=1}^3 e^{−αr_i²}, with α = 300. We compute φ(r) = (1/4π) ∫_{ℝ³} ρ(r′)/|r − r′| dr′. Timings: done on a Pentium 4, 2.8 GHz machine. (Table: tolerance, resulting ε_{L²}, basis order, application time (s), MFLOPS.)


70 The Separated Representation The standard separation of variables, f(x_1, x_2, ..., x_d) = φ_1(x_1) φ_2(x_2) ⋯ φ_d(x_d), only leads to exact solutions in a limited number of cases. Definition: For a given ε, we represent a matrix A = A(j_1, j_1′; ...; j_d, j_d′) in dimension d as Σ_{l=1}^r s_l A_1^l(j_1, j_1′) A_2^l(j_2, j_2′) ⋯ A_d^l(j_d, j_d′), where s_l is a scalar, s_1 ≥ ⋯ ≥ s_r > 0, and the A_i^l are matrices with entries A_i^l(j_i, j_i′) and norm one. We require ‖A − Σ_{l=1}^r s_l A_1^l ⊗ A_2^l ⊗ ⋯ ⊗ A_d^l‖ ≤ ε. We call the scalars s_l separation values and the number r the separation rank.

72 Remarks on the Separated Representation Fundamentally developed by G. Beylkin and M. Mohlenkamp. For d = 2, it reduces to the well-known SVD. Separated representations are not unique; if d ≥ 3 the analogy with the SVD breaks down: by changing ε we change all terms, rather than adding or deleting terms. They can be ill-conditioned: we have developed algorithms for reducing the separation rank while controlling the condition number. Efficient computation in this framework requires a combined consideration of both operators and functions. An operator can also be written as T = Σ_{l=1}^r s_l B_1^l ⊗ ⋯ ⊗ B_d^l. A major computational challenge lies in reducing the naïve rank (the product of the operator and function ranks) of g = Tf when f is represented in separated form.

74 An illustration: a new trigonometric identity Consider the following function of d variables (x_1, x_2, ..., x_d): f(x) = sin(Σ_{j=1}^d x_j). If expanded using textbook trig identities, this produces 2^{d−1} terms. However, the following is true: f(x) = Σ_{j=1}^d sin(x_j) Π_{k=1, k≠j}^d sin(x_k + α_k − α_j) / sin(α_k − α_j) for all choices of {α_j} such that α_j ≠ α_k for all j ≠ k. This is an identity, not an approximate result, and it was discovered via numerical experimentation. The separated representation is not unique... but many choices are ill-conditioned.
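The identity is easy to verify numerically; the check below uses one well-conditioned choice of equally spaced α_j (our choice) and random arguments x.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
x = rng.uniform(0.0, 2.0 * np.pi, d)
alpha = np.arange(d) * np.pi / d          # distinct alphas, differences in (0, pi)

# lhs: the d-variable sine; rhs: the rank-d separated representation.
lhs = np.sin(x.sum())
rhs = sum(
    np.sin(x[j]) * np.prod([np.sin(x[k] + alpha[k] - alpha[j]) /
                            np.sin(alpha[k] - alpha[j])
                            for k in range(d) if k != j])
    for j in range(d))

assert abs(lhs - rhs) < 1e-10             # exact identity, up to roundoff
```

With nearly coincident α_j the individual terms grow large and cancel, which is exactly the ill-conditioning the slide warns about.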


78 Quantum mechanics (H atom) Schrödinger's equation in an integral (Lippmann-Schwinger) formulation: we rewrite −½Δψ − (1/r)ψ = Eψ as φ = −2 G_µ Vφ, where G_µ = (−Δ + µ²I)^{−1} is the Green's function for some µ and V = −1/r is the nuclear potential. This can be solved iteratively: 1 Initialize with some value µ_0 and function φ. 2 Apply the Green's function G_µ to the product Vφ to compute φ_new = −2 G_µ Vφ. 3 Compute the energy for φ_new: E_new = (½⟨∇φ_new, ∇φ_new⟩ + ⟨Vφ_new, φ_new⟩) / ⟨φ_new, φ_new⟩. 4 Set µ = √(−2 E_new), φ = φ_new/‖φ_new‖, and return to Step 2.
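The same iteration can be sketched in 1D. As a stand-in with a known answer (our choice, not from the talk), take the Pöschl–Teller well V(x) = −sech²x, whose ground state is E = −1/2 with ψ = sech x, and apply G_µ spectrally on a large periodic box:

```python
import numpy as np

L, N = 20.0, 1024
x = -L + 2.0 * L * np.arange(N) / N
dx = 2.0 * L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
V = -1.0 / np.cosh(x) ** 2                 # Poschl-Teller: E0 = -1/2, psi0 = sech x

def green(f, mu):
    """Apply G_mu = (-d^2/dx^2 + mu^2 I)^(-1) via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(f) / (k ** 2 + mu ** 2)))

phi, E = np.exp(-x ** 2), -0.4             # rough initial guess
for _ in range(100):
    mu = np.sqrt(-2.0 * E)
    phi = -2.0 * green(V * phi, mu)        # phi_new = -2 G_mu V phi
    phi /= np.sqrt(np.sum(phi ** 2) * dx)  # normalize
    dphi = np.real(np.fft.ifft(1j * k * np.fft.fft(phi)))
    E = np.sum(0.5 * dphi ** 2 + V * phi ** 2) * dx     # <T> + <V>

assert abs(E - (-0.5)) < 1e-4
```

Note the structural advantage the slide is after: the iteration only ever applies a well-conditioned integral operator, never an unbounded differential one.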

80 Error for the Hydrogen atom ground state eigenvalue

81 The Multiparticle Schrödinger equation Consider the Hamiltonian of a system with K nuclei and N_e electrons: H = Σ_{j=1}^{N_e} ( −½Δ_j − Σ_{k=1}^K Z_k/|x_j − R_k| ) + Σ_{i>j} 1/|x_i − x_j|. The ultimate goal is to solve the eigenvalue problem Hψ = Eψ without resorting to a reduced model. The curse of dimensionality, the additional requirement of antisymmetry, and size extensivity are just several of the many issues that need to be resolved. We need numerical methods to transition from one-particle theories (e.g. Kohn-Sham, HF) to a two-particle approach.

82 Antisymmetry Electrons are fermions: the wave function must be antisymmetric, e.g., ψ(γ_2, γ_1, γ_3, ...) = −ψ(γ_1, γ_2, γ_3, ...), where γ = ((x, y, z), σ) and σ represents the spin degrees of freedom. For a function of N variables, the antisymmetrizer A is defined as A = (1/N!) Σ_{p∈S_N} (−1)^p P, where S_N is the permutation group on N elements. If A is applied to a separable function, the result can be expressed as a Slater determinant: A Π_{j=1}^N φ_j(γ_j) = (1/N!) det [φ_i(γ_j)]_{i,j=1,...,N}.
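The determinant construction makes antisymmetry automatic, which a tiny sketch confirms; the random polynomial "orbitals" and the helper name `psi` are illustrative stand-ins.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
Np = 4                                        # number of particles
coeffs = rng.standard_normal((Np, Np))        # random cubic "orbitals" phi_i

def psi(gammas):
    """A prod_j phi_j(gamma_j) = (1/N!) det[phi_i(gamma_j)]."""
    M = np.array([[np.polyval(coeffs[i], g) for g in gammas] for i in range(Np)])
    return np.linalg.det(M) / factorial(Np)

g = rng.standard_normal(Np)
v = psi(g)
g2 = g.copy()
g2[[0, 1]] = g[[1, 0]]                        # exchange particles 1 and 2
assert np.isclose(psi(g2), -v)                # the wavefunction flips sign
```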

83 The Wavefunction: an unconstrained sum of Slater determinants We consider approximations to the wavefunction of the form ψ_r = A Σ_{l=1}^r s_l Π_{i=1}^N φ_i^l(γ_i) = (1/N!) Σ_{l=1}^r s_l det [φ_i^l(γ_j)]_{i,j=1,...,N}. We impose no constraints (such as orthogonality). Without constraints, we can have ψ = A Π_{i=1}^N φ_i(γ_i) + A c Π_{i=1}^N (φ_i(γ_i) + φ_{i+N}(γ_i)), where {φ_j}_{j=1}^{2N} form an orthonormal set. Conventional methods that constrain the factors to come from an orthogonal set are forced to multiply out the second term and thus obtain 2^N terms.

85 Weak Formulation The number of terms in A Π_{j=1}^N φ_j(γ_j) grows exponentially fast, and can be reduced only somewhat by algebraic manipulation. However, if we care only about computing inner products with A Π_{j=1}^N φ_j(γ_j), then the so-called Löwdin rules provide a solution: ⟨A Π_{j=1}^N φ_j(γ_j), A Π_{j=1}^N φ̃_j(γ_j)⟩ = (1/N!) det [⟨φ_i, φ̃_j⟩]_{i,j=1,...,N}. Computing the determinant costs at most O(N³). For large N the matrix is banded and the cost is O(N).
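The Löwdin rule can be verified by brute force for small N, with orbitals discretized as vectors (all sizes and names below are illustrative): build the full antisymmetrized tensor on one side, and the determinant of the Gram matrix of inner products on the other.

```python
import numpy as np
from math import factorial
from itertools import permutations

rng = np.random.default_rng(3)
N, m = 3, 5                              # 3 particles, each on a 5-point "grid"
phi = rng.standard_normal((N, m))        # orbitals phi_i as vectors
phit = rng.standard_normal((N, m))       # a second set, phi-tilde

def antisymmetrize(orbs):
    """Full tensor of A prod_j phi_j: (1/N!) sum_p (-1)^p phi_{p(1)} x ... x phi_{p(N)}."""
    T = np.zeros((m,) * N)
    for p in permutations(range(N)):
        sign = np.linalg.det(np.eye(N)[list(p)])   # permutation sign, +-1
        term = orbs[p[0]]
        for j in p[1:]:
            term = np.multiply.outer(term, orbs[j])
        T += sign * term
    return T / factorial(N)

lhs = np.sum(antisymmetrize(phi) * antisymmetrize(phit))   # <A phi, A phi~>
rhs = np.linalg.det(phi @ phit.T) / factorial(N)           # Lowdin: det(<phi_i, phi~_j>)/N!
assert np.isclose(lhs, rhs)
```

The left side touches m^N tensor entries, while the right side is an N×N determinant: this is exactly the exponential-to-polynomial reduction the slide describes.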

86 Computed slices of a He wavefunction with r = 4

87 Computed φ_1^1(·, ·, 0; 1/2), s_3 =

88 Computed φ_1^2(·, ·, 0; 1/2), s_3 =

89 Computed φ_1^3(·, ·, 0; 1/2), s_3 =

90 Summary A new fast, adaptive algorithm for applying integral kernels via multiwavelets, with guaranteed precision. Implementations exist for d = 1, 2, 3. The core algorithm is kernel-independent, and is already implemented for several physically important kernels. Our current big goal: the multiparticle Schrödinger equation (in progress). Outlook: The structure of the wavefunction and of the multiparticle Green's function: there is still much to learn. Navier-Stokes: a project of its own. Other boundary conditions, more operators: we want a high-level toolbox based on these ideas for end-users. Complex geometries, scattering, time-dependent problems, etc.: undeveloped...

92 References

- Beylkin, Coifman and Rokhlin, Comm. Pure Appl. Math. 44 (1991).
- Alpert, SIAM J. Math. Anal. 24 (1993).
- Alpert, Beylkin, Gines and Vozovoi, J. Comp. Phys. 182 (2002).
- Beylkin and Mohlenkamp, Proc. Nat. Acad. Sci. 99 (2002).
- Beylkin and Monzón, Appl. Comp. Harmonic Analysis 12 (2002).
- Beylkin, Cheruvu and Pérez, J. Comp. Phys. (submitted).
- Beylkin, Mohlenkamp and Pérez, J. Comp. Phys. (in preparation).


More information

Course Requirements. Course Mechanics. Projects & Exams. Homework. Week 1. Introduction. Fast Multipole Methods: Fundamentals & Applications

Course Requirements. Course Mechanics. Projects & Exams. Homework. Week 1. Introduction. Fast Multipole Methods: Fundamentals & Applications Week 1. Introduction. Fast Multipole Methods: Fundamentals & Applications Ramani Duraiswami Nail A. Gumerov What are multipole methods and what is this course about. Problems from phsics, mathematics,

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

A new future for finite-element methods in Quantum Chemistry?

A new future for finite-element methods in Quantum Chemistry? A new future for finite-element methods in Quantum Chemistry? Kenneth Ruud Department of Chemistry University of Tromsø 9037 Tromsø Norway Geilo, 29/1 2007 A new future for finite-element methods in Quantum

More information