Steerable Filters in Motion Estimation


Technical Report TR-L-0401

Steerable Filters in Motion Estimation

Kai Krajsek

Version 1.0, May 2006

Computer Vision Group, Institute for Applied Physics
Johann Wolfgang Goethe-Universität, Frankfurt am Main, Germany

Abstract

This report deals with steerable filters in the context of motion estimation. The classical brightness constancy constraint equation (BCCE) is only one way to relate the optical flow to the first order directional derivative of the observed signal. We abstract from first order directional derivatives to the most general class of filters that nullify the signal when applied in the direction of motion and give a finite response otherwise. A method for adapting linear combinations of such directional filters to the characteristics of the signal and the noise is derived. Steerability, one of the essential properties of this class of filters, is not fully covered by recent steerable approaches. The approach of Michaelis and Sommer [13] only covers steerable filters deformed by Abelian Lie groups. Although the approach of Hel-Or and Teo [9] considers all kinds of Lie group transformations, its method for determining the basis functions may not converge. We extend these steerable approaches to arbitrary Lie groups, including the important case of the rotation group SO(3) in three dimensions. In contrast to the theories above, which either do not cover the case of non-Abelian groups [13] or do not work for every function to be made steerable [9], our approach uses the full power of Lie group theory to generate the minimum number of basis functions also for non-Abelian compact Lie groups. It is a direct extension of the approach of [13], in which the basis functions are generated from the eigenfunctions of the generators of the Lie group. In our approach a Casimir operator is used to generate the basis functions also for non-Abelian compact Lie groups. For non-Abelian, non-compact groups we use polynomials as basis functions.

Contents

1 Introduction
2 Foundations of differential motion estimation
  2.1 Brightness constancy assumption
  2.2 Local constraints
  2.3 Confidence measures
3 Generalization of the differential based motion estimation
4 Derivative operators of discrete signals
  4.1 Optimal combination of different directional filters
5 Motion estimation using directional filters
6 Steerable filters
  6.1 Steerable Filters in 2D
    6.1.1 Steerable functions for the rotation group
    6.1.2 Steerability based on Lie group theory
  6.2 Steerable filters in 3D
    6.2.1 Axial Symmetric Steerable Filters
    6.2.2 Steerability Based on Non Orthogonal Basis Functions
7 An Extended Steerable Approach Based on Lie Group Theory
  7.1 Conditions for Steerability
  7.2 The Basis Functions
    7.2.1 Basis Functions for Compact Lie Groups
    7.2.2 Basis Functions for Non-Compact Lie Groups
  7.3 The Interpolation Functions
  7.4 Relation to Recent Approaches
8 Examples of steerable filter kernels
  8.1 Example of a steerable function in 2D
  8.2 Steerable Function in 3D
    8.2.1 The transformation matrix P for the 3D rotation group
    8.2.2 A Steerable Function in 3D
9 Motion estimation with steerable filters
  9.1 The generalized structure tensor
10 Summary and Conclusion

1 Introduction

Steerability is one of the essential properties of directional derivative operators. The direction of the derivative of an N dimensional function is determined by the coefficients of a linear combination of N partial derivatives. Thus, only N partial derivatives have to be computed in order to take the derivative in any direction. Moreover, after taking the derivative, any further directional derivative can be obtained by the appropriate linear combination of the filter responses without any further filter operations. Since filter operations on high dimensional signals are expensive, the steerability property is interesting not only for derivative operators in image processing. Freeman and Adelson [6] formulated this abstraction from the directional derivative to general shift invariant filters. Later this concept was generalized from rotation to arbitrary Lie group transformations [4, 9, 13]. In order to lower the minimum number of filter operations further, approximative steerable techniques have been developed [14, 18].

This report gives an overview of the concept of steerable filters in the context of optical flow estimation. Estimating the optical flow is one of the fundamental problems in image processing. Numerous techniques have been developed, known as differential, correlation [1], energy based or phase based methods [5]. In differential based motion estimation the directional derivative (usually of first order) of the signal is related to the optical flow. We abstract from derivative operators and examine the general properties of filters which can be used for this purpose. These filters are denoted as directional filters. It turns out that their main property, steerability, is not covered by recent steerable approaches. Thus, we extend the recent steerable approaches using Lie group theory. We conclude with a presentation of a motion estimator based on directional filters.
2 Foundations of differential motion estimation

Optical flow is the 2D motion field generated by the projection of the 3D velocity of scene points onto the image surface. In motion estimation we try to reconstruct this field by examining brightness changes in image sequences. One fundamental problem is that brightness may change for various reasons, only one of which is motion. For instance, the brightness of an object could change over time without any movement due to a change of the light source. Nevertheless, brightness changes are the only available information for deducing motion in image sequences.

2.1 Brightness constancy assumption

In this report we refer to the simplest assumption in differential motion estimation: the brightness s(x̌, t) at a certain point x̌ = (x, y) at time t varies only due to motion. This means that the brightness of objects or patterns does not change over time or, more formally, that the total derivative of the brightness with respect to time is zero:

\frac{ds}{dt} = \frac{\partial s}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial s}{\partial y}\frac{\partial y}{\partial t} + \frac{\partial s}{\partial t} = 0. \qquad (1)

This equation is known as the brightness constancy constraint equation (BCCE). Obviously, such an equation is not sufficient to uniquely determine the two unknown flow parameters v_1 := \partial x/\partial t, v_2 := \partial y/\partial t. This is denoted as the aperture problem of motion estimation. The BCCE defines a constraint line in the (v_1, v_2) space on which possible flow vectors can lie. Thus, only the optical flow component perpendicular to this line can be estimated. An additional constraint is needed to determine a unique flow vector. Two different approaches to cope with this problem are presented in the next sections.

First we want to mention another way to express the BCCE. Considering time as an additional dimension equal to the spatial dimensions, we obtain the space-time volume A of the image sequence. We now treat the spatial variables x̌ and the time variable t equally and denote a certain point in the space-time volume by x = (x_1, x_2, x_3). From this point of view, motion estimation in an image sequence can be considered as a direction estimation problem in A. An object moving in a two dimensional image plane becomes a directed structure with constant brightness in A. Let r = (r_1, r_2, r_3) denote this direction, called the direction of motion. The task of estimating the motion of a pattern in an image sequence equals the task of estimating the direction r of a signal s(x) within the space-time volume A. The optical flow is related to the components of the direction of motion vector via v = (r_1/r_3, r_2/r_3). Since the signal s(x) is constant along the direction of motion, the directional derivative nullifies the signal in this direction:

\frac{\partial s}{\partial r} := r \cdot \nabla s = 0. \qquad (2)

Denoting the gradient of the signal with respect to x as g := \nabla s, the BCCE can be written in the form

g^T r = 0. \qquad (3)

Again the aperture problem persists, since equation (3) is only another formulation of the BCCE.
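The orthogonality constraint (3) is easy to check on a synthetic sequence. The following sketch builds a translating pattern and verifies that the space-time gradient is orthogonal to r ∝ (v_1, v_2, 1) at interior points; the pattern, grid size and central-difference derivatives are illustrative choices, not fixed by the report.

```python
import numpy as np

def bcce_residual(v1=0.5, v2=-0.25):
    """Numerically check eq. (3): for a pattern translating with
    velocity (v1, v2) the space-time gradient g = (s_x, s_y, s_t)
    is orthogonal to the direction of motion r ~ (v1, v2, 1)."""
    t, y, x = np.meshgrid(np.arange(9), np.arange(33), np.arange(33),
                          indexing="ij")
    s = np.sin(0.3 * (x - v1 * t) + 0.2 * (y - v2 * t))
    gx = np.gradient(s, axis=2)   # s_x by central differences
    gy = np.gradient(s, axis=1)   # s_y
    gt = np.gradient(s, axis=0)   # s_t
    res = gx * v1 + gy * v2 + gt  # g^T r, up to normalization of r
    return float(np.max(np.abs(res[1:-1, 1:-1, 1:-1])))
```

The residual is not exactly zero because the derivative filters are only approximations of the ideal operators, which is precisely the issue taken up in section 4.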
Additional constraints, described in the next subsections, are needed to determine the optical flow v.

2.2 Local constraints

In order to cope with the aperture problem, additional BCCEs in a local neighborhood V ⊂ A around the estimated optical flow vector can be considered [11]. This provides, if the structures in space-time change while the optical flow stays constant within V, additional constraints determining the optical flow v uniquely. In order to cope also with those (more realistic) situations where the optical flow varies within V, we allow small variations and compute the optical flow by a weighted least squares fit with weighting function w(x), minimizing

\int_V w\, (s_x v_1 + s_y v_2 + s_t)^2\, dx. \qquad (4)

The partial derivatives with respect to the two motion components v_1, v_2 are both zero at the minimum, leading to the matrix equation

\begin{pmatrix} \langle w, s_x^2 \rangle & \langle w, s_x s_y \rangle \\ \langle w, s_y s_x \rangle & \langle w, s_y^2 \rangle \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = -\begin{pmatrix} \langle w, s_x s_t \rangle \\ \langle w, s_y s_t \rangle \end{pmatrix}, \qquad (5)

where the brackets \langle w, \cdot \rangle indicate the weighted average and the matrix on the left hand side is denoted by A.
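The estimator of Eq. (5) can be sketched in a few lines; derivative filters, window size and the Gaussian weighting below are illustrative assumptions rather than choices made by the report.

```python
import numpy as np

def local_flow(frames, y, x, half=3):
    """Weighted least squares flow at pixel (y, x), eq. (5).
    frames: (T, H, W) sequence; derivatives by central differences."""
    sx = np.gradient(frames, axis=2)
    sy = np.gradient(frames, axis=1)
    st = np.gradient(frames, axis=0)
    t = frames.shape[0] // 2
    sl = np.s_[t, y - half:y + half + 1, x - half:x + half + 1]
    gx, gy, gt = sx[sl].ravel(), sy[sl].ravel(), st[sl].ravel()
    # Gaussian weighting function w(x) over the neighborhood V
    g1 = np.exp(-0.5 * ((np.arange(2 * half + 1) - half) / half) ** 2)
    w = np.outer(g1, g1).ravel()
    A = np.array([[np.sum(w * gx * gx), np.sum(w * gx * gy)],
                  [np.sum(w * gy * gx), np.sum(w * gy * gy)]])
    b = -np.array([np.sum(w * gx * gt), np.sum(w * gy * gt)])
    return np.linalg.solve(A, b)          # (v1, v2)
```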

This equation can be solved provided the matrix A is non-singular. In regions where the gradient of the signal vanishes, the matrix becomes singular and the flow vector cannot be calculated. The same holds if all gradient vectors within V point in the same direction, meaning that the aperture problem persists in this area. In this case a larger neighborhood could be chosen, hoping that the gradients change while the optical flow stays nearly constant. There exists, so far, no reliable method for determining the optimum averaging area V; this is also denoted as the general aperture problem.

Starting from equation (3), a tensor representation of the 3D space-time image structure can be deduced [3] when considering motion estimation as an orientation estimation problem. As in the case of the local least squares estimator, the assumption of a constant direction of motion is not realistic. Thus, an average over the local region is performed, leading to a total least squares estimate of the direction of motion r:

\hat r = \arg\min_{\|r\|=1} \int_V w\, (g^T r)^2\, dx, \qquad (6)

where w(x) is a weighting function selecting the size of the averaging area. Formula (6) can be written in the following form, reducing the minimization problem to an eigenvalue problem:

r^T C_g\, r \to \min \quad \text{with} \quad C_g := \int_V w(x)\, g g^T\, dx. \qquad (7)

The estimated direction of motion \hat r is the eigenvector corresponding to the minimum eigenvalue. The structure tensor C_g is symmetric, implying that the eigenvalues are real and the eigenvectors are orthogonal.

2.3 Confidence measures

In order to quantify how reliable the estimate might be, and to characterize the contamination with noise, different coherence measures have been developed [8]. The total coherence measure, defined as

c_t = \left( \frac{\lambda_1 - \lambda_3}{\lambda_1 + \lambda_3} \right)^2, \qquad (8)

where \lambda_1 and \lambda_3 denote the greatest and the smallest eigenvalue, is zero for constant brightness and for isotropically distributed structures in the space-time volume. It reaches 1 for the aperture problem and for constant motion.
For increasing noise level, c_t decreases independently of the type of motion. The spatial coherence measure, defined as

c_s = \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \right)^2, \qquad (9)

where \lambda_1 and \lambda_2 denote the greatest and the second greatest eigenvalue, reaches 1 under the aperture problem, i.e. when only one spatial orientation is present; thus it indicates the aperture problem. It is zero for constant brightness and for isotropically distributed spatial structures. For increasing noise level, c_s likewise decreases independently of the type of motion.
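The structure tensor of Eq. (7), the minimum-eigenvalue estimate of the direction of motion, and the coherence measures (8) and (9) can be sketched as follows; the gradient filters and the Gaussian weighting function are again illustrative choices.

```python
import numpy as np

def structure_tensor(volume, sigma=2.0):
    """C_g of eq. (7) for a (T, H, W) space-time block, with a
    Gaussian weight w(x) centered in the block."""
    g = np.stack([np.gradient(volume, axis=a) for a in (2, 1, 0)])  # (s_x, s_y, s_t)
    grids = np.meshgrid(*(np.arange(n) - n // 2 for n in volume.shape),
                        indexing="ij")
    w = np.exp(-sum(c ** 2 for c in grids) / (2 * sigma ** 2))
    return np.einsum("ithw,jthw,thw->ij", g, g, w)      # sum of w * g g^T

def flow_and_coherence(volume):
    """Direction of motion from the minimum-eigenvalue eigenvector,
    optical flow v = (r1/r3, r2/r3), and c_t (eq. 8), c_s (eq. 9)."""
    lam, vec = np.linalg.eigh(structure_tensor(volume))  # ascending order
    r = vec[:, 0]                                        # min-eigenvalue vector
    l3, l2, l1 = lam                                     # l1 is the largest
    c_t = ((l1 - l3) / (l1 + l3)) ** 2
    c_s = ((l1 - l2) / (l1 + l2)) ** 2
    return (r[0] / r[2], r[1] / r[2]), c_t, c_s
```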

3 Generalization of the differential based motion estimation

There exist several approaches concerning the generalization of the BCCE towards more sophisticated signal models, e.g. considering an exponential decay of the brightness over time. We also generalize the classical BCCE, but with respect to the first order directional derivative. We show that the first order directional derivative filter is only one example from a huge class of filters, denoted as directional filters, which relate the observable data to the optical flow.

Let us first consider higher order derivatives. Assuming brightness constancy along the direction of motion, all higher order directional derivatives vanish as well:

\frac{\partial s}{\partial r} \overset{!}{=} 0, \quad \frac{\partial^2 s}{\partial r^2} \overset{!}{=} 0, \quad \dots \qquad (10)

Thus, the first order derivative filter in the BCCE can be exchanged for any other directional derivative filter of arbitrary order. A less stringent constraint comprising all the constraints above in a linear relation is

\alpha_1 \frac{\partial s}{\partial r} + \alpha_2 \frac{\partial^2 s}{\partial r^2} + \alpha_3 \frac{\partial^3 s}{\partial r^3} + \dots \overset{!}{=} 0. \qquad (11)

This is nothing but a generator for a rich class of filters parameterized by a direction vector r. Are there even more filters relating the observed data to the optical flow, and if so, what are their properties? In order to answer this question it is useful to switch over to the Fourier domain. Let x̌ := (x_1, x_2) denote the spatial coordinates in the space-time volume and let s(x̌, t) represent an image of patterns moving with constant velocity v. The spatio-temporal structure can be described by

s(x̌, t) = s(x̌ - vt). \qquad (12)

Let S(f) denote the spatial Fourier transform of the pattern, let f and ω be the coordinates corresponding to the spatial and temporal coordinates, respectively, and let δ denote the delta distribution. Then the Fourier transform of s(x̌, t) yields

S(f, \omega) = S(f)\, \delta(f^T v - \omega). \qquad (13)

This means that patterns moving with constant velocity condense to a plane in Fourier space, defined by the argument of the delta distribution:
\omega(f, v) = f^T v. \qquad (14)

Since the transfer function H_r(f, ω) is multiplied with the Fourier transform S(f, ω) of the signal, H_r has to be zero on the plane defined by Eq. (14). Its shape outside the plane can be chosen freely, as long as it is not zero everywhere. If the impulse response h_r(x) shall be real-valued, the corresponding transfer function H_r has to be either real and symmetric, H_r(f) = H_r(-f), H_r ∈ ℝ, or imaginary and antisymmetric, H_r(f) = -H_r(-f), H_r ∈ jℝ, or a combination thereof [7].
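The concentration of a constantly moving pattern on the plane (14) is easy to verify with a discrete Fourier transform. The following sketch uses one spatial dimension plus time for brevity; all parameters are demo choices, and with NumPy's forward-transform sign convention the plane appears as ω = -f v.

```python
import numpy as np

def dominant_frequency(v=3.0, k=5, N=64, T=64):
    """2D FFT of s(x, t) = sin(2 pi k (x - v t) / N); returns the
    (spatial, temporal) frequency of the strongest component."""
    t, x = np.meshgrid(np.arange(T), np.arange(N), indexing="ij")
    S = np.fft.fft2(np.sin(2 * np.pi * k * (x - v * t) / N))
    m, n = np.unravel_index(np.argmax(np.abs(S)), S.shape)
    return np.fft.fftfreq(N)[n], np.fft.fftfreq(T)[m]   # (f, omega)
```

For integer k v the peak lies exactly on the DFT grid, so the relation between the spatial and temporal frequency of the peak holds to machine precision.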

4 Derivative operators of discrete signals

Since derivatives are only defined for continuous signals, an interpolation of the discrete signal s(x_n), n ∈ {1, 2, ..., N}, x_n ∈ ℝ³, to the assumed underlying continuous signal s(x) has to be performed [15]. With c(x) denoting the continuous interpolation kernel,

\frac{\partial s(x)}{\partial r}\bigg|_{x_n} = \frac{\partial}{\partial r} \sum_j s(x_j)\, c(x - x_j) \bigg|_{x_n} = \sum_j s(x_j)\, \underbrace{\frac{\partial c(x - x_j)}{\partial r}\bigg|_{x_n}}_{d_r(x_n - x_j)}. \qquad (15)

The right hand side of Eq. (15) is the convolution of the discrete signal s(x_n) with the sampled derivative of the interpolation kernel, d_r(x_n), the impulse response of the derivative filter:

\frac{\partial s}{\partial r}(x_n) = s(x_n) * d_r(x_n). \qquad (16)

Since an ideal discrete derivative filter d_r(x_n) has an infinite number of coefficients [10], an approximation \hat d_r(x_n) has to be found.

4.1 Optimal combination of different directional filters

In this section we abstract from linear combinations of derivative filters and consider filter kernels h(x_n) expandable in terms of a finite number N of basis functions {b_j(x_n)} of some function space:

h(x_n) = \sum_{j=1}^{N} \alpha_j b_j(x_n). \qquad (17)

We derive a method for adapting the coefficients in Eq. (17) such that the filter response of h(x_n) is as close as possible to the filter response g of an ideal filter d(x_n) applied to a noise-free signal s(x_n):

g = d(x_n) * s(x_n). \qquad (18)

The observable signal z(x_n) is modeled as the sum of the unobservable ideal signal s(x_n) and a noise term v(x_n):

z(x_n) = s(x_n) + v(x_n). \qquad (19)

Since we are dealing with linear filtering of discrete signals, we can express the filtering process as a vector/matrix multiplication by stacking the elements of the corresponding block in the space-time volume one upon another.
The filtered pixel ĝ (or the respective voxel in space-time) then results from the scalar product of the filter h with the vectorized signal z:

\hat g = h^T z = h^T s + h^T v. \qquad (20)

We now choose the coefficients \{\alpha_j\}_{j=1}^{N} in Eq. (17) such that the filter output ĝ is as close as possible to the filter response g of the ideal filter d applied to the pure, noise-free signal s. In

order to measure this closeness, we define the error between ĝ and g as their difference and determine the coefficients \{\alpha_j\}_{j=1}^{N} by minimizing the mean squared error:

E\left[(\hat g - g)^2\right] \to \min. \qquad (21)

In order to compute the expectation value we have to model the statistical properties of the signal and the noise processes. Let the noise vector v ∈ ℝ^N be a zero-mean random vector with covariance matrix C_v (which in this case equals its correlation matrix R_v):

E[v] = 0 \quad \text{and} \quad E[v v^T] = R_v.

Furthermore, we assume that the process generating the signal s ∈ ℝ^N can be described by the expectation value m_s of the signal vector and its autocorrelation matrix R_s:

E[s] = m_s \quad \text{and} \quad E[s s^T] = R_s.

All these statistical moments can be measured from actual image data, although care must be taken that the correlation matrices R_s and R_v are positive definite. Our last assumption is that noise and signal are uncorrelated:

E[v s^T] = 0. \qquad (22)

Knowing these first and second order statistical moments for both the noise and the signal allows the derivation of the optimum filter kernel coefficients \{\alpha_j\}_{j=1}^{N}. Applying the signal model (m_s, R_s) and the noise model to the output ĝ of the filter kernel to be optimized and to the output g of the ideal filter kernel, we obtain

E[g] = d^T m_s \quad \text{and} \quad E[\hat g] = h^T m_s

for the expectation values (first order moments), and

E[g^2] = E[(d^T s)(d^T s)^T] = d^T E[s s^T]\, d = d^T R_s d, \qquad (23)

E[\hat g^2] = (\alpha_1 b_1 + \alpha_2 b_2 + \alpha_3 b_3)^T (R_s + R_v)(\alpha_1 b_1 + \alpha_2 b_2 + \alpha_3 b_3), \qquad (24)

E[g \hat g] = (\alpha_1 b_1 + \alpha_2 b_2 + \alpha_3 b_3)^T R_s d \qquad (25)

for the second order statistical moments¹ (written here for three basis functions). The coefficients are then determined by minimizing the mean squared error (Eq. (21)), leading to the matrix equation

\begin{pmatrix} b_1^T R b_1 & b_1^T R b_2 & b_1^T R b_3 \\ b_2^T R b_1 & b_2^T R b_2 & b_2^T R b_3 \\ b_3^T R b_1 & b_3^T R b_2 & b_3^T R b_3 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} b_1^T R_s d \\ b_2^T R_s d \\ b_3^T R_s d \end{pmatrix} \qquad (26)

with R = R_s + R_v. That the inverse of the matrix exists can easily be deduced by considering the quadratic form

a^T M a = \sum_{ij} \alpha_i \alpha_j m_{ij}, \qquad (27)

where m_{ij} denotes the components of the matrix M.

¹ Note: the second order moments are no variances, because neither g nor ĝ are zero-mean random vectors!
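Solving the linear system (26) numerically is straightforward; a minimal sketch, in which the basis vectors, the ideal filter and the correlation matrices are illustrative assumptions:

```python
import numpy as np

def optimal_filter_coeffs(B, d, R_s, R_v):
    """Solve eq. (26) for the coefficients alpha of h = sum_j alpha_j b_j.

    B   : (N, M) matrix whose columns are the basis filters b_j
    d   : (N,)   ideal filter
    R_s : (N, N) signal autocorrelation, R_v : (N, N) noise correlation
    """
    R = R_s + R_v
    M = B.T @ R @ B          # entries b_i^T R b_j
    rhs = B.T @ R_s @ d      # entries b_i^T R_s d
    return np.linalg.solve(M, rhs)
```

As a sanity check, in the noise-free case (R_v = 0) with d lying in the span of the basis, the optimal kernel reproduces d exactly.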

The components m_{ij} can be expressed by bilinear forms, m_{ij} = b_i^T R b_j = \sum_{kl} b_{ik} b_{jl} r_{kl}, which again leads to a quadratic form

a^T M a = \sum_{ijkl} \alpha_i \alpha_j b_{ik} b_{jl} r_{kl} = \sum_{kl} \underbrace{\Big(\sum_i \alpha_i b_{ik}\Big)}_{\gamma_k} \underbrace{\Big(\sum_j \alpha_j b_{jl}\Big)}_{\gamma_l} r_{kl} = \sum_{kl} \gamma_k \gamma_l r_{kl} > 0, \qquad (28)

which is positive because of the positive definiteness of the correlation matrix. Thus, M is invertible.

5 Motion estimation using directional filters

As in the case of the classical BCCE, the directional filter applied in the direction of motion r at a certain point x_0 in A,

h_r * s = 0, \qquad (29)

does not relate the optical flow uniquely to the signal. In order to cope with the aperture problem, Eq. (29) is averaged over a local neighborhood in the space-time volume. In order to consider the more realistic situation of a (slightly) varying optical flow in a local neighborhood V around x_0, the optical flow is estimated by minimizing the local energy Q(r). The averaging function w(x) is used to give more weight to those points near x_0:

Q(r) = \int_V w\, (h_r * s)^2\, dx \to \min. \qquad (30)

This equation can be brought into the same form as the classical structure tensor, as shown in the section on motion estimation with steerable filters.

6 Steerable filters

In order to find the minimum of the local energy Q(r) in Eq. (30), the filter kernel h_r(x) must be steerable to any direction of the space-time volume. Thus, h_r(x) has to be designed as a steerable function

h_r(x) = \sum_{j=1}^{M} a_j(r)\, b_j(x), \qquad (31)

allowing the filter response h_r(x) * s(x) to be steered in any direction by an appropriate linear combination of the convolutions of the signal s(x) with the basis functions b_j(x), j ∈ {1, 2, ..., M}:

h_r(x) * s(x) = \sum_{j=1}^{M} a_j(r)\, (b_j(x) * s(x)). \qquad (32)

The basis functions b_j(x) are independent of the direction r, whereas only the coefficients a_j(r) of the linear combination, denoted as the interpolation functions, depend on the direction r of the filter kernel. Questions arising with steerable functions are:

- Under which conditions can the function h_r(x) be steered?
- How can the basis functions b_j(x) be computed?
- How many basis functions are needed to steer the function h_r(x)?
- How can the interpolation functions be determined?

In the last decade, several steerable filter approaches have been developed trying to answer these questions, but all of them tackle only a special case, either for the filter kernel or for the corresponding transformation groups. In the next sections we present a full classification of a wide class of functions for all kinds of Lie group transformations.

6.1 Steerable Filters in 2D

Since steerable filters were originally developed for orientation estimation in images, most approaches are adjusted to 2D functions.

6.1.1 Steerable functions for the rotation group

According to Freeman's original definition [6], a function h(x): ℝ² → ℝ is denoted as steerable if its version h^α(x) rotated by the angle α can be expressed by a linear combination of M basis functions h^{α_j}(x):

h^{\alpha}(x) = \sum_{j=1}^{M} a_j(\alpha)\, h^{\alpha_j}(x). \qquad (33)

The basis functions are rotated versions of the original non-rotated function h(x). In principle, the directions of the basis functions can be distributed arbitrarily over the range [0, 2π), but an equal distribution leads to simpler interpolation functions a_j(α). It is well known that a transformation from Cartesian to polar coordinates, r = \sqrt{x^2 + y^2}, φ = arctan(y/x), transforms a 2D rotation R(α) into a simple shift operation in the φ coordinate: R(α) f(r, φ) = f(r, φ + α). A function is steerable if it can be expanded in a Fourier basis of finite length in φ:

h(r, \varphi) = \sum_{n=-N}^{N} c_n(r)\, e^{jn\varphi}. \qquad (34)

The rotated version then reads

h(r, \varphi + \theta) = \sum_{n=-N}^{N} c_n(r)\, e^{jn(\varphi + \theta)} = \sum_{n=-N}^{N} e^{jn\theta}\, c_n(r)\, e^{jn\varphi}. \qquad (35)

If we set a_n(θ) = e^{jnθ} and b_n(r, φ) := c_n(r) e^{jnφ}, Eq. (35) has the form of Eq. (31), thus fulfilling the general requirement of a steerable function.
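Eq. (35) translates directly into a discrete steering procedure: sample the angular profile, multiply the Fourier coefficients c_n by e^{jnθ}, and transform back. A small sketch using the FFT (exact for band-limited profiles; the sampling density is an arbitrary choice):

```python
import numpy as np

def steer_angular(samples, theta):
    """Steer a 2*pi-periodic angular profile h(phi) by theta, eq. (35):
    multiply each Fourier coefficient c_n by exp(1j * n * theta)."""
    M = len(samples)
    c = np.fft.fft(samples)                  # Fourier coefficients
    n = np.fft.fftfreq(M, d=1.0 / M)         # signed harmonic numbers
    return np.real(np.fft.ifft(c * np.exp(1j * n * theta)))
```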
The desired form of Eq. (33) can be obtained by a change of basis from b_n(r, φ) to rotated versions of the filter kernel.

The corresponding interpolation functions of the new basis can be derived by inserting Eq. (35) into Eq. (33), leading to a matrix equation for the interpolation functions {a_j(θ)}:

\begin{pmatrix} 1 \\ e^{j\theta} \\ \vdots \\ e^{jN\theta} \end{pmatrix} = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ e^{j\theta_1} & e^{j\theta_2} & \cdots & e^{j\theta_M} \\ \vdots & & & \vdots \\ e^{jN\theta_1} & e^{jN\theta_2} & \cdots & e^{jN\theta_M} \end{pmatrix} \begin{pmatrix} a_1(\theta) \\ a_2(\theta) \\ \vdots \\ a_M(\theta) \end{pmatrix} \qquad (36)

(the rows for negative n are the complex conjugates of those shown). For any n with c_n(r) = 0, the corresponding row of the left side and of the matrix on the right hand side can be omitted. The minimum number of basis functions equals the number of non-zero Fourier coefficients, considering that expansion (35) is already steerable with the basis functions c_n(r) e^{jnφ} and the interpolation functions e^{jnθ}. In order to obtain the form of Eq. (33), only a change of basis has to be applied, which does not change the number of required basis functions.

The following recipe for designing a steerable filter can thus be formulated. First check whether the filter kernel is expandable into a finite length Fourier series in the polar coordinate φ. If this requirement is fulfilled, the interpolation functions can be computed according to Eq. (36), and the basis functions are rotated versions of the filter kernel. The following example, illustrating this recipe, considers the first order derivative of a 2D Gaussian function g(x, y) = e^{-(x^2+y^2)} in the x direction:

g_1^0(x, y) = \frac{\partial g(x, y)}{\partial x} = -2x\, e^{-(x^2+y^2)}. \qquad (37)

Expressing g_1^0(x, y) in polar coordinates and expanding it with respect to φ yields

g_1^0(r, \varphi) = -2r\cos(\varphi)\, e^{-r^2} = -r\, e^{-r^2}\, (e^{j\varphi} + e^{-j\varphi}). \qquad (38)

Thus, two basis functions are required to steer g_1^0(x, y) in any direction.
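Anticipating the interpolation functions derived next in Eqs. (39) to (41), a_1(θ) = cos θ and a_2(θ) = sin θ for the basis orientations 0° and 90°, the two-basis steering of the Gaussian derivative can be verified numerically on a sampled grid (grid extent and spacing are arbitrary demo choices):

```python
import numpy as np

# Sampled basis filters: x- and y-derivatives of g(x, y) = exp(-(x^2 + y^2))
y, x = np.mgrid[-3:3.01:0.25, -3:3.01:0.25]
g = np.exp(-(x ** 2 + y ** 2))
g_0 = -2 * x * g        # g_1^0,  basis orientation theta_1 = 0
g_90 = -2 * y * g       # g_1^90, basis orientation theta_2 = 90 degrees

def steered(theta):
    """eq. (41): g_1^theta = cos(theta) g_1^0 + sin(theta) g_1^90."""
    return np.cos(theta) * g_0 + np.sin(theta) * g_90
```

The steered kernel agrees with the kernel evaluated on rotated coordinates, which is the defining property of Eq. (33).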
The matrix equation (36) becomes in this case

e^{j\theta} = \begin{pmatrix} e^{j\theta_1} & e^{j\theta_2} \end{pmatrix} \begin{pmatrix} a_1(\theta) \\ a_2(\theta) \end{pmatrix} \qquad (39)

(only the row for n = 1 is needed; the row for n = -1 is its complex conjugate). If we choose an equidistant distribution of the basis functions, θ_1 = 0° and θ_2 = 90°, and use Euler's formula e^{jθ} = cos(θ) + j sin(θ), the interpolation functions are

a_1(\theta) = \cos(\theta), \quad a_2(\theta) = \sin(\theta), \qquad (40)

and the derivative of g in an arbitrary direction θ can be expressed as

g_1^{\theta}(x, y) = \cos(\theta)\, g_1^0(x, y) + \sin(\theta)\, g_1^{90}(x, y). \qquad (41)

6.1.2 Steerability based on Lie group theory

Since rotation is not the only interesting transformation in image processing tasks, the steerability concept has been extended to other transformation groups like scaling and translation [4]. Lie group theory [9, 13] provides a formal justification of general steerable approaches and delivers a deeper understanding of Freeman's Fourier decomposition approach; e.g., it answers the question why the exponential basis in Eq. (35) yields the minimum required number of basis functions.

In the following text we understand by steerability not only rotation but any Lie group transformation. First, for simplicity, let us consider the case of one parameter Lie groups. An important concept of Lie group theory is the generator L, defined by the first order Taylor expansion of the transformed function:

g(\tau) f(x) = f(\tilde x(\tau)) = f(x) + \frac{\partial f(\tilde x(\tau))}{\partial \tau}\bigg|_{\tau=0}\, \tau + O(\tau^2) = f(x) + L f(x)\, \tau + O(\tau^2). \qquad (42)

The generator contains all necessary information about the group. The full group transformation can be reconstructed by the full Taylor series:

g(\tau) f(x) = \sum_{n=0}^{\infty} \frac{(\tau L)^n}{n!} f(x). \qquad (43)

For a given Lie group, the basis functions b_j(x) are given by the eigenfunctions of the corresponding generator L, where λ_j denotes the corresponding eigenvalue:

L b_j(x) = \lambda_j b_j(x). \qquad (44)

Since steerable functions are expressed by a linear combination of the basis functions, it is enough, in order to prove this statement, to consider the action of the group transformation on one basis function:

g(\tau) b_j(x) = \sum_{n=0}^{\infty} \frac{\tau^n L^n}{n!} b_j(x) = \sum_{n=0}^{\infty} \frac{\tau^n \lambda_j^n}{n!} b_j(x) = \exp(\tau \lambda_j)\, b_j(x). \qquad (45)

It appears that the interpolation functions a_j(τ) := exp(τ λ_j) are the exponential map of the eigenvalues of the corresponding eigenfunctions b_j(x). A function h(x) expandable in terms of a finite number M of basis functions b_j(x),

h(x) = \sum_{j=1}^{M} c_j b_j(x), \qquad (46)

needs exactly the same number M of basis functions to be steered to any direction τ:

g(\tau) h(x) = \sum_{j=1}^{M} \exp(\lambda_j \tau)\, c_j b_j(x). \qquad (47)

Since each transformed basis function is proportional to the original non-transformed basis function, each basis function is also a basis of an irreducible representation of the corresponding group transformation. It is well known that every one parameter Lie group is isomorphic to the shift group after an appropriate transformation of the coordinates and parameters of the Lie group.
Thus, knowing the basis functions and interpolation functions of the shift group, together with the corresponding transformation from any one parameter Lie group to the shift group, is enough to steer any function under any one parameter Lie group. The generator of the shift group follows from

g(\tau) f(x) = f(x - \tau) = f(x) - \frac{\partial f(x)}{\partial x}\, \tau + O(\tau^2), \qquad (48)

yielding L = -\frac{\partial}{\partial x}.
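The reconstruction of the full group action from the generator (Eq. (43)) can be demonstrated for the shift group. A sketch for f = sin, whose derivatives are known in closed form, sin^{(n)}(x) = sin(x + n π/2), so the series can be summed without numerical differentiation:

```python
import math
import numpy as np

def shift_via_generator(x, tau, terms=25):
    """Reconstruct g(tau) f(x) = f(x - tau) from the generator
    L = -d/dx through the exponential series of eq. (43), for f = sin."""
    total = np.zeros_like(x, dtype=float)
    for n in range(terms):
        # (tau L)^n / n! applied to sin: (-tau)^n / n! * sin^{(n)}(x)
        total += (-tau) ** n / math.factorial(n) * np.sin(x + n * np.pi / 2)
    return total
```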

In this case the eigenvalue problem becomes an ordinary first order differential equation,

\frac{\partial}{\partial x} f_n(x) = \lambda_n f_n(x), \qquad (49)

with the solutions²

f_n(x) = e^{j\lambda_n x}. \qquad (50)

The comparison with Eq. (35) reveals that these are exactly the basis functions Freeman proposes for the rotation in 2D. The underlying reason is that the basis functions generated by the eigenfunctions of the generator form an irreducible basis for a representation of the group. Another basis of the function space would lead to a larger number of basis functions of the steerable function. In order to demonstrate this, let us consider the following function expanded in a polynomial basis: f(x, y) = x^2 + y^2. Rotating f(x, y) to any direction parameterized by θ needs three polynomial terms:

f^{\theta}(x, y) = (\alpha x + \beta y)^2 + (\gamma x + \epsilon y)^2 = (\alpha^2 + \gamma^2) x^2 + (\beta^2 + \epsilon^2) y^2 + 2(\alpha\beta + \gamma\epsilon)\, xy. \qquad (51)

The expansion in an exponential basis requires only one basis function: f(r, φ) = r^2 e^{j0φ}.

For multi-parameter Abelian Lie groups {g(a) | a ∈ ℝ^k}, simultaneous eigenfunctions of the different commuting generators can be found. Holding all parameters constant except one, a_m, defines a one parameter subgroup and the corresponding generator as in Eq. (42):

L_m = \frac{\partial f(\tilde x(a_m))}{\partial a_m}\bigg|_{a_m = 0}. \qquad (52)

There exists, as in the case of one parameter Lie groups, a one to one correspondence between the tangent space G := {a_1 L_1 + a_2 L_2 + ... + a_k L_k | a_1, a_2, ..., a_k ∈ ℝ} and the group elements via the exponential map

g(a) = \exp\Big(\sum_{j=1}^{k} a_j L_j\Big). \qquad (53)

The commutation of the group elements is equivalent to the commutation of the corresponding generators³:

[g(a_i), g(a_j)] = 0 \iff [L_i, L_j] = 0. \qquad (54)

In order to illustrate the construction of simultaneous eigenfunctions, let us consider two commuting generators L_1, L_2, and let ψ be an eigenfunction of L_1 with corresponding eigenvalue γ:

L_1 \psi = \gamma \psi. \qquad (55)
Since form a complete basis for all square integrable functions on a compact interval we restrict ourselves to the solutions in Eq. (50). 3 The Lie bracket [A, B] of two operators is defined as: [A, B] = AB BA

Since both generators commute, L_2 ψ is again an eigenfunction of L_1 with the same eigenvalue:

L_1 L_2 \psi = L_2 L_1 \psi = L_2 \gamma \psi = \gamma L_2 \psi. \qquad (56)

If ψ is the only eigenfunction belonging to the eigenvalue γ, then L_2 ψ has to be proportional to ψ,

L_2 \psi = \kappa \psi, \qquad (57)

and thus ψ is also an eigenfunction of L_2. If the eigenvalue is degenerate, a linear combination of the eigenfunctions spanning the corresponding eigenspace can be found that is simultaneously an eigenfunction of L_2. The case of more than two commuting generators is straightforward.

A quite different steerable approach based on group theory has been developed by Hel-Or and Teo [9]. The approach is twofold. First, a test has been developed to check whether a given basis spans an invariant space of an Abelian Lie group; if so, the corresponding interpolation functions can be derived automatically. Secondly, the approach automatically delivers a basis for a given function, even for non-Abelian Lie groups. However, this basis is not necessarily the smallest required one, and the approach does not contain an algorithm for computing the corresponding interpolation functions in the non-Abelian case.

If a set of basis functions b_j(x) (of the steerable filter) forms a basis of a representation of the group, the space Φ = span(b_j(x)) spanned by these basis functions is invariant under the group action:

g(a)\, \Phi = A(a)\, \Phi, \qquad (58)

where A(a) is a square matrix. Since Lie group actions can be expressed by power series of generators (see Eq. (43)), Φ also has to be invariant under the corresponding generators⁴:

L_i \Phi = B_i \Phi, \qquad (59)

where B_i is again a square matrix and the index i labels the corresponding parameter.
The relation between A(a) and {B_1, B_2, ..., B_k} is given by the exponential map:

A(a) = e^{a_k B_k}\, e^{a_{k-1} B_{k-1}} \cdots e^{a_1 B_1}. \qquad (60)

From this observation Hel-Or and Teo derived the following recipe to compute the interpolation functions:

- Derive the generators L_1, L_2, ..., L_k of the given Lie group.
- Verify for each generator that

  L_i \Phi = B_i \Phi, \qquad (61)

  where B_i is some n × n matrix.

⁴ Hel-Or and Teo use the conjugate operator, which arises from the assumption of a transformed signal. The transformation is then shifted towards the filter operation via conjugation in the scalar product of the measuring process. For details see [9].

If so, then the basis functions span an invariant space, and the interpolation functions can be read off from the rows of the interpolation matrix

A(a) = e^{a_k B_k}\, e^{a_{k-1} B_{k-1}} \cdots e^{a_1 B_1}. \qquad (62)

The next example, rotation in 2D, shows the relation to the original steerability approach of Freeman. Let Λ be a vector whose components are the partial derivatives of a 2D Gaussian function in the x and y direction:

\Lambda(x, y) = \begin{pmatrix} \frac{\partial}{\partial x} e^{-(x^2+y^2)/2} \\ \frac{\partial}{\partial y} e^{-(x^2+y^2)/2} \end{pmatrix} = \begin{pmatrix} -x\, e^{-(x^2+y^2)/2} \\ -y\, e^{-(x^2+y^2)/2} \end{pmatrix} = -\begin{pmatrix} r\cos(\varphi)\, e^{-r^2/2} \\ r\sin(\varphi)\, e^{-r^2/2} \end{pmatrix}. \qquad (63)

Applying the generator L_\varphi = \frac{\partial}{\partial \varphi} to Λ(r, φ) yields

L_\varphi \Lambda(r, \varphi) = \begin{pmatrix} r\sin(\varphi)\, e^{-r^2/2} \\ -r\cos(\varphi)\, e^{-r^2/2} \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \Lambda(r, \varphi). \qquad (64)

The interpolation functions are given by the rows of the matrix A:

R(\tau)\, \Lambda(x, y) = e^{B\tau}\, \Lambda(x, y) = \begin{pmatrix} \cos(\tau) & -\sin(\tau) \\ \sin(\tau) & \cos(\tau) \end{pmatrix} \Lambda(x, y). \qquad (65)

Unfortunately, there is no such recipe for multi-parameter groups like the rotation in 3D, since this concept works only for Abelian parameter groups.

Group             | Operator                                                  | Generator                                   | Invariant space
Brightness change | g_b(a)s(x,y) = e^a s(x,y)                                 | L_b = I                                     | all functions
x-translation     | g_tx(a)s(x,y) = s(x + a, y)                               | ∂/∂x                                        | {ψ_p(y) x^p e^{αx}}, 0 ≤ p ≤ k
x-scaling         | g_sx(a)s(x,y) = s(e^a x, y)                               | x ∂/∂x                                      | {ψ_p(y) x^α (ln x)^p}, 0 ≤ p ≤ k
Rotation          | g_r(a)s(x,y) = s(x cos a - y sin a, x sin a + y cos a)    | x ∂/∂y - y ∂/∂x = ∂/∂φ                      | {ψ_p(r) φ^p e^{αφ}}, 0 ≤ p ≤ k
Uniform scaling   | g_s(a)s(x,y) = s(e^a x, e^a y)                            | x ∂/∂x + y ∂/∂y                             | {ψ_p(r) r^α (ln r)^p}, 0 ≤ p ≤ k

Table 1: Several examples of one parameter groups, their operators, generators and invariant spaces.

Furthermore, Hel-Or and Teo developed a technique called generator tree (or generator chain in the one parameter case) in order to determine the basis functions for a given function, which also works for non-Abelian Lie groups. As in the Abelian case there exists a correspondence between the elements of the tangent space and the group elements via an exponential map:

R(a_1, a_2, \ldots, a_k) = \exp(a_k L_k)\, \exp(a_{k-1} L_{k-1}) \cdots \exp(a_1 L_1) = \prod_{i=1}^{k} \sum_{l=0}^{\infty} \frac{a_i^l}{l!} L_i^l. \qquad (66)

But different orderings of the exponential maps lead to different parameterizations of the group, illustrated by the example of the Lie group of translation and scaling in one dimension. The corresponding generators are L_1 = \frac{\partial}{\partial x} for the translation and L_2 = x \frac{\partial}{\partial x} for the scaling.

18 6. Steerable filters in 3D 16 x x for the scaling and the commutator yields [L 1, L ] = L 1. Different orders of the exponential maps lead to e a 1L 1 e a L s(x) = s(e a x a 1 ) (67) e a L 1 e a 1L 1 s(x) = s(e a (x a 1 )) (68) Therefore it is not possible to apply Eq.(6) in this case but nonetheless a basis for an invariant space can be generated also in this case. Since the group elements are made up from power series of generators the application of a generator to a function is again an element of the invariant space. The idea of constructing invariant spaces to apply all possible combination of generators to the filter kernel until the resulting function is linear dependent to the ones already constructed by this procedure. It has been proven that further application of any generator to those functions only provides linear dependent functions. Thus we must only apply all permutations of the generators as long as the resulting functions are linear independent. The procedure is illustrated by the function f(x 1, x ) = sin(x 1 ) sin(x ) and the Abelian group of translation in x 1 and x direction. The generators of the group are L x1 = x 1 and L x = x. First we set b 1 = f and generate the other basis functions with L x1 b 1 = cos(x 1 ) sin(x ) = b, L x 1 b 1 = sin(x 1 ) sin(x ) = b 1 (69) L x b 1 = sin(x 1 ) cos(x ) = b 3, L x b 1 = sin(x 1 ) sin(x ) = b 1 (70) L x1 L x b 1 = cos(x 1 ) cos(x ) = b 4, L x1 L x f = cos(x 1 ) sin(x ) = b (71) L x L x 1 b 1 = cos(x 1 ) sin(x ) = b 3. (7) Thus four basis functions are needed to shift f in any direction. 6. Steerable filters in 3D In the following sections the current approaches of steerable three dimensional functions are presented. Whereas in D the direction of a filter is easily described by the angle between the direction and the x 1 axis in 3D, there are many different ways to parameterize a direction. 
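Before moving on, the 2D rotation example of Section 6.1 (Eqs. 63-65) can be checked numerically. The following is a minimal sketch (the function names are ours, not the report's): evaluating the Gaussian-derivative pair at the location rotated by τ equals multiplying the pair with the matrix e^{Bτ} of Eq. (65).

```python
import math

def grad_gauss(x, y):
    # The basis pair Lambda: x- and y-derivatives of a 2D Gaussian, Eq. (63)
    g = math.exp(-(x * x + y * y) / 2.0)
    return (-x * g, -y * g)

def rotated(x, y, tau):
    # Lambda(r, phi + tau): evaluate the pair at the point rotated by tau
    xr = math.cos(tau) * x - math.sin(tau) * y
    yr = math.sin(tau) * x + math.cos(tau) * y
    return grad_gauss(xr, yr)

def steered(x, y, tau):
    # Same result by steering with e^{B tau}, B = ((0, -1), (1, 0)), Eq. (65)
    b1, b2 = grad_gauss(x, y)
    return (math.cos(tau) * b1 - math.sin(tau) * b2,
            math.sin(tau) * b1 + math.cos(tau) * b2)
```

Both routes agree at every point and for every angle, which is exactly the content of the steering equation (65).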
Freeman uses the direction cosines between the direction of the filter kernel and the coordinate axes, whereas the other authors use spherical coordinates. The direction cosines are sufficient to determine every rotation in 3D and consequently every direction. Let e_1, e_2, e_3 and u_1, u_2, u_3 be two complete orthogonal systems of IR³. Then every vector r in IR³ can be expanded as:

r = ∑_{j=1}^{3} x_j e_j = ∑_{j=1}^{3} x̃_j u_j   (73)

The basis vectors u_1, u_2, u_3 can also be expanded in terms of e_1, e_2, e_3:

u_k = ∑_{j=1}^{3} d_kj e_j   (74)

Multiplying this equation with e_m yields:

e_m · u_k = ∑_{j=1}^{3} d_kj (e_m · e_j) = ∑_{j=1}^{3} d_kj δ_mj = d_km   (75)

Since e_i · u_j = cos(φ_ij), where φ_ij is the angle between e_i and u_j, the transformation of the coordinate system can fully be described by the direction cosines d_km = cos(φ_mk).

6.2.1 Axially Symmetric Steerable Filters

In Freeman's [6] 3D steerable approach the orientation of the filter kernel is parameterized by the direction cosines between the axis defining the direction and the principal axes, denoted α, β, γ. It covers filter kernels with an axis of rotational symmetry of the form

f(x; α, β, γ) = P_N(x; α, β, γ) W(r)   (76)

with an even or odd polynomial P_N of order N times an arbitrary spherically symmetric function W(r). A rotational symmetry axis ŵ means that an arbitrary rotation around this axis does not change the function:

R_ŵ(φ) f(x; θ) = f(x; θ)   (77)

The steering constraint becomes in this case

f(x; α, β, γ) = ∑_{i=1}^{M} a_i(α, β, γ) f(x; α_i, β_i, γ_i)   (78)

Inserting equation (76) into equation (78) leads, after some calculation, to the matrix equation which determines the interpolation functions:

( α^N ; α^{N-1}β ; α^{N-1}γ ; ... ; γ^N ) = ( α_1^N  α_2^N ... α_M^N ; α_1^{N-1}β_1  α_2^{N-1}β_2 ... α_M^{N-1}β_M ; α_1^{N-1}γ_1  α_2^{N-1}γ_2 ... α_M^{N-1}γ_M ; ... ; γ_1^N  γ_2^N ... γ_M^N ) ( a_1(α, β, γ) ; a_2(α, β, γ) ; a_3(α, β, γ) ; ... ; a_M(α, β, γ) )

The minimum number of basis functions (for the detailed derivation see [6]) is (N+1)(N+2)/2.

6.2.2 Steerability Based on Non-Orthogonal Basis Functions

Anderson (1993) [2] designed a steerable filter with non-orthogonal basis functions. This drawback is accepted in order to gain basis functions which are rotated versions of each other. Furthermore, he states that the interpolation functions become much simpler(5) than when using spherical harmonics as basis functions. Let G(r) be a spherically symmetric function and n̂_li the orientation of the i-th basis filter of order l.
Then, the basis functions of order l are defined by

b_li(x) = G(r) (n̂_li · x̂)^l   (79)

(5) Later it is shown that this is not true; both interpolation functions are composed of trigonometric functions of different order.

The minimum number of basis functions to steer a function composed of basis functions of order l is (l+1)(l+2)/2, since every even or odd polynomial of order l can be composed of (l+1)(l+2)/2 even or odd terms. The interpolation functions a_i(û) for the first-order basis functions b_1i(x) directed along the principal axes

n̂_10 = (1, 0, 0) ,  n̂_11 = (0, 1, 0) ,  n̂_12 = (0, 0, 1)

are equal to the coordinates of the unit vector in the steering direction. The basis functions steered in the directions of the principal axes are consequently

B_10(x) = G(r)(n̂_10 · x̂) = r^{-1} G(r) x_1   (80)
B_11(x) = G(r)(n̂_11 · x̂) = r^{-1} G(r) x_2   (81)
B_12(x) = G(r)(n̂_12 · x̂) = r^{-1} G(r) x_3   (82)

The first-order basis function steered in an arbitrary direction v̂ = (v_1, v_2, v_3) is expressed as

B_1(x) = G(r)(v̂ · x̂) = r^{-1} G(r) (v_1 x_1 + v_2 x_2 + v_3 x_3)   (83)

where x = (x_1, x_2, x_3) defines the signal vector. The interpolation functions a = (a_1, a_2, a_3) can then be deduced by expressing the basis function rotated into an arbitrary direction as a linear combination of the basis functions:

B_1(x) = a_1 B_10(x) + a_2 B_11(x) + a_3 B_12(x)   (84)

Thus, the interpolation functions are equal to the components of the direction vector v̂. The interpolation functions for higher-order basis functions can be calculated by the same procedure. The interpolation functions are trigonometric functions of order equal to the order of the basis function.

7 AN EXTENDED STEERABLE APPROACH BASED ON LIE GROUP THEORY

In the following section, we present our steerable filter approach based on Lie group theory, covering all recent approaches developed so far. It delivers, for Abelian Lie groups and for compact non-Abelian Lie groups, the minimum required number of basis functions and the corresponding interpolation functions. In order to complete the steerable approach, the case of non-Abelian, non-compact Lie groups has to be considered separately. After presenting our concept, its relation to recent approaches is discussed and some examples are presented.
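Anderson's first-order steering (Eqs. 80-84 above) can be verified directly, since the interpolation weights are just the components of v̂. A minimal sketch, with a Gaussian chosen for G(r) (our choice; any spherically symmetric G works, and the function names are ours):

```python
import math

AXES = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

def b(n, x):
    # Order-1 basis filter G(r) (n . x / r), Eq. (79) with l = 1;
    # G(r) = exp(-r^2) is an arbitrary spherically symmetric choice.
    r = math.sqrt(sum(c * c for c in x))
    return math.exp(-r * r) * sum(nc * xc for nc, xc in zip(n, x)) / r

def steer(v, x):
    # Eq. (84): the interpolation weights are the components of v
    return sum(vi * b(n, x) for vi, n in zip(v, AXES))
```

For any unit vector v̂, steer(v, x) reproduces b(v, x), the filter pointed directly along v̂.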
7.1 Conditions for Steerability

In the following we show the steerability of all filter kernels h : IR^N → C which are expandable in a finite number M of basis functions B = {b_j(x)} of a subspace

V := span{B} ⊂ L₂ of the square integrable functions. Since every element of L₂ can be approximated arbitrarily well by a finite number of basis functions, we consider, at least approximately, all square integrable filter kernels. With the notation ⟨·, ·⟩ for the inner product in L₂ and the Fourier coefficients c_j = ⟨h(x), b_j(x)⟩, the expansion of h(x) reads

h(x) = ∑_{j=1}^{M} c_j b_j(x) .   (85)

Furthermore, every basis function b_j(x) ∈ V shall belong to an invariant subspace U ⊆ V with respect to a Lie group G transformation. Then, h(x) is steerable with respect to G. We have arranged the preconditions such that this statement can easily be verified. Let D(g) denote the representation of g ∈ G in the function space V and D̃(g) the representation of G in the N-dimensional signal space. It is easy to verify that the transformed function D(g)h(x) equals the linear combination of the transformed basis functions

(D(g)h)(x) = h(D̃(g)^{-1} x)   (86)
           = ∑_{j=1}^{M} c_j b_j(D̃(g)^{-1} x)   (87)
           = ∑_{j=1}^{M} c_j D(g) b_j(x) .   (88)

Since every basis function b_j(x) is, per definition, part of an invariant subspace, the transformed version D(g)b_j(x) can be expressed by a linear combination of the subspace basis. Let m(j) denote the mapping of the index j of the basis function b_j(x) onto the lowest index of the basis functions belonging to the same subspace, and d(j) the mapping of the index of b_j(x) onto the dimension d_j of its invariant subspace. The transformed basis function D(g)b_j(x) can be expressed, with the previous definitions of m(j) and d(j) and the coefficients w_jk(g) of the linear combination, as

D(g) b_j(x) = ∑_{k=m(j)}^{m(j)+d(j)-1} w_jk(g) b_k(x) .   (89)

Inserting equation (89) into equation (88) yields

D(g) h(x) = ∑_{j=1}^{M} c_j ∑_{k=m(j)}^{m(j)+d(j)-1} w_jk(g) b_k(x) .
(90)

The double sum can be written such that all coefficients belonging to the same basis function are grouped together, where L denotes the number of invariant subspaces in V:

D(g) h(x) = ∑_{b_k ∈ U_1} b_k(x) ∑_{b_j ∈ U_1} c_j w_jk(g)
          + ∑_{b_k ∈ U_2} b_k(x) ∑_{b_j ∈ U_2} c_j w_jk(g) + ...
          + ∑_{b_k ∈ U_L} b_k(x) ∑_{b_j ∈ U_L} c_j w_jk(g) .   (91)

Thus, in order to steer the function h we have to consider all basis functions spanning the L subspaces.

7.2 The Basis Functions

The next question is how to obtain appropriate basis functions. We require the basis functions to span finite-dimensional invariant subspaces. Furthermore, the invariant subspaces should be as small as possible in order to lower the computational costs. Group theory provides the solution to this problem: the functions fulfilling these requirements are, per definition, the basis of an irreducible representation of the Lie group. This has already been pointed out by Michaelis and Sommer [13], and a method for generating such a basis for Abelian Lie groups has been proposed. We extend this method to the case of non-Abelian, compact Lie groups. The case of non-Abelian, non-compact groups is discussed in subsection 7.2.2.

7.2.1 Basis Functions for Compact Lie Groups

The invariant space spanned by an irreducible basis cannot be decomposed further into invariant subspaces and thus constitutes a minimum number of basis functions for the steerable function. Michaelis and Sommer showed that such a basis is given by the eigenfunctions of the generators in the case of Abelian Lie groups. Since the generators of a non-Abelian group do not commute and thus have no simultaneous eigenfunctions, the method does not work in this case any more. But their framework can be extended, with a slight change, to compact non-Abelian groups. Instead of constructing the basis functions from the simultaneous eigenfunctions of the generators of the group, the basis functions can also be constructed from the eigenfunctions of a Casimir operator C of the corresponding Lie group. In order to define the Casimir operator we first have to introduce the Lie bracket, or commutator, of two operators

[D(a), D(b)] = D(a)D(b) - D(b)D(a) .   (92)

Operators commuting with all representations of the group elements are denoted as Casimir operators

[C, D(g)] = 0   ∀ g ∈ G .
(93)

Let {b_m(x)}, m = 1, ..., d_α, denote the set of eigenfunctions of C corresponding to the same eigenvalue α. Then, every transformed basis function D(g)b_i(x) is an eigenfunction with the same eigenvalue α:

C D(g) b_i(x) = D(g) C b_i(x)   (94)
              = D(g) α b_i(x) = α D(g) b_i(x) .

Thus, {b_m(x)} forms a basis of a d_α-dimensional invariant subspace U_α. Any transformed element of this subspace can be expressed by a linear combination of the basis functions of this subspace

D(g) b_m(x) = ∑_{j=1}^{d_α} w_mj(g) b_j(x) .   (95)

Thus, we have found a method for constructing invariant subspaces also for non-Abelian groups. A Casimir operator is constructed as a linear combination of products of generators of the corresponding Lie group, where n denotes the number of generators:

C = ∑_{ij} f_ij L_i L_j ,   i, j = 1, ..., n .   (96)

The coefficients f_ij are determined by the constraints

[C, L_k] = 0 ,   k = 1, ..., n .   (97)

In the following we choose the Casimir operator to be Hermitian; thus its eigenfunctions constitute a complete orthogonal basis of the corresponding function space. If the considered group is the highest symmetry group of the Casimir operator, i.e. there exists no operation which does not belong to the group and under which the Casimir operator is invariant, then the eigenfunctions are basis functions of an irreducible representation [17]. After computing one eigenfunction b_1(x) corresponding to the eigenvalue α, we can construct all other basis functions of this invariant subspace by applying all possible combinations of generators of the Lie group to b_1(x). A sequence of generators is stopped when the resulting function is linearly dependent on the ones which have already been constructed. This equals the method for constructing the basis functions proposed by Teo and Hel-Or [9], except for the fact that they propose to apply this procedure directly to the steerable function h(x).

7.2.2 Basis Functions for Non-Compact Lie Groups

Since only Abelian Lie groups and compact non-Abelian Lie groups are proven to possess complete irreducible representations, i.e. the representation space decomposes into invariant subspaces, we have to treat the case of non-Abelian, non-compact groups separately. Since we only require an invariant subspace and not an entirely irreducible representation, we can easily construct such a space from a polynomial basis of the space of square integrable functions.
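The construction (96)-(97) can be made concrete for SO(3) acting on polynomials; polynomials are convenient here because they stay polynomials of the same order under the group action. The following sketch is our own illustration (the dictionary representation and all names are ours): it implements the real generators L_k = x_i ∂/∂x_j − x_j ∂/∂x_i exactly on polynomials and checks that C = ∑_k L_k² commutes with every generator. Note that with these real, anti-Hermitian generators the eigenvalues of C are −ℓ(ℓ+1); the Hermitian convention −j(x_i ∂/∂x_j − x_j ∂/∂x_i) gives +ℓ(ℓ+1).

```python
# Polynomials in (x1, x2, x3) as dicts {(i, j, k): coefficient}.
def ddx(p, axis):
    # exact partial derivative with respect to x_axis
    out = {}
    for exp, c in p.items():
        if exp[axis] > 0:
            e = list(exp); e[axis] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exp[axis]
    return out

def mulx(p, axis):
    # multiplication by the coordinate x_axis
    out = {}
    for exp, c in p.items():
        e = list(exp); e[axis] += 1
        out[tuple(e)] = out.get(tuple(e), 0) + c
    return out

def add(p, q, s=1):
    # p + s*q, dropping exact zero coefficients
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + s * c
        if out[e] == 0:
            del out[e]
    return out

def L(k, p):
    # generator L_k = x_i d/dx_j - x_j d/dx_i with (i, j) cyclic to k
    i, j = (k + 1) % 3, (k + 2) % 3
    return add(mulx(ddx(p, j), i), mulx(ddx(p, i), j), s=-1)

def casimir(p):
    # C = L_0^2 + L_1^2 + L_2^2, Eq. (96) with f_ij = delta_ij
    out = {}
    for k in range(3):
        out = add(out, L(k, L(k, p)))
    return out

def commutator_CL(k, p):
    # [C, L_k] applied to p; vanishes for every polynomial p, Eq. (97)
    return add(casimir(L(k, p)), L(k, casimir(p)), s=-1)
```

Applying commutator_CL to any polynomial returns the zero polynomial, and casimir reproduces the eigenvalue −ℓ(ℓ+1) on harmonics, e.g. C x_3 = −2 x_3 for ℓ = 1.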
The order of a polynomial term does not change under an arbitrary Lie group transformation, and thus the basis of a polynomial term constitutes a basis for a steerable filter. In order to steer an arbitrary polynomial we have to determine its terms of different order. The union of the basis functions of the corresponding invariant subspaces forms a basis for the steerable polynomial. We can now construct for every Lie group transformation a corresponding basis for a steerable filter. For Abelian groups and compact groups we choose the basis from the eigenfunctions of the Casimir operator, whereas for all other groups we choose a polynomial basis. The next section addresses the question of how to combine these basis functions in order to steer the resulting filter kernel with respect to any Lie group transformation.

7.3 The Interpolation Functions

The computation of the interpolation functions {a_j(g)} can already be deduced from Eq. (91). In order to obtain the interpolation function corresponding to the basis function

b_m(x), the transformed version of the original filter kernel h(x) has to be projected onto b_m(x):

a_m(g) = ⟨D(g)h(x), b_m(x)⟩   (98)
       = ⟨D(g) ∑_{n=1}^{M} c_n b_n(x), b_m(x)⟩
       = ∑_{n=1}^{M} c_n ⟨D(g)b_n(x), b_m(x)⟩ .

The relation between {c_k} and {a_k(g)} is thus a linear map P with the matrix elements

(P)_ij = ⟨D(g)b_i(x), b_j(x)⟩   (99)

mapping the coefficient vector onto the interpolation function vector

c := (c_1, c_2, ..., c_M)   (100)
a(g) := (a_1(g), a_2(g), ..., a_M(g)) .   (101)

As already pointed out by Michaelis and Sommer [13], the basis functions need not be transformed versions of the filter kernel, as assumed in other approaches [6, 16]. It is sufficient that the synthesized function is steerable. If it is nonetheless desired to design basis functions which are transformed versions h_{g_j}(x) := D(g_j)h(x) of the filter kernel h(x), a basis change is sufficient:

D(g)h(x) = ∑_{j=1}^{M} a_j(g) b_j(x) = ∑_{j=1}^{M} ã_j(g) h_{g_j}(x) .   (102)

The relation between {a_j(g)} and {ã_j(g)} can be found by projecting both sides of equation (102) onto b_m(x):

a_m(g) = ⟨∑_{j=1}^{M} ã_j(g) h_{g_j}(x), b_m(x)⟩ = ∑_{j=1}^{M} ã_j(g) ⟨h_{g_j}(x), b_m(x)⟩ =: ∑_{j=1}^{M} (B)_mj ã_j(g) .   (103)

This can be written as a matrix/vector operation with ã^T := (ã_1, ã_2, ..., ã_M) and a^T := (a_1, a_2, ..., a_M):

a = B ã .   (104)

The matrix B describing the basis change is invertible,

ã = B^{-1} a ,   (105)

and the steerable basis can be designed as steered versions of the original filter kernel.
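Equation (98) can be evaluated numerically. A minimal sketch with the rotation group SO(2) and the basis pair b_1 = e^{−r²} cos(φ), b_2 = e^{−r²} sin(φ) (normalized numerically, since (98) assumes an orthonormal basis; grid sizes and names are our choices): the interpolation functions of the n = 1 subspace computed by quadrature agree with the closed form a_1 = c_1 cos(θ) − c_2 sin(θ), a_2 = c_1 sin(θ) + c_2 cos(θ).

```python
import numpy as np

# polar grid; the area element for <f, g> = integral of f g dA is r dr dphi
r = np.linspace(1e-3, 5.0, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
R, PHI = np.meshgrid(r, phi)
dA = R * (r[1] - r[0]) * (phi[1] - phi[0])

def inner(f, g):
    return float(np.sum(f * g * dA))

# n = 1 invariant subspace, numerically orthonormalized
b1 = np.exp(-R**2) * np.cos(PHI)
b2 = np.exp(-R**2) * np.sin(PHI)
norm = np.sqrt(inner(b1, b1))
b1 /= norm
b2 /= norm

c1, c2 = 0.7, -1.3                  # kernel h = c1 b1 + c2 b2
theta = 0.5
# rotated kernel D(theta) h on the grid: replace phi by phi - theta
h_rot = (c1 * np.cos(PHI - theta) + c2 * np.sin(PHI - theta)) * np.exp(-R**2) / norm

a1 = inner(h_rot, b1)               # Eq. (98)
a2 = inner(h_rot, b2)
```

Because the uniform angular grid integrates low-order trigonometric products exactly and the radial quadrature error cancels against the numerical normalization, a1 and a2 match the closed form essentially to machine precision.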

7.4 Relation to Recent Approaches

We have presented a steerable filter approach for computing the basis functions and interpolation functions for arbitrary Lie groups. Since two steerable filter approaches based on Lie group theory [13, 9] have already been developed, the purpose of this section is to examine their relation to our approach. Freeman and Adelson [6] consider steerable filters with respect to the rotation group in 2D and 3D, respectively. For the 2D case they propose a Fourier basis (of the function space) times a rotationally invariant function, as well as a polynomial basis (of the function space) times a rotationally invariant function, as basis functions of the steerable filter. They realized that the minimum required set of basis functions depends on the kind of basis itself, but their approach failed to explain the reason for it. Michaelis and Sommer [13] answer this question based on Lie group theory: the basis of an irreducible group representation spans an invariant subspace of minimum size. Since the Fourier basis is the basis of an irreducible representation of the rotation group SO(2), the required number of basis functions is smaller than for the polynomial basis. Our approach can be considered an extension of the approach of Michaelis and Sommer from Abelian Lie groups to arbitrary Lie group transformations. Whereas the approach of Michaelis and Sommer constructs the basis functions from the generators of the group, our approach uses a Casimir operator. Since the generators of an Abelian Lie group commute with each other, their linear combination constitutes a Casimir operator, and thus both methods become equal in this case. But our method also works for compact groups, since in this case the Casimir operator delivers finite-dimensional invariant subspaces [17]. For non-compact, non-Abelian groups we showed that polynomials always serve as a basis for an invariant subspace.
The approach of Teo and Hel-Or significantly differs from our approach in the way the invariant subspace is generated. The basis functions of the invariant subspace are constructed by applying all combinations of Lie group generators to the function that is to be made steerable. A certain sequence of generators, denoted as a generator chain in the case of Abelian Lie groups and a generator tree in the case of non-Abelian Lie groups, is stopped if the resulting function is linearly dependent on the basis functions which have already been constructed. In the following, we show that this approach may fail. Let us consider the function h(x, y) = exp(-x²) and the rotation in 2D as the group transformation. Applying the generator chain, which is simply the successive application of the group generator L = x ∂/∂y - y ∂/∂x, does not converge, since h(x, y) is not expandable in a finite number of basis functions of a representation of the rotation group. In our approach, h(x, y) is first approximated by a finite number of basis functions of a finite-dimensional invariant subspace. This basis is then steerable by construction.

8 Examples of steerable filter kernels

This section discusses examples showing how to apply the theoretical framework derived so far.

Table 7.1: Several examples of Lie groups, the corresponding operator(s), generator(s), Casimir operator(s) and basis functions. Terminology: T_N: translation group in the N-dimensional signal space; SO(N): special orthogonal group; U_N: uniform scaling group; S_N: shear group.

Group | Operators | Generators | Casimir operator | Basis functions
T_N | D(a)h(x) = h(x - a) | {L_i = ∂/∂x_i} | C = ∑_{i=1}^{N} L_i² | {exp(j n^T x)}
SO(2) | D(α)h(r, φ) = h(r, φ - α) | L = ∂/∂φ | C = L² | {f_k(r) exp(jkφ)}
SO(3) | Rh(x) = h(R^{-1}x) | {L_k = x_j ∂/∂x_i - x_i ∂/∂x_j} | C = ∑_{i=1}^{3} L_i² | {f_k(r) Y_lm(θ, φ)}
U_N | D(α)h(x) = h(e^α x) | {L_i = x_i ∂/∂x_i} | C = ∑_{i=1}^{N} L_i² | {r^k}
S_N | D(u)h(x, t) = h(x - ut, t) | {L_i = t ∂/∂x_i} | C = ∑_{i=1}^{N} L_i² | {f_k(t) exp(jkx/t)}

8.1 Example of a steerable function in 2D

First of all, let us apply our steerable approach to 2D functions with respect to rotations. A real square integrable basis is given by

{e^{-r²} cos(nφ), e^{-r²} sin(nφ)} ,  n ∈ IN   (106)

which can be derived from the Casimir operator

C = ∂²/∂φ²   (107)

of the rotation group SO(2). For every n the two functions cos(nφ) and sin(nφ) span an invariant subspace. The map from the coefficient vector c to the interpolation function vector a(θ) is, according to Eq. (99), block diagonal:

P = ( cos(θ)   sin(θ)                                          )
    ( -sin(θ)  cos(θ)                                          )
    (                  cos(2θ)   sin(2θ)                       )   (108)
    (                  -sin(2θ)  cos(2θ)                       )
    (                                    cos(3θ)   sin(3θ)     )
    (                                    -sin(3θ)  cos(3θ)     )

The following figures illustrate the first four basis functions as well as a function expanded in this basis and a steered version of that function.

Fig. 8.1: Contour plots of the basis functions; from left to right: b_1 = e^{-r²} cos(φ), b_2 = e^{-r²} sin(φ), b_3 = e^{-r²} cos(2φ) and b_4 = e^{-r²} sin(2φ).

8.2 Steerable Functions in 3D

Now we turn to the case interesting for motion estimation: rotation in 3D. The Lie group is denoted by SO(3). The generators correspond to rotations around the principal axes {x_1, x_2, x_3},

L_{x_i} = -j ( x_j ∂/∂x_k - x_k ∂/∂x_j )   (109)

and fulfill the commutator relation

[L_i, L_j] = j ε_ijk L_k .   (110)

A Casimir operator is given by

L² = L_1² + L_2² + L_3² .   (111)

It turns out that SO(3) is not the highest symmetry group leaving L² invariant; the highest symmetry group of the Casimir operator is SO(4). Nonetheless, it has been shown that the eigenfunctions, called spherical harmonics, form an irreducible basis. The eigenfunctions with the same eigenvalue of L² form an invariant subspace. The corresponding eigenvalues are denoted as α = ℓ(ℓ + 1). Within this subspace the different eigenfunctions can be classified by the eigenvalues of one further generator, which is usually L_3:

L_3 Y_ℓm = m Y_ℓm ,   -ℓ ≤ m ≤ ℓ .   (112)

The spherical harmonics build a complete set of orthogonal basis functions on the unit sphere S².

Fig. 8.2: Spherical density plots of the real-valued combinations of the spherical harmonics of order one and two.

ℓ | D_ℓ(θ, φ) | D_ℓ(x_1, x_2, x_3) | Symbol
0 | √(1/4π) | √(1/4π) | s
1 | √(3/4π) cos(θ) | √(3/4π) z/r | d_z
1 | √(3/4π) sin(θ) cos(φ) | √(3/4π) x/r | d_x
1 | √(3/4π) sin(θ) sin(φ) | √(3/4π) y/r | d_y
2 | √(5/16π) (3 cos²(θ) - 1) | √(5/16π) (3z² - r²)/r² | d_{3z²-r²}
2 | √(15/4π) sin(θ) cos(θ) cos(φ) | √(15/4π) xz/r² | d_xz
2 | √(15/4π) sin(θ) cos(θ) sin(φ) | √(15/4π) yz/r² | d_yz
2 | √(15/16π) sin²(θ) cos(2φ) | √(15/16π) (x² - y²)/r² | d_{x²-y²}
2 | √(15/16π) sin²(θ) sin(2φ) | √(15/4π) xy/r² | d_xy

Table 8.1: Real orthonormal combinations of the spherical harmonics up to second order in spherical coordinates and in Cartesian coordinates. ℓ refers to the eigenvalues ℓ(ℓ+1) of the Casimir operator L².

8.2.1 The transformation matrix P for the 3D rotation group

A basis set for a steerable function under the SO(3) group is given by spherical harmonics multiplied with an arbitrary radial function. The spherical harmonics are polynomials defined on the unit sphere, and it is convenient to express them in spherical coordinates. In order to express their rotated versions, however, it is more convenient to use Cartesian coordinates. Since the spherical harmonics live on the unit sphere, the coordinates x_1, x_2, x_3 fulfill the constraint:

x_1² + x_2² + x_3² = 1   (113)

The connections between the spherical and the Cartesian coordinates are (spherical into Cartesian coordinates):

x_1 = r sin(θ) cos(φ)
x_2 = r sin(θ) sin(φ)
x_3 = r cos(θ)

and (Cartesian into spherical coordinates):

r = √(x_1² + x_2² + x_3²)
φ = arctan(x_2 / x_1)
θ = arctan(√(x_1² + x_2²) / x_3)

Let x̃_1, x̃_2, x̃_3 be the Cartesian coordinates of the rotated coordinate system. The rotated coordinate system can then be expressed in terms of the original one.

( x̃_1 ; x̃_2 ; x̃_3 ) = ( d_11  d_12  d_13 ; d_21  d_22  d_23 ; d_31  d_32  d_33 ) ( x_1 ; x_2 ; x_3 )

Let R_ŵ be the rotation matrix and R_ŵ the rotation operator in the function space around the axis ŵ. The rotated versions of the spherical harmonics have the following form (shown for the second-order harmonic proportional to x_1 x_2 / r²):

R_ŵ [Y(x_1, x_2, x_3)] = Y(x̃_1, x̃_2, x̃_3) = x̃_1 x̃_2 r^{-2}
= (d_11 x_1 + d_12 x_2 + d_13 x_3)(d_21 x_1 + d_22 x_2 + d_23 x_3) r^{-2}
= ( d_11 d_21 x_1² + d_12 d_22 x_2² + d_13 d_23 x_3²
  + (d_11 d_22 + d_12 d_21) x_1 x_2
  + (d_11 d_23 + d_13 d_21) x_1 x_3
  + (d_12 d_23 + d_13 d_22) x_2 x_3 ) r^{-2}
= d_11 d_21 sin²(θ) cos²(φ) + d_12 d_22 sin²(θ) sin²(φ) + d_13 d_23 cos²(θ)
  + (d_11 d_22 + d_12 d_21) sin²(θ) cos(φ) sin(φ)
  + (d_11 d_23 + d_13 d_21) sin(θ) cos(θ) cos(φ)
  + (d_12 d_23 + d_13 d_22) cos(θ) sin(θ) sin(φ)

Projecting the rotated version onto a spherical harmonic yields the corresponding interpolation function. The matrix P relating the coefficients of the expansion of the filter kernel in a basis of the form {g_lm(r) Y_lm(φ, θ)} up to order 2, with an orthonormal radial part ⟨g_lm, g_l'm'⟩ = δ_ll' δ_mm', is block diagonal:

P = ( P_0 ; P_1 ; P_2 )   (114)

with

P_0 = 1   (115)

P_1 = ( d_11  d_12  d_13 ; d_21  d_22  d_23 ; d_31  d_32  d_33 )   (116)

and P_2 the 5 × 5 block whose entries are quadratic combinations of the direction cosines, obtained by projecting each rotated second-order harmonic onto the second-order basis as in the example above:

(P_2)_{mn} = ⟨R_ŵ Y_{2m}, Y_{2n}⟩ .   (117)

The P_1 matrix for the basis functions of order 1 is just the rotation matrix R.
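That P_1 is the rotation matrix can be checked numerically. A minimal sketch (the Rodrigues construction and all names are ours): for the first-order basis b_i(x) = e^{-r²} x_i / r, evaluating the basis vector at R^{-1}x equals applying R^T to the basis vector at x, i.e. the interpolation weights are exactly the direction cosines.

```python
import numpy as np

def rotation(axis, ang):
    # Rodrigues' formula for the rotation matrix around 'axis' by 'ang'
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(ang) * K + (1.0 - np.cos(ang)) * (K @ K)

def basis(x):
    # vector of the three first-order basis functions g(r) x_i / r
    r = np.linalg.norm(x)
    return np.exp(-r * r) * x / r

R = rotation([1.0, 2.0, 0.5], 0.7)
x = np.array([0.3, -0.8, 0.5])

rotated = basis(R.T @ x)    # the rotated filters evaluated at x
steered = R.T @ basis(x)    # steering with the direction cosines
```

Since r is rotation invariant, the two vectors agree exactly (up to floating-point rounding), which is the ℓ = 1 instance of the block-diagonal structure (114).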

8.2.2 A Steerable Function in 3D

As an example of this recipe we consider the second partial derivative in the x_1 direction of a 3D Gaussian function G(x) = e^{-(x_1²+x_2²+x_3²)}:

G_{x_1 x_1}(x) = (4x_1² - 2) e^{-(x_1²+x_2²+x_3²)}   (118)

As a rotationally invariant basis we choose the 3D Gaussian function as the radial part and spherical harmonics as the angular part, {Y_lm e^{-(x_1²+x_2²+x_3²)}}. There are three basis functions which are not orthogonal to G_{x_1x_1}(x), namely Y_00, Y_20 and Y_22:

G_{x_1x_1}(x) = [ 2√π (4r²/3 - 2) Y_00 - (8r²/3) √(π/5) Y_20 + 8r² √(π/15) Y_22 ] e^{-(x_1²+x_2²+x_3²)}   (119)

with Y_20 = √(5/16π)(3x_3² - r²)/r² and Y_22 = √(15/16π)(x_1² - x_2²)/r².

9 Motion estimation with steerable filters

We now come to steerable filters in the context of motion estimation. So far we have extended the steerable approaches and have computed the basis and interpolation functions of the rotation group. The question of how to estimate the optical flow field with such a steerable filter is answered in this section. We present a local estimator already developed in [1].

9.1 The generalized structure tensor

In this section we present the extension of the classical structure tensor [3] to a general tensor based on directional filters [1]. One main characteristic of directional filters h_r(x), pointing in direction r, is steerability:

h_r(x) = ∑_{j=1}^{N} a_j(r) b_j(x) .

Applying a steerable filter to the signal s(x) yields:

h_r(x) ∗ s(x) = ∑_{j=1}^{N} a_j(r) (b_j(x) ∗ s(x)) .
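The practical payoff of this linearity can be sketched numerically (a toy patch with our own helper names, not the estimator of [1]): the basis responses b_j ∗ s are computed once, after which the response to the filter steered in any direction costs only a weighted sum.

```python
import numpy as np

def response(filt, sig):
    # filter response at one position: correlation of kernel and patch
    return float(np.sum(filt * sig))

# sampling grid and the two first-order basis kernels (Gaussian derivatives)
t = np.linspace(-3.0, 3.0, 61)
X, Y = np.meshgrid(t, t)
G = np.exp(-(X**2 + Y**2) / 2.0)
b1, b2 = -X * G, -Y * G

rng = np.random.default_rng(0)
s = rng.standard_normal(X.shape)        # toy signal patch

# basis responses b_j * s, computed once
r1, r2 = response(b1, s), response(b2, s)

# response of the filter steered to direction theta is just a weighted sum
theta = 0.9
steered_resp = np.cos(theta) * r1 + np.sin(theta) * r2

# reference: build the directional-derivative kernel explicitly and refilter
h_theta = -(np.cos(theta) * X + np.sin(theta) * Y) * G
direct_resp = response(h_theta, s)
```

Both responses agree by linearity of the convolution; in a motion estimator this means the expensive filtering is done once per basis function, not once per candidate direction.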


More information

Lie Groups for 2D and 3D Transformations

Lie Groups for 2D and 3D Transformations Lie Groups for 2D and 3D Transformations Ethan Eade Updated May 20, 2017 * 1 Introduction This document derives useful formulae for working with the Lie groups that represent transformations in 2D and

More information

Elementary realization of BRST symmetry and gauge fixing

Elementary realization of BRST symmetry and gauge fixing Elementary realization of BRST symmetry and gauge fixing Martin Rocek, notes by Marcelo Disconzi Abstract This are notes from a talk given at Stony Brook University by Professor PhD Martin Rocek. I tried

More information

Symmetries, Fields and Particles. Examples 1.

Symmetries, Fields and Particles. Examples 1. Symmetries, Fields and Particles. Examples 1. 1. O(n) consists of n n real matrices M satisfying M T M = I. Check that O(n) is a group. U(n) consists of n n complex matrices U satisfying U U = I. Check

More information

Chimica Inorganica 3

Chimica Inorganica 3 A symmetry operation carries the system into an equivalent configuration, which is, by definition physically indistinguishable from the original configuration. Clearly then, the energy of the system must

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

1. Basic Operations Consider two vectors a (1, 4, 6) and b (2, 0, 4), where the components have been expressed in a given orthonormal basis.

1. Basic Operations Consider two vectors a (1, 4, 6) and b (2, 0, 4), where the components have been expressed in a given orthonormal basis. Questions on Vectors and Tensors 1. Basic Operations Consider two vectors a (1, 4, 6) and b (2, 0, 4), where the components have been expressed in a given orthonormal basis. Compute 1. a. 2. The angle

More information

GROUP THEORY PRIMER. New terms: so(2n), so(2n+1), symplectic algebra sp(2n)

GROUP THEORY PRIMER. New terms: so(2n), so(2n+1), symplectic algebra sp(2n) GROUP THEORY PRIMER New terms: so(2n), so(2n+1), symplectic algebra sp(2n) 1. Some examples of semi-simple Lie algebras In the previous chapter, we developed the idea of understanding semi-simple Lie algebras

More information

Review of Linear System Theory

Review of Linear System Theory Review of Linear System Theory The following is a (very) brief review of linear system theory and Fourier analysis. I work primarily with discrete signals. I assume the reader is familiar with linear algebra

More information

df(x) = h(x) dx Chemistry 4531 Mathematical Preliminaries Spring 2009 I. A Primer on Differential Equations Order of differential equation

df(x) = h(x) dx Chemistry 4531 Mathematical Preliminaries Spring 2009 I. A Primer on Differential Equations Order of differential equation Chemistry 4531 Mathematical Preliminaries Spring 009 I. A Primer on Differential Equations Order of differential equation Linearity of differential equation Partial vs. Ordinary Differential Equations

More information

Course Summary Math 211

Course Summary Math 211 Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Two special equations: Bessel s and Legendre s equations. p Fourier-Bessel and Fourier-Legendre series. p

Two special equations: Bessel s and Legendre s equations. p Fourier-Bessel and Fourier-Legendre series. p LECTURE 1 Table of Contents Two special equations: Bessel s and Legendre s equations. p. 259-268. Fourier-Bessel and Fourier-Legendre series. p. 453-460. Boundary value problems in other coordinate system.

More information

Pre-School Linear Algebra

Pre-School Linear Algebra Pre-School Linear Algebra Cornelius Weber Matrix Product The elements of a matrix C = AB are obtained as: c mn = Q a mq b qn q which is written in matrix notation as (small capital letters denote number

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Tensor Analysis in Euclidean Space

Tensor Analysis in Euclidean Space Tensor Analysis in Euclidean Space James Emery Edited: 8/5/2016 Contents 1 Classical Tensor Notation 2 2 Multilinear Functionals 4 3 Operations With Tensors 5 4 The Directional Derivative 5 5 Curvilinear

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Introduction and Vectors Lecture 1

Introduction and Vectors Lecture 1 1 Introduction Introduction and Vectors Lecture 1 This is a course on classical Electromagnetism. It is the foundation for more advanced courses in modern physics. All physics of the modern era, from quantum

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Some elements of vector and tensor analysis and the Dirac δ-function

Some elements of vector and tensor analysis and the Dirac δ-function Chapter 1 Some elements of vector and tensor analysis and the Dirac δ-function The vector analysis is useful in physics formulate the laws of physics independently of any preferred direction in space experimentally

More information

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I Physics 342 Lecture 2 Linear Algebra I Lecture 2 Physics 342 Quantum Mechanics I Wednesday, January 27th, 21 From separation of variables, we move to linear algebra Roughly speaking, this is the study

More information

Clifford Algebras and Spin Groups

Clifford Algebras and Spin Groups Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly. C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a

More information

Particles I, Tutorial notes Sessions I-III: Roots & Weights

Particles I, Tutorial notes Sessions I-III: Roots & Weights Particles I, Tutorial notes Sessions I-III: Roots & Weights Kfir Blum June, 008 Comments/corrections regarding these notes will be appreciated. My Email address is: kf ir.blum@weizmann.ac.il Contents 1

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

1 The postulates of quantum mechanics

1 The postulates of quantum mechanics 1 The postulates of quantum mechanics The postulates of quantum mechanics were derived after a long process of trial and error. These postulates provide a connection between the physical world and the

More information

Part IA. Vectors and Matrices. Year

Part IA. Vectors and Matrices. Year Part IA Vectors and Matrices Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2018 Paper 1, Section I 1C Vectors and Matrices For z, w C define the principal value of z w. State de Moivre s

More information

Tensors, and differential forms - Lecture 2

Tensors, and differential forms - Lecture 2 Tensors, and differential forms - Lecture 2 1 Introduction The concept of a tensor is derived from considering the properties of a function under a transformation of the coordinate system. A description

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Vectors in Function Spaces

Vectors in Function Spaces Jim Lambers MAT 66 Spring Semester 15-16 Lecture 18 Notes These notes correspond to Section 6.3 in the text. Vectors in Function Spaces We begin with some necessary terminology. A vector space V, also

More information

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19 Page 404 Lecture : Simple Harmonic Oscillator: Energy Basis Date Given: 008/11/19 Date Revised: 008/11/19 Coordinate Basis Section 6. The One-Dimensional Simple Harmonic Oscillator: Coordinate Basis Page

More information

Computation. For QDA we need to calculate: Lets first consider the case that

Computation. For QDA we need to calculate: Lets first consider the case that Computation For QDA we need to calculate: δ (x) = 1 2 log( Σ ) 1 2 (x µ ) Σ 1 (x µ ) + log(π ) Lets first consider the case that Σ = I,. This is the case where each distribution is spherical, around the

More information

Supporting Information

Supporting Information Supporting Information A: Calculation of radial distribution functions To get an effective propagator in one dimension, we first transform 1) into spherical coordinates: x a = ρ sin θ cos φ, y = ρ sin

More information

Implicit Functions, Curves and Surfaces

Implicit Functions, Curves and Surfaces Chapter 11 Implicit Functions, Curves and Surfaces 11.1 Implicit Function Theorem Motivation. In many problems, objects or quantities of interest can only be described indirectly or implicitly. It is then

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Week 6: Differential geometry I

Week 6: Differential geometry I Week 6: Differential geometry I Tensor algebra Covariant and contravariant tensors Consider two n dimensional coordinate systems x and x and assume that we can express the x i as functions of the x i,

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

FFTs in Graphics and Vision. Groups and Representations

FFTs in Graphics and Vision. Groups and Representations FFTs in Graphics and Vision Groups and Representations Outline Groups Representations Schur s Lemma Correlation Groups A group is a set of elements G with a binary operation (often denoted ) such that

More information

Controllability, Observability & Local Decompositions

Controllability, Observability & Local Decompositions ontrollability, Observability & Local Decompositions Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University Outline Lie Bracket Distributions ontrollability ontrollability Distributions

More information

Lecture 6. Numerical methods. Approximation of functions

Lecture 6. Numerical methods. Approximation of functions Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation

More information

G : Quantum Mechanics II

G : Quantum Mechanics II G5.666: Quantum Mechanics II Notes for Lecture 5 I. REPRESENTING STATES IN THE FULL HILBERT SPACE Given a representation of the states that span the spin Hilbert space, we now need to consider the problem

More information

Classical Mechanics. Luis Anchordoqui

Classical Mechanics. Luis Anchordoqui 1 Rigid Body Motion Inertia Tensor Rotational Kinetic Energy Principal Axes of Rotation Steiner s Theorem Euler s Equations for a Rigid Body Eulerian Angles Review of Fundamental Equations 2 Rigid body

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Lecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition)

Lecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition) Lecture 0: A (Brief) Introduction to Group heory (See Chapter 3.3 in Boas, 3rd Edition) Having gained some new experience with matrices, which provide us with representations of groups, and because symmetries

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

2. Signal Space Concepts

2. Signal Space Concepts 2. Signal Space Concepts R.G. Gallager The signal-space viewpoint is one of the foundations of modern digital communications. Credit for popularizing this viewpoint is often given to the classic text of

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UFD. Therefore

More information

1 Matrices and vector spaces

1 Matrices and vector spaces Matrices and vector spaces. Which of the following statements about linear vector spaces are true? Where a statement is false, give a counter-example to demonstrate this. (a) Non-singular N N matrices

More information

Matrix Representation

Matrix Representation Matrix Representation Matrix Rep. Same basics as introduced already. Convenient method of working with vectors. Superposition Complete set of vectors can be used to express any other vector. Complete set

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1

More information

Qualification Exam: Mathematical Methods

Qualification Exam: Mathematical Methods Qualification Exam: Mathematical Methods Name:, QEID#41534189: August, 218 Qualification Exam QEID#41534189 2 1 Mathematical Methods I Problem 1. ID:MM-1-2 Solve the differential equation dy + y = sin

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

NOTES ON DIFFERENTIAL FORMS. PART 3: TENSORS

NOTES ON DIFFERENTIAL FORMS. PART 3: TENSORS NOTES ON DIFFERENTIAL FORMS. PART 3: TENSORS 1. What is a tensor? Let V be a finite-dimensional vector space. 1 It could be R n, it could be the tangent space to a manifold at a point, or it could just

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Symmetries, Fields and Particles 2013 Solutions

Symmetries, Fields and Particles 2013 Solutions Symmetries, Fields and Particles 013 Solutions Yichen Shi Easter 014 1. (a) Define the groups SU() and SO(3), and find their Lie algebras. Show that these Lie algebras, including their bracket structure,

More information

The 3 dimensional Schrödinger Equation

The 3 dimensional Schrödinger Equation Chapter 6 The 3 dimensional Schrödinger Equation 6.1 Angular Momentum To study how angular momentum is represented in quantum mechanics we start by reviewing the classical vector of orbital angular momentum

More information

Isotropic harmonic oscillator

Isotropic harmonic oscillator Isotropic harmonic oscillator 1 Isotropic harmonic oscillator The hamiltonian of the isotropic harmonic oscillator is H = h m + 1 mω r (1) = [ h d m dρ + 1 ] m ω ρ, () ρ=x,y,z a sum of three one-dimensional

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

On the quantum theory of rotating electrons

On the quantum theory of rotating electrons Zur Quantentheorie des rotierenden Elektrons Zeit. f. Phys. 8 (98) 85-867. On the quantum theory of rotating electrons By Friedrich Möglich in Berlin-Lichterfelde. (Received on April 98.) Translated by

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Introduction to Geometry

Introduction to Geometry Introduction to Geometry it is a draft of lecture notes of H.M. Khudaverdian. Manchester, 18 May 211 Contents 1 Euclidean space 3 1.1 Vector space............................ 3 1.2 Basic example of n-dimensional

More information

SEMISIMPLE LIE GROUPS

SEMISIMPLE LIE GROUPS SEMISIMPLE LIE GROUPS BRIAN COLLIER 1. Outiline The goal is to talk about semisimple Lie groups, mainly noncompact real semisimple Lie groups. This is a very broad subject so we will do our best to be

More information

ADDITIONAL MATHEMATICS

ADDITIONAL MATHEMATICS ADDITIONAL MATHEMATICS GCE Ordinary Level (Syllabus 4018) CONTENTS Page NOTES 1 GCE ORDINARY LEVEL ADDITIONAL MATHEMATICS 4018 2 MATHEMATICAL NOTATION 7 4018 ADDITIONAL MATHEMATICS O LEVEL (2009) NOTES

More information

Notation. For any Lie group G, we set G 0 to be the connected component of the identity.

Notation. For any Lie group G, we set G 0 to be the connected component of the identity. Notation. For any Lie group G, we set G 0 to be the connected component of the identity. Problem 1 Prove that GL(n, R) is homotopic to O(n, R). (Hint: Gram-Schmidt Orthogonalization.) Here is a sequence

More information

General-relativistic quantum theory of the electron

General-relativistic quantum theory of the electron Allgemein-relativistische Quantentheorie des Elektrons, Zeit. f. Phys. 50 (98), 336-36. General-relativistic quantum theory of the electron By H. Tetrode in Amsterdam (Received on 9 June 98) Translated

More information

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I

Physics 342 Lecture 2. Linear Algebra I. Lecture 2. Physics 342 Quantum Mechanics I Physics 342 Lecture 2 Linear Algebra I Lecture 2 Physics 342 Quantum Mechanics I Wednesday, January 3th, 28 From separation of variables, we move to linear algebra Roughly speaking, this is the study of

More information

5 Irreducible representations

5 Irreducible representations Physics 29b Lecture 9 Caltech, 2/5/9 5 Irreducible representations 5.9 Irreps of the circle group and charge We have been talking mostly about finite groups. Continuous groups are different, but their

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

1 Mathematical preliminaries

1 Mathematical preliminaries 1 Mathematical preliminaries The mathematical language of quantum mechanics is that of vector spaces and linear algebra. In this preliminary section, we will collect the various definitions and mathematical

More information