Stable Process 2. Multivariate Stable Distributions July, 2006

1. Stable random vectors. 2. Characteristic functions. 3. Strictly stable and symmetric stable random vectors. 4. Sub-Gaussian random vectors. 5. Covariation. 6. James orthogonality. 7. Codifference.

1 Stable random vectors

Definition A random vector $X = (X_1, \ldots, X_d)$ is said to be a stable random vector in $\mathbb{R}^d$ if for any positive numbers $A$ and $B$ there exist a positive number $C$ and a vector $D \in \mathbb{R}^d$ such that

$$A X^{(1)} + B X^{(2)} \stackrel{d}{=} C X + D, \qquad (1)$$

where $X^{(1)}$ and $X^{(2)}$ are independent copies of $X$. $X$ is called strictly stable if and only if (1) holds with $D = 0$. It is called symmetric stable if for any Borel set $S$ of $\mathbb{R}^d$ the following relation holds:

$$P(X \in S) = P(-X \in S).$$

NOTE: Any symmetric stable random vector is also strictly stable, but not vice versa.

Theorem 1.1 Let $X = (X_1, \ldots, X_d)$ be a stable (respectively strictly stable, symmetric stable) vector in $\mathbb{R}^d$. Then there is a constant $\alpha \in (0, 2]$ such that, in equation (1), $C = (A^\alpha + B^\alpha)^{1/\alpha}$. Moreover, any linear combination of the components of $X$ of the type $Y = \sum_{i=1}^d b_i X_i = (b, X)$ is also $\alpha$-stable (respectively strictly stable, symmetric stable).

Proof Using equation (1) and characteristic functions, we see that $(b, A X^{(1)} + B X^{(2)}) \stackrel{d}{=} (b, C X + D)$, that is,

$$A Y^{(1)} + B Y^{(2)} \stackrel{d}{=} C Y + (b, D).$$
From the univariate theory there is a constant $\alpha \in (0, 2]$ such that $C = (A^\alpha + B^\alpha)^{1/\alpha}$. Moreover, this $\alpha$ is unique: otherwise, for another index $\alpha' \neq \alpha$, the equality

$$(A^\alpha + B^\alpha)^{1/\alpha} = C = (A^{\alpha'} + B^{\alpha'})^{1/\alpha'}$$

would have to hold for all $A > 0$ and $B > 0$, which is impossible. The rest is also easy to see.

Similar to the one-dimensional case, the previous definition is equivalent to the following.

Definition A random vector $X$ is stable if and only if for any $n \geq 2$ there is an $\alpha \in (0, 2]$ and a vector $D_n$ such that

$$X^{(1)} + X^{(2)} + \cdots + X^{(n)} \stackrel{d}{=} n^{1/\alpha} X + D_n,$$

where $X^{(1)}, X^{(2)}, \ldots, X^{(n)}$ are independent copies of $X$.

Definition A random vector $X$ in $\mathbb{R}^d$ is called $\alpha$-stable if equation (1) holds with $C = (A^\alpha + B^\alpha)^{1/\alpha}$ or, equivalently, if the relation in the previous definition holds with this $\alpha$. The index $\alpha$ is the index of stability or the characteristic exponent of the vector $X$.

Theorem 1.2 Let $X$ be a random vector in $\mathbb{R}^d$.
(a) If all linear combinations $Y = \sum_{k=1}^d b_k X_k$ have strictly stable distributions, then $X$ is a strictly stable random vector.
(b) If all linear combinations are symmetric stable, then $X$ is a symmetric stable random vector.
(c) If all linear combinations are stable with index of stability greater than or equal to 1, then $X$ is a stable vector.

HINT: Denote $Y_b = \sum_{k=1}^d b_k X_k$, and first show that if all $Y_b$ are stable, then they all have the same index of stability, arguing by contradiction (reductio ad absurdum). The key step in this proof compares tail behaviour: consider the limit of $Z_n = A_n Y_b + Y_c$, where $\alpha_b < \alpha_c$ and $A_n \to 0$.

NOTE: There is a counterexample due to David J. Marcus which demonstrates that there exists a non-stable vector $X = (X_1, X_2)$ whose linear combinations are all $\alpha$-stable random variables with $\alpha < 1$. The key in the proof of the counterexample is to examine the characteristic functions.

Theorem 1.3 Let $X$ be a random vector in $\mathbb{R}^d$ such that all linear combinations of its components are stable. If $X$ is also infinitely divisible, then $X$ is stable.
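The scaling relation of Theorem 1.1 can be checked by simulation in the symmetric 1-stable (Cauchy) case, where $C = A + B$. The following is a minimal Monte Carlo sketch, assuming NumPy; the constants $A$, $B$ and the sample size are illustrative.

```python
import numpy as np

# Numerical sanity check of C = (A^alpha + B^alpha)^(1/alpha) for alpha = 1:
# if X is standard Cauchy (symmetric 1-stable), then A*X1 + B*X2 should be
# distributed as (A + B)*X.  We compare empirical quartiles of both sides.
rng = np.random.default_rng(0)
n = 200_000
A, B = 2.0, 3.0

X1 = rng.standard_cauchy(n)
X2 = rng.standard_cauchy(n)
lhs = A * X1 + B * X2                    # A X^(1) + B X^(2)
rhs = (A + B) * rng.standard_cauchy(n)   # C X with C = (A^1 + B^1)^(1/1)

q = [25, 50, 75]
q_lhs = np.percentile(lhs, q)
q_rhs = np.percentile(rhs, q)
print(q_lhs, q_rhs)  # quartiles agree up to Monte Carlo error
```

Quartiles are compared rather than means because a Cauchy random variable has no finite moments.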
2 Characteristic functions

Let $X = (X_1, \ldots, X_d)$ be an $\alpha$-stable random vector, and let $\Phi_\alpha(\theta) = E \exp\{i(\theta, X)\}$ denote its characteristic function. Also let $S_d = \{s : \|s\| = 1\}$ denote the unit sphere in $\mathbb{R}^d$, which is a $(d-1)$-dimensional surface.

Theorem 2.1 Let $0 < \alpha < 2$. Then $X$ is an $\alpha$-stable random vector in $\mathbb{R}^d$ if and only if there exists a finite measure $\Gamma$ on the unit sphere $S_d$ and a vector $\mu^0$ in $\mathbb{R}^d$ such that
(a) If $\alpha \neq 1$,

$$\Phi_\alpha(\theta) = \exp\left\{ -\int_{S_d} |(\theta, s)|^\alpha \left(1 - i\,\mathrm{sign}((\theta, s)) \tan\frac{\pi\alpha}{2}\right) \Gamma(ds) + i(\theta, \mu^0) \right\}. \qquad (2)$$

(b) If $\alpha = 1$,

$$\Phi_1(\theta) = \exp\left\{ -\int_{S_d} |(\theta, s)| \left(1 + i\,\frac{2}{\pi}\,\mathrm{sign}((\theta, s)) \ln|(\theta, s)|\right) \Gamma(ds) + i(\theta, \mu^0) \right\}. \qquad (3)$$

The pair $(\Gamma, \mu^0)$ is unique.

NOTE: The components of $\mu^0$, in the case $\alpha = 1$, are not equal to the shift parameters of the components $X_1, \ldots, X_d$ of $X$.

Definition The vector $X$ in the previous theorem is said to have spectral representation $(\Gamma, \mu^0)$. The measure $\Gamma$ is called the spectral measure of the $\alpha$-stable random vector $X$.

Example Suppose $d = 1$; then $S_1 = \{-1, 1\}$. If $X \sim S_\alpha(\sigma, \beta, \mu)$ with $\alpha \neq 1$, then

$$\sigma = \left(\Gamma(\{1\}) + \Gamma(\{-1\})\right)^{1/\alpha}, \qquad \beta = \frac{\Gamma(\{1\}) - \Gamma(\{-1\})}{\Gamma(\{1\}) + \Gamma(\{-1\})}, \qquad \mu = \mu^0.$$

The skewness parameter $\beta$ is zero if the spectral measure $\Gamma$ is symmetric. Similar results hold for $\alpha = 1$.

Example Let $X$ be an $\alpha$-stable random vector with characteristic function given in Theorem 2.1. Then the linear combination $Y_b = (b, X)$ has an $\alpha$-stable distribution $S_\alpha(\sigma_b, \beta_b, \mu_b)$. Moreover,

$$\sigma_b = \left( \int_{S_d} |(b, s)|^\alpha \, \Gamma(ds) \right)^{1/\alpha}, \qquad (4)$$

$$\beta_b = \frac{\int_{S_d} |(b, s)|^\alpha \, \mathrm{sign}((b, s)) \, \Gamma(ds)}{\int_{S_d} |(b, s)|^\alpha \, \Gamma(ds)}, \qquad (5)$$

$$\mu_b = \begin{cases} (b, \mu^0) & \text{if } \alpha \neq 1, \\[2pt] (b, \mu^0) - \dfrac{2}{\pi} \displaystyle\int_{S_d} (b, s) \ln|(b, s)| \, \Gamma(ds) & \text{if } \alpha = 1. \end{cases} \qquad (6)$$

Proposition 2.2 The spectral measure $\Gamma$ of an $\alpha$-stable vector $X$ in $\mathbb{R}^d$ is concentrated on a finite number of points on the unit sphere if and only if $(X_1, \ldots, X_d)$ can be expressed as a linear transformation of independent $\alpha$-stable random variables, say $X = AY$, where $Y_1, \ldots, Y_d$ are independent $\alpha$-stable and $A$ is a $d \times d$ matrix.

Suppose $X$ is an $\alpha$-stable random vector; then its spectral measure $\Gamma$ is defined on the unit sphere with respect to the Euclidean norm in $\mathbb{R}^d$. However, there are many other norms on $\mathbb{R}^d$, each of which defines another unit sphere $\bar{S}_d$; therefore, we need a corresponding spectral measure on this new unit sphere.
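For a spectral measure concentrated on finitely many points, the integrals in (4) and (5) become finite sums over the atoms. A minimal sketch, assuming NumPy; the atoms, weights and $b$ below are illustrative, not taken from the text.

```python
import numpy as np

# Parameters of Y_b = (b, X) from a discrete spectral measure, alpha != 1.
# Gamma puts mass w[k] at unit vector s[k], so the integrals in (4)-(5)
# reduce to weighted sums over the atoms.
alpha = 1.5
s = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # atoms on S_2
w = np.array([0.5, 1.0, 0.25])                        # Gamma({s_k})
b = np.array([1.0, 2.0])

bs = s @ b                                            # (b, s_k) for each atom
sigma_b = (w @ np.abs(bs) ** alpha) ** (1 / alpha)                              # eq. (4)
beta_b = (w @ (np.abs(bs) ** alpha * np.sign(bs))) / (w @ np.abs(bs) ** alpha)  # eq. (5)
print(sigma_b, beta_b)
```

Since all the mass with a nonzero projection sits mostly on the positive side of $b$, the resulting skewness $\beta_b$ is close to, but below, 1.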
Proposition 2.3 Let $\|\cdot\|_*$ be another norm on $\mathbb{R}^d$ with unit sphere $\bar{S}_d = \{s : \|s\|_* = 1\}$. Let $\Gamma^*$ be the finite Borel measure on $S_d$ equivalent to $\Gamma$ with

$$\Gamma^*(ds) = \|s\|_*^\alpha \, \Gamma(ds),$$

and let $T : S_d \to \bar{S}_d$ be given by $Ts = s/\|s\|_*$. Define

$$\bar{\Gamma} = \Gamma^* \circ T^{-1}, \qquad \bar{\mu}^0 = \begin{cases} \mu^0 & \text{if } \alpha \neq 1, \\ \mu^0 + \mu^* & \text{if } \alpha = 1, \end{cases}$$

where

$$(\mu^*)_j = -\frac{2}{\pi} \int_{S_d} s_j \ln\|s\|_* \, \Gamma(ds), \qquad j = 1, \ldots, d.$$

Then the joint characteristic function $\Phi_\alpha(\theta)$ of the $\alpha$-stable random vector $X$ in $\mathbb{R}^d$ is also given by (2) and (3) with $(S_d, \Gamma, \mu^0)$ replaced by $(\bar{S}_d, \bar{\Gamma}, \bar{\mu}^0)$ and $(\theta, s) = \sum_{j=1}^d \theta_j s_j$.

3 Strictly stable and symmetric stable random vectors

Theorem 3.1 $X$ is a strictly $\alpha$-stable random vector in $\mathbb{R}^d$ with $0 < \alpha \leq 2$ if and only if
(a) for $\alpha \neq 1$: $\mu^0 = 0$;
(b) for $\alpha = 1$: $\int_{S_d} s_k \, \Gamma(ds) = 0$ for $k = 1, \ldots, d$.

As a result, we have

Corollary 3.2 $X$ is a strictly $\alpha$-stable random vector in $\mathbb{R}^d$ with $0 < \alpha \leq 2$ if and only if all its components $X_k$, $k = 1, \ldots, d$, are strictly $\alpha$-stable.

Theorem 3.3 $X$ is a symmetric $\alpha$-stable random vector in $\mathbb{R}^d$ with $0 < \alpha < 2$ if and only if there exists a unique symmetric finite measure $\Gamma$ on the sphere $S_d$ such that

$$E \exp\{i(\theta, X)\} = \exp\left\{ -\int_{S_d} |(\theta, s)|^\alpha \, \Gamma(ds) \right\}. \qquad (7)$$

$\Gamma$ is the spectral measure of the symmetric $\alpha$-stable random vector $X$.

NOTE 1: Not every strictly 1-stable random vector in $\mathbb{R}^d$ with $d > 1$ can be made symmetric by shifting.

NOTE 2: The symmetry of an $\alpha$-stable random vector cannot be regarded as a component-wise property. For example, let $X_1, X_2, X_3$ be i.i.d. $S_1(1, 1, 0)$, and set $Y_1 = X_1 - X_2$, $Y_2 = X_2 - X_3$. Then $Y$ is component-wise symmetric 1-stable, but the vector itself is not symmetric. However, if $X_1, \ldots, X_d$ are jointly S$\alpha$S with spectral measure $\Gamma_d$, then $X_1, \ldots, X_n$, $n \leq d$, are jointly S$\alpha$S with spectral measure $\Gamma_n$ obtained from $\Gamma_d$ by a suitable transformation.

NOTE 3: This theorem also holds in the Gaussian case $\alpha = 2$, but then $\Gamma$ is no longer unique.
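The link between (2) and (7) can be seen concretely: when $\Gamma$ is symmetric ($\Gamma(A) = \Gamma(-A)$) and $\mu^0 = 0$, the imaginary $\mathrm{sign}(\cdot)\tan(\cdot)$ terms at $s$ and $-s$ cancel, leaving the purely real exponent of (7). A small deterministic sketch with a symmetric discrete $\Gamma$ (illustrative atoms and weights), assuming NumPy:

```python
import numpy as np

# For a symmetric discrete spectral measure, the characteristic exponent in (2)
# becomes the purely real exponent of (7): the sign(.)*tan(.) terms at s and -s
# cancel because Gamma({s}) = Gamma({-s}).
alpha = 1.3
s = np.array([[0.6, 0.8], [-0.6, -0.8], [1.0, 0.0], [-1.0, 0.0]])
w = np.array([0.7, 0.7, 0.4, 0.4])        # symmetric weights
theta = np.array([1.5, -0.5])

ts = s @ theta                            # (theta, s_k) for each atom
exponent = -np.sum(w * np.abs(ts) ** alpha
                   * (1 - 1j * np.sign(ts) * np.tan(np.pi * alpha / 2)))
print(exponent)  # imaginary part vanishes; real part matches eq. (7)
```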
4 Sub-Gaussian random vectors

Recall from the univariate theory that a symmetric $\alpha$-stable random variable can be constructed by multiplying a normal random variable $G$ by the square root of an $\alpha/2$-stable random variable totally skewed to the right and independent of $G$. The $d$-dimensional extension can be stated as follows. Choose

$$A \sim S_{\alpha/2}\left( \left(\cos\frac{\pi\alpha}{4}\right)^{2/\alpha}, \, 1, \, 0 \right), \quad \text{with } \alpha < 2, \qquad (8)$$

so that the Laplace transform is $E e^{-\gamma A} = \exp\{-\gamma^{\alpha/2}\}$. Let $G = (G_1, \ldots, G_d)$ be a zero-mean Gaussian vector in $\mathbb{R}^d$ independent of $A$. Then the random vector

$$X = (A^{1/2} G_1, \ldots, A^{1/2} G_d) \qquad (9)$$

has an S$\alpha$S distribution, since $(b, X)$ is S$\alpha$S for all $b$.

Definition Any vector $X$ distributed as in equation (9) is called a sub-Gaussian S$\alpha$S random vector with underlying Gaussian vector $G$. It is also said to be subordinated to $G$.

Proposition 4.1 The sub-Gaussian symmetric $\alpha$-stable random vector $X$ has characteristic function

$$E \exp\left\{ i \sum_{k=1}^d \theta_k X_k \right\} = \exp\left\{ -\left| \frac{1}{2} \sum_{i=1}^d \sum_{j=1}^d R_{ij} \theta_i \theta_j \right|^{\alpha/2} \right\}, \qquad (10)$$

where $R_{ij} = E G_i G_j$ is the covariance of the underlying Gaussian vector $G$.

HINT: Condition on $A$ and use iterated expectations together with the Laplace transform of $A$; the proposition follows easily.

NOTE: $G$ and $X$ are in one-to-one correspondence.

Example The characteristic function of a multivariate Cauchy distribution in $\mathbb{R}^d$ is

$$\phi(\theta) = \exp\left\{ -(\theta^T \Sigma \theta)^{1/2} + i(\theta, \mu^0) \right\}.$$

That is to say, the multivariate Cauchy distribution is a shifted S$\alpha$S sub-Gaussian distribution.

Proposition 4.2 Let $X$ be an S$\alpha$S, $\alpha < 2$, random vector in $\mathbb{R}^d$. Then the following three statements are equivalent:
(a) $X$ is sub-Gaussian with an underlying Gaussian vector having i.i.d. $N(0, \sigma^2)$ components.
(b) The characteristic function of $X$ has the form

$$E \exp\left\{ i \sum_{k=1}^d \theta_k X_k \right\} = \exp\left\{ -\left( \frac{\sigma^2}{2} \sum_{i=1}^d \theta_i^2 \right)^{\alpha/2} \right\} = \exp\left\{ -2^{-\alpha/2} \sigma^\alpha \|\theta\|^\alpha \right\}.$$

In other words, the characteristic function depends only on the magnitude of $\theta$.
(c) The spectral measure of $X$ is uniform on $S_d$.
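The construction (8)-(9) can be simulated directly for $\alpha = 1$. There $A \sim S_{1/2}\left((\cos\frac{\pi}{4})^2, 1, 0\right) = S_{1/2}(\tfrac{1}{2}, 1, 0)$, and since $1/Z^2 \sim S_{1/2}(1, 1, 0)$ for $Z \sim N(0,1)$ (a standard fact for the positive $\tfrac{1}{2}$-stable, i.e. Lévy, law), one may take $A = 1/(2Z^2)$. By (10) with $R = I$, each component is then Cauchy with scale $2^{-1/2}$. A simulation sketch assuming NumPy:

```python
import numpy as np

# Sub-Gaussian SaS construction for alpha = 1 (multivariate Cauchy).
# A ~ S_{1/2}((cos(pi/4))^2, 1, 0) = S_{1/2}(1/2, 1, 0), sampled as
# A = 1/(2 Z^2) with Z ~ N(0,1), since 1/Z^2 ~ S_{1/2}(1, 1, 0).
rng = np.random.default_rng(1)
n, d = 200_000, 2

Z = rng.standard_normal(n)
A = 1.0 / (2.0 * Z ** 2)
G = rng.standard_normal((n, d))              # underlying Gaussian, R = I
X = np.sqrt(A)[:, None] * G                  # eq. (9)

# By (10) with R = I and alpha = 1, each X_k is Cauchy with scale 2^{-1/2},
# so sqrt(2) * X_k should have quartiles close to -1, 0, 1.
q = np.percentile(np.sqrt(2) * X[:, 0], [25, 50, 75])
print(q)
```

Indeed $\sqrt{2}\,X_1 = G_1/|Z|$, a ratio of independent standard normals, which is exactly standard Cauchy.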
Generally speaking, the components of the underlying Gaussian vector $G$ are not always i.i.d., but being Gaussian, they can always be expressed as linear combinations of i.i.d. $N(0, 1)$ random variables. Therefore:

Proposition 4.3 Let $Z$ be an S$\alpha$S sub-Gaussian random vector in $\mathbb{R}^d$ with underlying Gaussian vector having i.i.d. $N(0, 1)$ components. Then for any S$\alpha$S sub-Gaussian random vector $X$ in $\mathbb{R}^d$, there is a lower-triangular $d \times d$ matrix $\Lambda$ such that $X \stackrel{d}{=} \Lambda Z$. The matrix $\Lambda$ has full rank if the components of $X$ are linearly independent.

Proposition 4.4 The spectral measure $\Gamma$ of a sub-Gaussian S$\alpha$S random vector in $\mathbb{R}^d$ has the form $\Gamma = h(\Gamma_0)$, the image of the uniform measure $\Gamma_0$ on $S_d$ under a particular mapping $h$ from $S_d$ onto itself.

NOTE: Not all symmetric $\alpha$-stable random vectors are sub-Gaussian. Moreover, the components of a sub-Gaussian S$\alpha$S random vector are strongly dependent.

5 Covariation

The covariance function is extremely powerful in studying Gaussian random vectors; however, it does not exist for $\alpha$-stable random variables when $\alpha < 2$. Therefore a less powerful (but still useful) tool called the covariation is defined for $1 < \alpha < 2$; its details are discussed here.

Definition Let $a$ and $p$ be real numbers, with $p \geq 0$. The signed power $a^{\langle p \rangle}$ equals

$$a^{\langle p \rangle} = |a|^p \, \mathrm{sign}(a). \qquad (11)$$

Definition Let $X_1$ and $X_2$ be jointly S$\alpha$S with $\alpha > 1$ and let $\Gamma$ be the spectral measure of the random vector $(X_1, X_2)$. The covariation of $X_1$ on $X_2$ is the real number

$$[X_1, X_2]_\alpha = \int_{S_2} s_1 s_2^{\langle \alpha - 1 \rangle} \, \Gamma(ds). \qquad (12)$$

NOTE 1: The covariation is not symmetric in its arguments.

NOTE 2: When $\alpha = 2$, this definition leads to $[X_1, X_2]_2 = \frac{1}{2} \mathrm{Cov}(X_1, X_2)$.

More intuitively, let $(X_1, X_2)$ be jointly S$\alpha$S with $1 < \alpha < 2$; then $Y = \theta_1 X_1 + \theta_2 X_2$ is also S$\alpha$S. Let $\sigma(\theta_1, \theta_2)$ be the scale parameter of the random variable $Y$.

Definition The covariation can equivalently be defined as

$$[X_1, X_2]_\alpha = \frac{1}{\alpha} \left. \frac{\partial \sigma^\alpha(\theta_1, \theta_2)}{\partial \theta_1} \right|_{\theta_1 = 0, \, \theta_2 = 1}. \qquad (13)$$

Proof From previous examples we know that

$$\sigma^\alpha(\theta_1, \theta_2) = \int_{S_2} |\theta_1 s_1 + \theta_2 s_2|^\alpha \, \Gamma(ds).$$
Taking the derivative, the rest is easy to show.

Example Let $G$ be a zero-mean Gaussian vector with covariance matrix $R$. For fixed $1 < \alpha < 2$, let $A \sim S_{\alpha/2}\left( (\cos\frac{\pi\alpha}{4})^{2/\alpha}, 1, 0 \right)$ be independent of $G$. Then $X = (A^{1/2} G_1, \ldots, A^{1/2} G_d)$ is sub-Gaussian. Notice that the scale parameter $\sigma(\theta_i, \theta_j)$ of the random variable $Y = \theta_i X_i + \theta_j X_j$ satisfies

$$\sigma^\alpha(\theta_i, \theta_j) = 2^{-\alpha/2} \left( \theta_i^2 R_{ii} + 2 \theta_i \theta_j R_{ij} + \theta_j^2 R_{jj} \right)^{\alpha/2}.$$

Hence the covariation is

$$[X_i, X_j]_\alpha = 2^{-\alpha/2} R_{ij} R_{jj}^{\alpha/2 - 1}. \qquad (14)$$
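Formula (14) follows by applying the derivative definition (13) to the scale parameter above. The computation can be checked deterministically by finite differences; a sketch assuming NumPy, with illustrative $R$ and $\alpha$:

```python
import numpy as np

# Check eq. (14) against the derivative definition (13) for a sub-Gaussian pair:
# sigma^alpha(t1, t2) = 2^{-a/2} (t1^2 R11 + 2 t1 t2 R12 + t2^2 R22)^{a/2},
# and [X1, X2]_a = (1/a) d/dt1 sigma^alpha(t1, t2) evaluated at t1 = 0, t2 = 1.
alpha = 1.7
R = np.array([[2.0, 0.6], [0.6, 1.5]])

def sigma_alpha(t1, t2):
    quad = t1 * t1 * R[0, 0] + 2 * t1 * t2 * R[0, 1] + t2 * t2 * R[1, 1]
    return 2.0 ** (-alpha / 2) * quad ** (alpha / 2)

h = 1e-6
cov_numeric = (sigma_alpha(h, 1.0) - sigma_alpha(-h, 1.0)) / (2 * h * alpha)
cov_closed = 2.0 ** (-alpha / 2) * R[0, 1] * R[1, 1] ** (alpha / 2 - 1)  # eq. (14)
print(cov_numeric, cov_closed)
```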
NOTE: $[X_i, X_j]_\alpha = [X_j, X_i]_\alpha$ if $R_{ii} = R_{jj}$, and $[X_i, X_j]_\alpha = 0$ if $R_{ij} = 0$.

Lemma 5.1 Let $X$ be an S$\alpha$S random vector in $\mathbb{R}^n$ with $\alpha > 1$ and spectral measure $\Gamma_X$, and let $Y = (a, X)$ and $Z = (b, X)$. Then

$$[Y, Z]_\alpha = \int_{S_n} (a, s) \, (b, s)^{\langle \alpha - 1 \rangle} \, \Gamma_X(ds).$$

Corollary 5.2 Let $X$ be an S$\alpha$S random vector with $\alpha > 1$ and spectral measure $\Gamma_X$. Then

$$[X_1, X_2]_\alpha = \int_{S_n} s_1 s_2^{\langle \alpha - 1 \rangle} \, \Gamma_X(ds),$$

and

$$[X_1, X_1]_\alpha = \int_{S_n} |s_1|^\alpha \, \Gamma_X(ds) = \sigma_{X_1}^\alpha,$$

where $\sigma_{X_1}$ is the scale parameter of the S$\alpha$S random variable $X_1$.

Proposition 5.3 (Additivity in the first argument) Suppose $(X_1, X_2, Y)$ are jointly S$\alpha$S; then

$$[X_1 + X_2, Y]_\alpha = [X_1, Y]_\alpha + [X_2, Y]_\alpha.$$

Proposition 5.4 (Scaling) Suppose $(X, Y)$ are jointly S$\alpha$S, and $a, b$ are real numbers; then

$$[aX, bY]_\alpha = a \, b^{\langle \alpha - 1 \rangle} \, [X, Y]_\alpha.$$

NOTE 1: Although the covariation is linear in its first argument, it is in general not linear in its second argument.

NOTE 2: Generally speaking, the covariation is not symmetric in its arguments.

Proposition 5.5 If $X$ and $Y$ are jointly S$\alpha$S and independent, then $[X, Y]_\alpha = 0$.

HINT: Notice that the independence of $X$ and $Y$ implies that the spectral measure $\Gamma$ is discrete and concentrated on the intersection of the axes with the sphere.

NOTE: When $1 < \alpha < 2$, it is possible to have $[X, Y]_\alpha = 0$ while $X$ and $Y$ are dependent. For example, take a sub-Gaussian random vector $X$ with non-degenerate $G_1$ independent of $G_2$, so that $R_{12} = 0$.

Proposition 5.6 Let $(X, Y_1, Y_2)$ be jointly S$\alpha$S, $\alpha > 1$, with $Y_1$ and $Y_2$ independent. Then

$$[X, Y_1 + Y_2]_\alpha = [X, Y_1]_\alpha + [X, Y_2]_\alpha.$$

Lemma 5.7 Let $(X, Y)$ be jointly S$\alpha$S with $\alpha > 1$. Then for all $1 < p < \alpha$,

$$\frac{E \, X Y^{\langle p - 1 \rangle}}{E |Y|^p} = \frac{[X, Y]_\alpha}{\|Y\|_\alpha^\alpha},$$

where $\|Y\|_\alpha$ denotes the scale parameter of the random variable $Y$.

NOTE: $\|Y\|_\alpha$ is also equal to $[Y, Y]_\alpha^{1/\alpha}$, hence it is called the covariation norm of $Y \in S_\alpha$, where $S_\alpha$ is the linear space of jointly S$\alpha$S random variables. The norm is well defined when $\alpha > 1$.

Proposition 5.8 $\|\cdot\|_\alpha$ is a norm on $S_\alpha$.
Convergence in $\|\cdot\|_\alpha$ is equivalent to convergence in probability, and it is also equivalent to convergence in $L_p$ for every $p < \alpha$.

NOTE: Even when $0 < \alpha \leq 1$, where $\|\cdot\|_\alpha$ is no longer a norm, if we continue to use the notation $\|X\|_\alpha = \sigma_X$, the part of this proposition concerning convergence still holds.

Proposition 5.9 Let $(X, Y)$ be jointly S$\alpha$S, $1 < \alpha < 2$. Then

$$|[X, Y]_\alpha| \leq \|X\|_\alpha \, \|Y\|_\alpha^{\alpha - 1}.$$

HINT: Hölder's inequality.
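For a discrete spectral measure, both sides of the bound in Proposition 5.9 can be computed exactly from Corollary 5.2. A deterministic sketch assuming NumPy; the atoms and weights are illustrative:

```python
import numpy as np

# For a discrete spectral measure Gamma on S_2, compute [X1, X2]_alpha and the
# covariation norms from Corollary 5.2, then check the Hoelder-type bound
# |[X1, X2]_a| <= ||X1||_a * ||X2||_a^{a-1} of Proposition 5.9.
alpha = 1.6
s = np.array([[0.8, 0.6], [-0.28, 0.96], [0.0, -1.0], [1.0, 0.0]])  # unit vectors
w = np.array([0.9, 0.4, 0.7, 0.2])                                  # Gamma({s_k})

def signed_pow(a, p):
    # the signed power a^{<p>} = |a|^p sign(a) of eq. (11)
    return np.abs(a) ** p * np.sign(a)

cov_12 = np.sum(w * s[:, 0] * signed_pow(s[:, 1], alpha - 1))    # eq. (12)
norm_1 = np.sum(w * np.abs(s[:, 0]) ** alpha) ** (1 / alpha)     # ||X1||_alpha
norm_2 = np.sum(w * np.abs(s[:, 1]) ** alpha) ** (1 / alpha)     # ||X2||_alpha
print(cov_12, norm_1 * norm_2 ** (alpha - 1))
```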
6 James orthogonality

Recall the Gaussian case, and consider all mean-zero normal random variables. Their collection $L_0^2(\Omega, \mathcal{F}, P)$ is a Hilbert space with inner product $(X, Y) = \mathrm{Cov}(X, Y) = E(XY)$. Two such random variables are independent if and only if they are orthogonal. However, although the norm $\|\cdot\|_\alpha$ is well defined on $S_\alpha$, the covariation is not an inner product, so the notion of orthogonality is not automatically defined. One alternative is to introduce James orthogonality (James, 1947).

Definition Let $E$ be a normed vector space. A vector $x \in E$ is said to be James orthogonal to a vector $y \in E$, written $x \perp_J y$, if for every real $\lambda$,

$$\|x + \lambda y\| \geq \|x\|.$$

NOTE 1: $\perp_J$ is not symmetric.

NOTE 2: If $E$ is a Hilbert space, then $\perp_J$ reduces to the usual orthogonality.

Proposition 6.1 If $X$ and $Y$ are jointly S$\alpha$S with $\alpha > 1$, then

$$[X, Y]_\alpha = 0 \iff Y \perp_J X.$$

Proposition 6.2 Let $1 < \alpha < 2$ and let $S_\alpha$ be a linear space of jointly S$\alpha$S random variables with $\dim S_\alpha \geq 3$. Then the following statements are equivalent:
(a) $S_\alpha$ has the property: for all $X, Y \in S_\alpha$, $[X, Y]_\alpha = 0 \implies [Y, X]_\alpha = 0$.
(b) $S_\alpha$ has the property: for all $X, Y, Z \in S_\alpha$, $[X, Y]_\alpha = 0$ and $[X, Z]_\alpha = 0 \implies [X, Y + Z]_\alpha = 0$.
(c) There is an inner product $(\cdot, \cdot)$ on $S_\alpha$ such that $\|X\|_\alpha = (X, X)^{1/2}$ for all $X \in S_\alpha$.
(d) $S_\alpha$ consists of jointly sub-Gaussian S$\alpha$S random variables.

NOTE: When $\dim S_\alpha = 2$, (b) is in general not equivalent to (a), (c) or (d).

Proposition 6.3 Let $1 < \alpha < 2$ and let $S_\alpha$ be a linear space of jointly S$\alpha$S random variables. Then the following statements are equivalent:
(a) $S_\alpha$ has the property: for all $X, Y \in S_\alpha$, $\|X\|_\alpha = \|Y\|_\alpha \implies [X, Y]_\alpha = [Y, X]_\alpha$.
(b) $S_\alpha$ consists of jointly sub-Gaussian S$\alpha$S random variables.

Proposition 6.4 Let $1 < \alpha \leq 2$ and let $S_\alpha$ be a linear space of jointly S$\alpha$S random variables with $\dim S_\alpha \geq 2$. Then the following statements are equivalent:
(a) $S_\alpha$ has the property: for all $X, Y \in S_\alpha$, $[X, Y]_\alpha = 0 \implies X$ and $Y$ are independent.
(b) $\alpha = 2$, i.e. $S_\alpha$ consists of mean-zero Gaussian random variables.
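In a Hilbert space, $\|x + \lambda y\|^2 = \|x\|^2 + 2\lambda (x, y) + \lambda^2 \|y\|^2$, so the James condition $\|x + \lambda y\| \geq \|x\|$ for all $\lambda$ holds exactly when $(x, y) = 0$, which is NOTE 2 above. A small numerical illustration in Euclidean $\mathbb{R}^2$ (illustrative vectors, assuming NumPy):

```python
import numpy as np

# James orthogonality in Euclidean R^2: x is James-orthogonal to y iff
# ||x + lambda*y|| >= ||x|| for every real lambda; in a Hilbert space this
# holds exactly when the inner product (x, y) is zero.
x = np.array([3.0, 0.0])
y_orth = np.array([0.0, 2.0])   # (x, y_orth) = 0: James-orthogonal
y_skew = np.array([1.0, 1.0])   # (x, y_skew) != 0: not James-orthogonal

lams = np.linspace(-5, 5, 2001)
norms_orth = [np.linalg.norm(x + lam * y_orth) for lam in lams]
norms_skew = [np.linalg.norm(x + lam * y_skew) for lam in lams]
print(min(norms_orth), min(norms_skew), np.linalg.norm(x))
```

For `y_skew`, choosing $\lambda = -(x, y)/\|y\|^2$ shrinks the norm below $\|x\|$, so the condition fails.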
7 Codifference

Definition The codifference of two S$\alpha$S, $0 < \alpha \leq 2$, random variables $X$ and $Y$ equals

$$\tau_{X,Y} = \|X\|_\alpha^\alpha + \|Y\|_\alpha^\alpha - \|X - Y\|_\alpha^\alpha. \qquad (15)$$

NOTE 1: $\tau$ is symmetric in its arguments.

NOTE 2: When $\alpha = 2$, $\tau$ reduces to the usual covariance function.

Proposition 7.1 If $X$ and $Y$ are independent, then $\tau_{X,Y} = 0$. Conversely, if $\tau_{X,Y} = 0$ and $0 < \alpha < 1$, then $X$ and $Y$ are independent.

HINT: If $X$ and $Y$ are independent, then $s_1 s_2 = 0$ $\Gamma$-almost everywhere. Conversely, notice that for $0 < \alpha < 1$ we have the inequality $|s_1 - s_2|^\alpha \leq |s_1|^\alpha + |s_2|^\alpha$, with equality only when $s_1 s_2 = 0$.
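Proposition 7.1 can be illustrated with a discrete spectral measure on $S_2$: the scale parameters entering (15) are $\|X\|_\alpha^\alpha = \int |s_1|^\alpha \Gamma(ds)$, $\|Y\|_\alpha^\alpha = \int |s_2|^\alpha \Gamma(ds)$ and $\|X - Y\|_\alpha^\alpha = \int |s_1 - s_2|^\alpha \Gamma(ds)$, and independence corresponds to $\Gamma$ carried by the axes ($s_1 s_2 = 0$). A deterministic sketch assuming NumPy, with illustrative atoms and weights:

```python
import numpy as np

# Codifference from a discrete spectral measure on S_2:
#   ||X||_a^a = sum w |s1|^a,  ||Y||_a^a = sum w |s2|^a,
#   ||X - Y||_a^a = sum w |s1 - s2|^a,
# and tau = ||X||_a^a + ||Y||_a^a - ||X - Y||_a^a  (eq. (15)).
alpha = 0.8

def codifference(s, w):
    nx = np.sum(w * np.abs(s[:, 0]) ** alpha)
    ny = np.sum(w * np.abs(s[:, 1]) ** alpha)
    nxy = np.sum(w * np.abs(s[:, 0] - s[:, 1]) ** alpha)
    return nx + ny - nxy

# Independent pair: mass only on the axes, so s1 * s2 = 0 at every atom.
s_ind = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
w_ind = np.array([0.5, 0.5, 0.3, 0.3])

# Dependent pair: some mass off the axes.
s_dep = np.array([[0.6, 0.8], [1.0, 0.0], [0.0, -1.0]])
w_dep = np.array([0.7, 0.2, 0.2])

print(codifference(s_ind, w_ind), codifference(s_dep, w_dep))
```

The independent configuration gives $\tau = 0$ exactly, while the off-axis atom makes $\tau$ strictly positive, in line with the hint's strict subadditivity for $0 < \alpha < 1$.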