Optimal Compressor Functions for Multidimensional Companding of Memoryless Sources

Peter W. Moo and David L. Neuhoff
Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor, MI 48109-2122
{pwm, neuhoff}@eecs.umich.edu

Submitted to IEEE Transactions on Information Theory, April 20, 1998

Abstract: We determine the asymptotically optimal compressor function for a multidimensional compander when the source is memoryless. Under certain technical assumptions on the compressor function, it is shown that the best k-dimensional compressor function consists of the best scalar compressor functions for each component of the source vector. The mean-squared error of optimal multidimensional companding is compared to that of optimal vector quantization and optimal scalar companding.

Index terms: multidimensional companding, optimal compressor functions, lattice quantization, asymptotic quantization theory, memoryless sources.

This work was supported by an NSF Graduate Fellowship and by NSF Grant NCR. Part of this work was presented at the 1997 IEEE International Symposium on Information Theory in Ulm.

I Introduction

Multidimensional companding is a type of structured vector quantization (VQ) with low complexity. A compander maps a k-dimensional source vector X to a k-dimensional vector Y in some rectangular support region, using a continuous compressor function f. The transformed source vector Y is quantized using a lattice quantizer to Ŷ, which is then mapped by f^{-1} to X̂, which is the reproduction of X. In this work, only fixed-rate quantization (i.e. no entropy coding) is considered. The lattice quantizer has as its codevectors all points from some infinite lattice Λ that are contained in the rectangle. When given a compressor function and a desired number of codevectors N, we assume that the lattice is maximally scaled so that the number of lattice points in the corresponding support rectangle is at most N. We also assume that the partition of the lattice quantizer is generated by tessellating some fundamental cell T_0 at points of the lattice. (We will not be concerned with the manner in which points outside the support rectangle are assigned to cells.) The resulting compander is a VQ whose codevectors and partition are those of the lattice quantizer transformed by f^{-1}.

In pioneering work, Bennett [1] argued that any scalar quantizer (e.g. an optimal one) can be implemented by companding and heuristically derived an asymptotic expression for the mean squared error (MSE) of scalar companding. Specifically, when N is large, companding MSE is given by

D(N) = \frac{1}{12 N^2} \int \frac{p(x)}{f'(x)^2} \, dx,

where p(x) is the probability density of the source and f'(x) is the derivative of the compressor function. From Panter and Dite [12] one concludes that the compressor function that minimizes MSE is

f^*(x) = c^{-1} \int_{-\infty}^{x} p(y)^{1/3} \, dy,

where c makes p(x)^{1/3}/c integrate to one, i.e. c = \int p(y)^{1/3} dy. Using the optimal compressor in Bennett's formula shows that asymptotically optimal scalar quantizers have MSE given by

D^*(N) = \frac{1}{12 N^2} \, \|p\|_{1/3},

where \|p\|_{1/3} = \left( \int p(x)^{1/3} \, dx \right)^{3}.
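These scalar formulas are easy to exercise numerically. The following minimal sketch (my own illustration, not code from the paper; the unit-variance Gaussian density, the grid, and N are assumed choices) evaluates ||p||_{1/3} and the asymptotic MSE D*(N) = ||p||_{1/3}/(12N²); for the Gaussian, ||p||_{1/3} = 6√3·π, so D*(N) ≈ 2.72/N².

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 200001)            # grid covering essentially all of the mass
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # unit-variance Gaussian density p(x)

c = np.sum(p ** (1.0 / 3.0)) * dx             # c = int p(y)^{1/3} dy
norm_13 = c ** 3                              # ||p||_{1/3} = c^3

f_star = np.cumsum(p ** (1.0 / 3.0)) * dx / c # optimal compressor f*(x), mapping onto (0, 1)

N = 64
print("||p||_1/3 =", norm_13, "(closed form 6*sqrt(3)*pi =", 6 * np.sqrt(3) * np.pi, ")")
print("D*(N) =", norm_13 / (12 * N**2))       # Panter-Dite asymptotic MSE
```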

For k-dimensional companding, Bucklew [2] extended Bennett's scalar result to show that when N is large and the fundamental cell T_0 is white, in the sense that the components of a random vector uniformly distributed over T_0 are white, the compander MSE D(f, N) is approximately given by D(f, N) ≈ D_B(f, N), where

D_B(f, N) = \frac{1}{N^{2/k}} \, M(T_0) \, G(f)    (1)

and

G(f) = v(f(ℝ^k))^{2/k} \, \frac{1}{k} \int \|F^{-1}(x)\|^2 \, p(x) \, dx,    (2)

v denotes k-dimensional volume, f(ℝ^k) is the image of the compressor function f (in [2], G(f) did not contain v(f(ℝ^k))^{2/k}, because it was assumed that f(ℝ^k) = (0,1)^k), T_0 is the fundamental cell of the lattice [4], F(x) is the derivative matrix of the compressor function f(x), F^{-1}(x) is the inverse of the matrix F(x), and ||·|| is the l_2 matrix norm, that is, ||A||^2 is the sum of the squares of the elements of A. M(T_0) is the normalized moment of inertia (NMI) of T_0, given by

M(T_0) = \frac{1}{k} \, v(T_0)^{-(1+2/k)} \int_{T_0} \|x\|^2 \, dx.

Bucklew also showed that an optimal two-dimensional vector quantizer for a source with a circularly symmetric density cannot be implemented using a companding structure and a differentiable compressor function. See also Gersho [5]. Subsequently, Bucklew [3] showed that asymptotically optimal vector quantizers, for vector dimension three and greater, cannot be implemented using a companding structure, except for a very restricted class of source densities.

An important open question remains: What is the best multidimensional compressor function for an arbitrary source? This is a difficult unsolved problem. As a first step, in this paper we find the asymptotically optimal compressor function for a memoryless source, under certain technical conditions on the compressor function. The result agrees with intuition and shows that the best k-dimensional compressor function consists of the best scalar compressor functions for each component of the source vector. Though we suspect that this result holds over a broader class of compressors, we were not able to show this. We also compare the MSE of optimal multidimensional companding to that of optimal VQ and optimal scalar companding. For example, we show that companding suffers the same cell shape, point density and oblongitis losses as scalar quantization, but recovers the cubic loss. The results in this paper can be easily generalized to pth power distortion measures.
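As a quick check of the NMI definition above, the following sketch (my own; a unit cube cell centered at the origin is assumed) estimates M(T_0) by Monte Carlo. For the cube the definition gives 1/12 in every dimension, the same constant that appears in Bennett's scalar formula.

```python
import numpy as np

rng = np.random.default_rng(0)
for k in (1, 2, 3, 8):
    u = rng.uniform(-0.5, 0.5, size=(200000, k))    # uniform samples over the unit cube T0
    second_moment = np.mean(np.sum(u**2, axis=1))   # (1/v(T0)) * int_{T0} ||x||^2 dx, with v(T0) = 1
    print(k, second_moment / k)                     # M(T0); all values are close to 1/12
```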

An outline of the paper is the following. The main result is stated and discussed in Section II. Optimal companding is compared to optimal VQ and scalar quantization in Section III. A derivation of the main result is presented in Section IV, with some details given in the Appendix.

II Main Result

Let the source vector be an independent random vector X with probability density

p(x) = \prod_{i=1}^{k} p_i(x_i), with p_i continuous, i = 1, \ldots, k.    (3)

We consider a multidimensional compander that consists of a compressor function f, a lattice quantizer constructed from a lattice Λ, and the inverse function f^{-1}. The lattice is assumed to have a fundamental cell T_0 that is white. (In any dimension, the integer lattice and the optimal lattice are white [15].) The support region of the lattice quantizer is a rectangle of the form R(a, b) := (a_1, b_1) × \cdots × (a_k, b_k) for some a, b ∈ ℝ^k. Without loss of generality, we assume a_i < b_i for all i.

Definition 1: For a, b ∈ ℝ^k, a function f : ℝ^k → ℝ^k is boundary limited to R(a, b) if for each i = 1, \ldots, k, f_i(x) → a_i when x_i → -∞ for any fixed x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k, and f_i(x) → b_i when x_i → +∞ for any fixed x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k.

In this paper, we only consider compressor functions f in a class C, as defined below.

Definition 2: The class C is the set of all mappings f : ℝ^k → ℝ^k that are onto and boundary limited to some rectangle R(a, b), and whose derivative matrix F(x) is continuous in x and positive definite for all x. (A positive definite matrix with real elements is symmetric; cf. [9].)

We are interested in finding the compressor function that minimizes G(f) over the class C and the resulting minimum MSE; that is,

f^* = \arg\min_{f ∈ C} G(f)    (4)

and D_B^*(N) = D_B(f^*, N).
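For concreteness, here is a hedged example of a member of the class C (my example, not one used in the paper): the component-wise map f_i(x) = tanh(x_i) is onto and boundary limited to R(a, b) with a_i = -1 and b_i = 1, and its derivative matrix F(x) = diag(1 - tanh²(x_1), ..., 1 - tanh²(x_k)) is continuous and positive definite. The sketch checks positive definiteness numerically at a few random points.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3
for x in rng.standard_normal((5, k)) * 3.0:
    F = np.diag(1.0 - np.tanh(x) ** 2)        # derivative matrix of f(x) = (tanh(x_1), ..., tanh(x_k))
    assert np.all(np.linalg.eigvalsh(F) > 0)  # positive definite at every sampled point
print("F(x) positive definite at all sampled points")
```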

The following is the main result of this paper.

Proposition 1: Let the source be memoryless, as given by (3). Then over the class C of compressor functions, there exists a minimum of G(f), and a compressor function achieves the minimum if and only if it is a memoryless compressor of the form f^*(x) = (f_1^*(x_1), f_2^*(x_2), \ldots, f_k^*(x_k))^T, where

f_i^*(x_i) = c \, \|p_i\|_{1/3}^{1/6} \int_{-\infty}^{x_i} p_i(y)^{1/3} \, dy + a_i, \qquad i = 1, \ldots, k,

for some c ∈ ℝ and a ∈ ℝ^k. The image of the resulting compressor function is R(a, b^*), where

b_i^* = a_i + c \, \|p_i\|_{1/3}^{1/2}, \qquad i = 1, \ldots, k.

The resulting MSE is, asymptotically,

D_B^*(N) = \frac{1}{N^{2/k}} \, M(T_0) \left( \prod_{j=1}^{k} \|p_j\|_{1/3} \right)^{1/k},    (5)

where T_0 is the white fundamental cell of the lattice. The proof of this proposition is given in Section IV.
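Proposition 1 can be probed by simulation when Λ is the integer lattice, in which case the compander reduces to per-component scalar companding. The sketch below is my own construction (the IID N(0,1) source, the grid, and n are assumptions for illustration): it builds a k = 2 compander with n levels per axis (N = n² codevectors) and compares the empirical per-component MSE against D_B^*(N) = N^{-2/k} M(T_0) (∏_j ||p_j||_{1/3})^{1/k}, with M(T_0) = 1/12 for the cubic cell of the integer lattice. For moderate n the two should agree closely.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-8, 8, 200001)
dx = grid[1] - grid[0]
p13 = np.exp(-grid**2 / 2) ** (1/3) / (2 * np.pi) ** (1/6)   # p(x)^{1/3} for N(0,1)
F = np.cumsum(p13) * dx
c = F[-1]                        # int p^{1/3}; ||p||_{1/3} = c**3
f = F / c                        # optimal scalar compressor, mapping onto (0, 1)

def compand(x_component, n):
    y = np.interp(x_component, grid, f)          # compress
    yhat = (np.floor(y * n) + 0.5) / n           # uniform (integer-lattice) quantizer on (0, 1)
    yhat = np.clip(yhat, 0.5 / n, 1 - 0.5 / n)
    return np.interp(yhat, f, grid)              # expand through f^{-1}

n = 32                                           # levels per axis, so N = n**2 codevectors
X = rng.standard_normal((100000, 2))
Xhat = np.column_stack([compand(X[:, i], n) for i in range(2)])
print("empirical MSE:", np.mean((X - Xhat) ** 2))
print("theory D_B*(N):", c**3 / (12 * n**2))
```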

Remarks:

1. We do not assume that the compressor functions in C are one-to-one. However, Proposition 1 demonstrates that the optimal compressor functions are one-to-one.

2. The optimal compressor function operates independently on each component of the source vector with the optimal compressor function for scalar companding.

3. We have restricted attention to compressor functions with a rectangular image. It is possible that a compressor function with a non-rectangular image could yield lower MSE than f^*. However, the resulting optimal image might have a very complex shape, making the implementation of the lattice quantizer very complex. This would defeat the motivating idea behind companding, namely that it have low complexity.

4. If the norms ||p_i||_{1/3} are equal, then the optimal image is a cube. If not, then the optimal image is a rectangle, which induces a kind of rate allocation among the components of X, with X_i receiving bits in proportion to log_2(b_i^* - a_i). This is easiest to see when Λ = Z^k, the k-dimensional integer lattice; for in this case, companding is simply product quantization, and the optimal compander implements an optimal product quantizer, which, as is well known, assigns approximately

N_i = N^{1/k} \, \frac{\|p_i\|_{1/3}^{1/2}}{\left( \prod_{j=1}^{k} \|p_j\|_{1/3}^{1/2} \right)^{1/k}}

levels to X_i. But it also holds for any other white lattice, as does the fact that the resulting distortions are the same for each component, which is well known for optimal product quantizers.

5. If f(ℝ^k) were restricted to be a cube, then one may see from the proof of Proposition 1 that the optimal compressor function f̃ would have component functions

f̃_i(x_i) := \frac{c}{\|p_i\|_{1/3}^{1/3}} \int_{-\infty}^{x_i} p_i(y)^{1/3} \, dy + a_i,

where c ∈ ℝ and a ∈ ℝ^k. In this case, the loss in MSE of companding to a cube instead of to the optimal rectangle is given by

\frac{D_B(f̃, N)}{D_B(f^*, N)} = \frac{\frac{1}{k} \sum_{j=1}^{k} \|p_j\|_{1/3}}{\left( \prod_{j=1}^{k} \|p_j\|_{1/3} \right)^{1/k}} \ge 1,

where the inequality follows from the arithmetic-geometric mean inequality.
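As a small numerical illustration of Remark 5 (my example; the variances are arbitrary), note that for an independent Gaussian component with variance σ_i² one has ||p_i||_{1/3} = 6√3·π·σ_i², so the cube-versus-rectangle loss reduces to the ratio of the arithmetic and geometric means of the component variances:

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 9.0])               # assumed component variances
norm13 = 6 * np.sqrt(3) * np.pi * sigma2         # ||p_i||_{1/3} for Gaussian components
loss = norm13.mean() / np.prod(norm13) ** (1 / len(norm13))
print("D_B(f~,N)/D_B(f*,N) =", loss, "=", 10 * np.log10(loss), "dB")
```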

III Comparison to Optimal Vector Quantization

It is interesting to compare the MSE of an arbitrary compander, given by (1), to the asymptotic form of δ_k(N), the MSE of optimal vector quantization. Zador [14] and Gersho [5] showed that for large N,

δ_k(N) = m_k \, \frac{1}{N^{2/k}} \, \|p\|_{k/(k+2)},

under the assumption that optimal VQ has cells that are congruent to the tessellating polytope with least NMI, m_k, in k dimensions. In order to facilitate this comparison, we provide a point density, cell-shape analysis in the style of [11]. Recall from [11] that Bennett's Integral gives the MSE of a VQ with many points in terms of two key characteristics, the point density λ(x) and the inertial profile m(x). Specifically, Bennett's Integral is given by

B(λ, m) = \frac{1}{N^{2/k}} \int \frac{m(x)}{λ(x)^{2/k}} \, p(x) \, dx.

A VQ that achieves δ_k(N) has optimal point density λ^*(x) = c \, p(x)^{k/(k+2)} and optimal inertial profile m^*(x) = m_k. (Constants c in this section make their respective functions integrate to one.) Let the point density and inertial profile of companding be denoted by λ_C(x) and m_C(x), respectively. Then the loss of an arbitrary multidimensional compander relative to optimal VQ is

L_{comp} = \frac{D_B(f, N)}{δ_k(N)} = \frac{B(λ_C, m_C)}{B(λ^*, m^*)} = \frac{B(λ_C, m_C)}{B(λ_C, m^*)} \cdot \frac{B(λ_C, m^*)}{B(λ^*, m^*)} = L_{ce} \cdot L_{pt},

where L_{ce} = B(λ_C, m_C)/B(λ_C, m^*) is the loss due to suboptimal inertial profile (or cell shape) and L_{pt} = B(λ_C, m^*)/B(λ^*, m^*) is the loss due to suboptimal point density. It is easily shown that the point density and inertial profile of companding are given by

λ_C(x) = c \, |\det F(x)|, \qquad m_C(x) ≈ M(T_0) \, \frac{1}{k} \|F^{-1}(x)\|^2 (\det F(x))^{2/k} ≥ M(T_0),    (6)

where the approximate equality in (6) assumes, as usual, that the fundamental cell of the lattice is white, and where the inequality, which is proved in Appendix A, holds with equality if and only if F(x) is a scaling of an orthogonal matrix. Eq. (6) reflects the fact that the cells of the compander are stretched versions of T_0, except where and only where F(x) is a scaling of an orthogonal transformation. We can isolate the effect of this stretching, or oblongation, by decomposing the cell shape loss into L_{ce} = L_{ob} · L_{ce,T_0}, where L_{ob} is the oblongitis loss [11], and L_{ce,T_0} = M(T_0)/m_k is the cell shape loss of the lattice, which is independent of the compressor function f. Therefore, the loss of arbitrary companding relative to optimal VQ is expressed as L_{comp} = L_{pt} · L_{ob} · L_{ce,T_0}. Note that companding achieves L_{ce,T_0} = 1 if and only if it uses a lattice such that T_0 has NMI equal to m_k.

One would like to choose the compressor f to minimize L_{pt} L_{ob}. On one hand, companding can achieve the optimal point density if the component functions of the compressor function are given by

f_i(x_i) = c^{-1} \int_{-\infty}^{x_i} p_i(y)^{k/(k+2)} \, dy    (7)

for i = 1, \ldots, k. In this case L_{pt} = 1, but it can be shown that this causes L_{ob} = ∞. In other words, when the compander operates with a compressor function specified by (7), MSE does not decrease as N^{-2/k}. On the other hand, a compander achieves L_{ob} = 1 if for all x, \frac{1}{k}\|F^{-1}(x)\|^2 (\det F(x))^{2/k} = 1, which is approximately true if and only if the compressor function is a scaling of an orthogonal transformation on some large probability set (see Appendix A); in this case L_{pt} > 1.
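The inequality in (6), proved in Appendix A, is easy to spot-check numerically. The following sketch (my own) evaluates the factor (1/k)||F^{-1}||²(det F)^{2/k} for a generic positive definite matrix and for a scaled orthogonal matrix; the former exceeds 1, while the latter equals 1.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4

def oblongation(F):
    Finv = np.linalg.inv(F)
    # abs() only so the check also runs if det is negative (e.g. an orthogonal Q with det Q = -1)
    return np.sum(Finv ** 2) * abs(np.linalg.det(F)) ** (2.0 / k) / k

A = rng.standard_normal((k, k))
F_pd = A @ A.T + k * np.eye(k)                      # a generic positive definite matrix
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))    # an orthogonal matrix
print("generic positive definite:", oblongation(F_pd))    # > 1
print("scaled orthogonal:        ", oblongation(2.5 * Q)) # = 1
```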

As a result, we see that companding cannot simultaneously achieve L_{pt} = 1 and L_{ob} = 1, and therefore cannot achieve the performance of optimal VQ, which agrees with Bucklew's analysis [2]. In fact the best compressor f^*, presented in Proposition 1, is a compromise which yields a compander with L_{pt} > 1 and L_{ob} > 1. It is immediately seen that companding makes the same tradeoff as optimal product VQ [11] (it generates the same point density), and therefore has the same L_{pt} and L_{ob}. Companding's sole advantage over product VQ is its ability to choose the lattice that achieves L_{ce,T_0} = 1, whereas product VQ must incur cell shape loss due to cubic cells, that is, L_{ce,cube} > 1. Therefore, optimal companding suffers the same point density and oblongitis losses as optimal product VQ while recovering the cubic loss.

Table 1 summarizes the losses of optimal companding and optimal product VQ, relative to optimal VQ, for an IID Gaussian source. Note that, in dB, L_{comp} = L_{pt} + L_{ob}, which is the shape loss of optimal scalar quantization relative to optimal VQ, defined by Lookabaugh and Gray [10] (actually, [10] define the inverse of this to be the shape advantage of vector quantization over scalar quantization). On the other hand, optimal product VQ has loss L_{prod} = L_{pt} + L_{ob} + L_{ce,cube}, which shows that companding achieves the space-filling advantage of vector quantization over scalar [10], or equivalently, the inverse of the cubic loss of optimal product quantization relative to optimal vector quantization [11].

[Table 1: Losses of optimal companding, L_comp, and optimal product VQ, L_prod, relative to optimal VQ, for an IID Gaussian source. Columns: k, L_{ce,cube} (dB), L_ce (dB), L_pt (dB), L_comp (dB), L_prod (dB).]

IV Derivation of Main Result

In this section we find the minimum of G(f) over C, as defined in Section II. Our basic approach is to use variational calculus arguments to show that f^* is the optimal compressor function. It is a well known fact that a stationary point of a convex functional on a convex set is a minimizing function. Upon initial inspection of the companding problem, it is evident that G(f) is not a convex functional and that C is not a convex set (due to the onto assumption), so that we cannot apply the above fact directly. However, we will consider certain subclasses of C and decompose G(f) into the product of two functionals, one of which is strictly convex on each subclass and the other of which is constant on each subclass. Though these subclasses are not convex, it will nevertheless be possible to minimize G(f) over them using the above fact. We then perform an auxiliary minimization that leads to the main result.

We have defined C so that we can use the above approach for optimization. Continuity of the derivatives and the boundary limited assumption are needed to show that a function is a stationary point. The assumption that f be onto R(a, b) is needed to establish the aforementioned product decomposition of G(f). Finally, we require that the derivative matrix of f be positive definite to ensure the convexity of H(f) and so that the appropriately chosen subclass of C is convex.

As an aside, we note that in the scalar case, Gish and Pierce [7] derived the optimal compressor function using variational techniques, and subsequently Gray and Gray [8] presented a different proof using Hölder's Inequality. However, we have not been able to develop a proof using Hölder's Inequality for the multidimensional case.

To begin the derivation of the main result, we define the subclasses of C as follows.

Definition 3: For a, b ∈ ℝ^k, the class C_{ab} is the subset of C such that f is onto and boundary limited to R(a, b).

We wish to minimize G(f) over C and let G^* denote the resulting minimum value. Then

G^* = \min_{f ∈ C} G(f) = \min_{a, b} G^*_{ab}, \qquad \text{where} \qquad G^*_{ab} = \min_{f ∈ C_{ab}} G(f).

It will be shown that the minima G^* and G^*_{ab} exist. We decompose G(f) into G(f) = V(f) H(f), where

V(f) = v(f(ℝ^k))^{2/k} = \left( \int_{f(ℝ^k)} dy \right)^{2/k} = \left( \int_{ℝ^k} \det F(x) \, dx \right)^{2/k},

H(f) = \int h(f, F, x) \, dx, \qquad h(f, F, x) = \frac{1}{k} \|F^{-1}(x)\|^2 p(x).

To find G^*, we first find G^*_{ab} for arbitrary a, b and then perform the auxiliary minimization of G^*_{ab} over the choice of a, b. Since V(f) = \prod_{i=1}^{k} (b_i - a_i)^{2/k} for all f ∈ C_{ab}, we have

G^*_{ab} = \prod_{i=1}^{k} (b_i - a_i)^{2/k} \, \min_{f ∈ C_{ab}} H(f),

and the minimum will be shown to exist. Therefore, it remains to minimize H(f) on C_{ab}. Though C_{ab} is not convex (due to the onto assumption), relaxing the onto assumption leads to a larger class C̃_{ab} that is convex, and fortunately, such that the compressor that minimizes H(f) over this larger class is in C_{ab}.

Definition 4: For a, b ∈ ℝ^k, the class C̃_{ab} is the set of all mappings f : ℝ^k → ℝ^k that are boundary limited to R(a, b) and whose derivative matrix F(x) is continuous in x and positive definite for all x.

We see immediately that C_{ab} ⊂ C̃_{ab}. And it is easy to check that C̃_{ab} is convex. Lemma C-1 shows that H(f) is strictly convex on C̃_{ab} and Lemma C-2 shows that f^*_{ab} := [f^*_{ab,1}, \ldots, f^*_{ab,k}]^T, where

f^*_{ab,i}(x_i) := \frac{b_i - a_i}{\|p_i\|_{1/3}^{1/3}} \int_{-\infty}^{x_i} p_i(y)^{1/3} \, dy + a_i,

is a stationary point of H(f) on C̃_{ab}. (The definitions of convexity and stationary point are given in Appendix B.) Since H(f) is strictly convex, we see that f^*_{ab} uniquely minimizes H(f) on C̃_{ab}. Moreover, since f^*_{ab} ∈ C_{ab} and C_{ab} ⊂ C̃_{ab}, we see that f^*_{ab} uniquely minimizes H(f) on C_{ab} as well. It follows immediately that f^*_{ab} also uniquely minimizes G(f) on C_{ab}.
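The per-component structure of f^*_{ab} can be checked numerically. For a memoryless compressor with diagonal derivative matrix, H(f) splits into a sum of terms ∫ p_i(x)/f_i'(x)² dx, each minimized subject to ∫ f_i'(x) dx = b_i − a_i. The sketch below (my own; a unit-variance Gaussian marginal and the family of slopes proportional to p^α are assumptions for illustration) shows that α = 1/3 gives the smallest value, matching f^*_{ab}.

```python
import numpy as np

x = np.linspace(-10, 10, 400001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # Gaussian marginal p_i (assumed)
width = 1.0                                      # b_i - a_i (any fixed value)

def component_cost(alpha):
    slope = width * p**alpha / (np.sum(p**alpha) * dx)   # f_i' proportional to p^alpha, integrating to width
    return np.sum(p / slope**2) * dx                     # this component's contribution to H(f)

for alpha in (0.25, 1/3, 0.40, 0.45):
    print(f"alpha = {alpha:.3f}   cost = {component_cost(alpha):.3f}")
# Minimum at alpha = 1/3, where the cost equals ||p_i||_{1/3} / width**2.
```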

Substituting f^*_{ab} into G(f) yields

G^*_{ab} = \prod_{i=1}^{k} (b_i - a_i)^{2/k} \cdot \frac{1}{k} \sum_{i=1}^{k} \frac{\|p_i\|_{1/3}}{(b_i - a_i)^2}.

To minimize G^*_{ab} over the choice of a, b, we apply the arithmetic-geometric mean inequality. Specifically, for any a, b,

Ĝ(a, b) = \prod_{i=1}^{k} (b_i - a_i)^{2/k} \cdot \frac{1}{k} \sum_{i=1}^{k} \frac{\|p_i\|_{1/3}}{(b_i - a_i)^2}
\ge \prod_{i=1}^{k} (b_i - a_i)^{2/k} \cdot \prod_{i=1}^{k} \left( \frac{\|p_i\|_{1/3}}{(b_i - a_i)^2} \right)^{1/k}
= \left( \prod_{j=1}^{k} \|p_j\|_{1/3} \right)^{1/k},

with equality if and only if \|p_i\|_{1/3}/(b_i - a_i)^2 is the same for all i, or equivalently, if there exists a constant c such that

b_i - a_i = c \, \|p_i\|_{1/3}^{1/2}, \qquad i = 1, \ldots, k.

It follows then that

G^* = \left( \prod_{j=1}^{k} \|p_j\|_{1/3} \right)^{1/k},

and that the compressor functions that achieve G^* have the form f^*(x) = [f_1^*(x_1) \; f_2^*(x_2) \; \cdots \; f_k^*(x_k)]^T, where

f_i^*(x_i) = c \, \|p_i\|_{1/3}^{1/6} \int_{-\infty}^{x_i} p_i(y)^{1/3} \, dy + a_i, \qquad i = 1, \ldots, k,

where c ∈ ℝ and a ∈ ℝ^k are arbitrary. The image and MSE formulas stated in the Proposition follow directly.

Appendices

A Inertial Profile Inequality

In this appendix, we prove the lower bound on the inertial profile of companding given in (6). Expressing the inertial profile as a function of the eigenvalues λ_1(x), λ_2(x), \ldots, λ_k(x) of F(x), we have

m(x) = M(T_0) \, \frac{1}{k} \|F^{-1}(x)\|^2 (\det F(x))^{2/k}
= M(T_0) \left( \frac{1}{k} \sum_{i=1}^{k} λ_i(x)^{-2} \right) \left( \prod_{i=1}^{k} λ_i(x) \right)^{2/k}
\ge M(T_0) \left( \prod_{i=1}^{k} λ_i(x)^{-2} \right)^{1/k} \left( \prod_{i=1}^{k} λ_i(x) \right)^{2/k} = M(T_0),

where it is easily seen that \|F^{-1}(x)\|^2 = \mathrm{tr}[F^{-1}(x)^T F^{-1}(x)] = \sum_{i=1}^{k} λ_i(x)^{-2}, and where the inequality follows from the arithmetic-geometric mean inequality.

We have equality if and only if all λ_i(x)^2 are equal for all x, which occurs when and only when F(x) is a scaling of an orthogonal transformation.

B Definitions

This appendix introduces certain definitions and facts that will be needed in the sequel. Let 𝒢 denote a linear space of continuous functions, and let ℱ denote a convex subset of 𝒢; that is, ε f_1 + (1 - ε) f_2 ∈ ℱ whenever f_1, f_2 ∈ ℱ and 0 ≤ ε ≤ 1. Let J be a functional defined on ℱ. For any f_1, f_2 ∈ ℱ, the first variation of J at f_1 in the direction towards f_2 is given by

δJ(f_1; f_2 - f_1) := \frac{d}{dε} J(f_1 + ε (f_2 - f_1)) \Big|_{ε=0},

assuming that the derivative exists at ε = 0. From now on, we assume that δJ(f_1; f_2 - f_1) is well defined for all f_1, f_2 ∈ ℱ. A function f_0 ∈ ℱ is said to be a stationary point of J on ℱ if

δJ(f_0; f - f_0) = 0, \qquad \text{for all } f ∈ ℱ.

If J has a minimum point on ℱ, then it is either a stationary point or a point in the boundary of ℱ. The functional J is said to be convex on the convex set ℱ if

J(ε f_1 + (1 - ε) f_2) ≤ ε J(f_1) + (1 - ε) J(f_2)

whenever f_1, f_2 ∈ ℱ and 0 ≤ ε ≤ 1. The functional is strictly convex if the inequality is strict whenever f_1 ≠ f_2 and 0 < ε < 1. Equivalently [13], J is convex if

J(f_2) - J(f_1) ≥ δJ(f_1; f_2 - f_1), \qquad \text{for all } f_1, f_2 ∈ ℱ,

and strictly convex if the inequality is strict whenever f_2 ≠ f_1. If f^* is a stationary point of a convex functional J, then f^* minimizes J. Furthermore, f^* is the unique minimizer of J on ℱ if J is strictly convex.

C Key Lemmas

This appendix proves the two key lemmas needed in the proof of Proposition 1 in Section IV. Specifically, we will show that H(f) is strictly convex on C̃_{ab} and that f^*_{ab} is a stationary point of H(f) on C̃_{ab}.

We will need to use the definitions and facts of the previous appendix. Throughout, we assume C is the class defined in Section II. One may straightforwardly check that δH(f_1; f_2 - f_1) is well defined for all f_1, f_2 ∈ C.

Lemma C-1: The functional H(f) is strictly convex on C̃_{ab}.

Proof: By the condition for convexity given in the previous appendix, it suffices to show that for all f_1, f_2 ∈ C̃_{ab}, the functional H(f) satisfies

H(f_2) - H(f_1) ≥ δH(f_1; f_2 - f_1)    (C-1)

with equality if and only if f_1(x) = f_2(x) for all x. Let F_1(x) and F_2(x) be the derivative matrices of f_1 and f_2. By the definitions of H and δH, the above is equivalent to

\int \frac{1}{k} \left( \|F_2^{-1}(x)\|^2 - \|F_1^{-1}(x)\|^2 \right) p(x) \, dx
\ge \int \frac{1}{k} \, \frac{d}{dε} \left\| \left( F_1(x) + ε (F_2(x) - F_1(x)) \right)^{-1} \right\|^2 \Big|_{ε=0} p(x) \, dx.

Lemma C-3, given later, shows that the function \|A^{-1}\|^2 is strictly convex on the space of positive definite matrices A. By the definition of C̃_{ab}, it follows from such convexity that

\|F_2^{-1}(x)\|^2 - \|F_1^{-1}(x)\|^2 \ge \frac{d}{dε} \left\| \left( F_1(x) + ε (F_2(x) - F_1(x)) \right)^{-1} \right\|^2 \Big|_{ε=0}

for all x, with equality for some x if and only if F_1(x) = F_2(x). It now follows directly from this that (C-1) holds, with equality if and only if F_1(x) = F_2(x) for all x, which in turn is equivalent to f_1(x) = f_2(x) for all x, because compressors in C̃_{ab} are continuous and boundary limited to R(a, b). (Since the f's (and F's) are continuous, if they are equal almost everywhere, they are equal everywhere.) This completes the proof of the lemma. □

Lemma C-2: f^*_{ab} is a stationary point of H(f) on C̃_{ab}.

Proof: It is easily verified that f^*_{ab} ∈ C̃_{ab}. To show that f^*_{ab} is a stationary point of H(f) on C̃_{ab}, we will show that

δH(f^*_{ab}; f - f^*_{ab}) = 0, \qquad \text{for all } f ∈ C̃_{ab}.    (C-2)

Fix f ∈ C̃_{ab}, define g = f - f^*_{ab}, and let G be the derivative matrix of g. Using a Taylor series expansion,

\frac{d}{dε} H(f^*_{ab} + ε g) \Big|_{ε=0} = \frac{d}{dε} \int h(f^*_{ab} + ε g, F^*_{ab} + ε G, x) \, dx \Big|_{ε=0}
= \frac{d}{dε} \int \left[ h(f^*_{ab}, F^*_{ab}, x) + ε \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{\partial}{\partial F_{ij}} h(f^*_{ab}, F^*_{ab}, x) \, G_{ij}(x) + o(ε) \right] dx \Big|_{ε=0}
= \int \sum_{i=1}^{k} \sum_{j=1}^{k} \frac{\partial}{\partial F_{ij}} h(f^*_{ab}, F^*_{ab}, x) \, G_{ij}(x) \, dx.

In order to show (C-2), we will show that for i = 1, \ldots, k and j = 1, \ldots, k,

\int \frac{\partial}{\partial F_{ij}} h(f^*_{ab}, F^*_{ab}, x) \, G_{ij}(x) \, dx = 0    (C-3)

for all f such that f ∈ C̃_{ab}, where g = f - f^*_{ab}.

First consider (C-3) for i ≠ j. From the definition of h and the fact that F^*_{ab} is diagonal, the derivative of h reduces to

\frac{\partial}{\partial F_{ij}} h(f^*_{ab}, F^*_{ab}, x) = \frac{1}{k} p(x) \frac{\partial}{\partial F_{ij}} \|(F^*_{ab})^{-1}(x)\|^2 = 0, \qquad \text{for all } x.

Next consider (C-3) for i = j. Using the formula for f^*_{ab}, we find

\frac{\partial}{\partial F_{ii}} h(f^*_{ab}, F^*_{ab}, x) = \frac{1}{k} p(x) \frac{\partial}{\partial F_{ii}} \|(F^*_{ab})^{-1}(x)\|^2 = -\frac{2}{k c_i^3} \prod_{m=1, m \ne i}^{k} p_m(x_m),

where c_i := (b_i - a_i)/\|p_i\|_{1/3}^{1/3}, so that F^*_{ab,ii}(x) = c_i \, p_i(x_i)^{1/3}. Therefore,

\int \frac{\partial}{\partial F_{ii}} h(f^*_{ab}, F^*_{ab}, x) \, G_{ii}(x) \, dx
= -\frac{2}{k c_i^3} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{m=1, m \ne i}^{k} p_m(x_m) \left( \int_{-\infty}^{\infty} G_{ii}(x) \, dx_i \right) \prod_{m=1, m \ne i}^{k} dx_m = 0,

because

\int_{-\infty}^{\infty} G_{ii}(x) \, dx_i = \lim_{x_i \to \infty} g_i(x) - \lim_{x_i \to -\infty} g_i(x) = 0

for all x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k, by the fundamental theorem of calculus and the fact that, since g is the difference between two functions that are boundary limited to R(a, b), g_i(x) → 0 when x_i → -∞ for any fixed x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k, and g_i(x) → 0 when x_i → +∞ for any fixed x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_k. Note that this argument relies on the continuity of G_{ii}; therefore, it is necessary that the diagonal entries of F^*_{ab} be continuous. By assumption, the densities p_i are continuous, which implies that F^*_{ab,ii}(x) = c_i \, p_i(x_i)^{1/3}, i = 1, \ldots, k, are continuous. It follows that (C-3) holds for all i, j, which implies that f^*_{ab} satisfies (C-2), and completes the proof of the lemma. □
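The key cancellation in the proof of Lemma C-2, namely that the inner integral of G_ii reduces by the fundamental theorem of calculus to a boundary term that vanishes, can be checked numerically in the scalar case. The sketch below (my own; the Gaussian density and the particular perturbation g are assumptions) computes the directional derivative −2∫ p(x) g'(x)/f'(x)³ dx of ∫ p/(f')² and finds it to be zero at the optimal slope f' ∝ p^{1/3} but not at a non-optimal slope.

```python
import numpy as np

x = np.linspace(-8, 8, 400001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # assumed Gaussian marginal
gprime = -2 * (x - 1) * np.exp(-(x - 1) ** 2)    # g(x) = exp(-(x-1)^2), so g(+-inf) = 0

def directional_derivative(alpha):
    slope = p**alpha / (np.sum(p**alpha) * dx)   # compressor slope proportional to p^alpha
    return -2 * np.sum(p * gprime / slope**3) * dx

print("alpha = 1/3:", directional_derivative(1/3))   # ~ 0: stationary, as in Lemma C-2
print("alpha = 1/2:", directional_derivative(1/2))   # nonzero in general
```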

Lemma C-3: The function \|A^{-1}\|^2 is strictly convex on the set of positive definite matrices.

Proof: Note that \|A^{-1}\|^2 = \mathrm{tr}[(A^{-1})^T A^{-1}]. We will show that for A, B positive definite,

\mathrm{tr}[(B^{-1})^T B^{-1}] - \mathrm{tr}[(A^{-1})^T A^{-1}] \ge \frac{d}{dε} \mathrm{tr}\left[ \{(A + ε(B - A))^{-1}\}^T (A + ε(B - A))^{-1} \right] \Big|_{ε=0},    (C-4)

with equality if and only if A = B. Because they are both positive definite, A and B can be written as A = C I C^T and B = C D C^T, where D is a diagonal matrix with positive diagonal elements d_{11}, d_{22}, \ldots, d_{kk} [9]. Then the left-hand side of (C-4) can be written as

\mathrm{tr}[(B^{-1})^T B^{-1}] - \mathrm{tr}[(A^{-1})^T A^{-1}] = \mathrm{tr}\left[ (C^{-1})^T C^{-1} \left( (C^{-1})^T C^{-1} \right)^T \left( (D^{-1})^2 - I \right) \right],    (C-5)

where we have used the fact that D is diagonal. Denoting the ij element of (C^{-1})^T C^{-1} by c_{ij}, we see that (C-5) is given by

\mathrm{tr}\left[ (C^{-1})^T C^{-1} \left( (C^{-1})^T C^{-1} \right)^T \left( (D^{-1})^2 - I \right) \right] = \sum_{i=1}^{k} s_i \left( \frac{1}{d_{ii}^2} - 1 \right),    (C-6)

where s_i = \sum_{j} c_{ij}^2 is the sum of the squares of the elements in column i of (C^{-1})^T C^{-1}. In order to simplify the right-hand side of (C-4), we note that

\mathrm{tr}\left[ \{(A + ε(B - A))^{-1}\}^T (A + ε(B - A))^{-1} \right] = \mathrm{tr}\left[ (C^{-1})^T C^{-1} \left( (C^{-1})^T C^{-1} \right)^T \left( [(1 - ε) I + ε D]^{-1} \right)^2 \right] = \sum_{i=1}^{k} \frac{s_i}{[1 + ε(d_{ii} - 1)]^2},    (C-7)

where (C-7) is derived using an argument similar to that in (C-6). Now the right-hand side of (C-4) is given by

\frac{d}{dε} \mathrm{tr}\left[ \{(A + ε(B - A))^{-1}\}^T (A + ε(B - A))^{-1} \right] \Big|_{ε=0} = \frac{d}{dε} \sum_{i=1}^{k} \frac{s_i}{[1 + ε(d_{ii} - 1)]^2} \Big|_{ε=0} = -2 \sum_{i=1}^{k} s_i (d_{ii} - 1).

Therefore, (C-4) holds if and only if

\sum_{i=1}^{k} s_i \left( \frac{1}{d_{ii}^2} - 1 \right) \ge -2 \sum_{i=1}^{k} s_i (d_{ii} - 1),    (C-8)

where s_i > 0. Since D and I are diagonal matrices with positive diagonal elements, it is sufficient to show that f(x) = \sum_{i=1}^{k} s_i x_i^{-2} is strictly convex on \{x : x_i > 0\}, which can be easily shown. Therefore (C-8) holds, with equality if and only if d_{ii} = 1 for all i, or equivalently when D = I and therefore A = B. □
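Lemma C-3 is also easy to spot-check numerically (a sketch of my own, with random matrices): for positive definite A, B and ε ∈ (0, 1), the value of ||·^{-1}||² at the convex combination should not exceed the convex combination of the values.

```python
import numpy as np

rng = np.random.default_rng(7)
k = 5

def norm_inv_sq(A):
    return np.sum(np.linalg.inv(A) ** 2)   # ||A^{-1}||^2, sum of squared entries

def random_pd():
    M = rng.standard_normal((k, k))
    return M @ M.T + 0.5 * np.eye(k)       # positive definite

for _ in range(1000):
    A, B, eps = random_pd(), random_pd(), rng.uniform(0, 1)
    lhs = norm_inv_sq((1 - eps) * A + eps * B)
    rhs = (1 - eps) * norm_inv_sq(A) + eps * norm_inv_sq(B)
    assert lhs <= rhs * (1 + 1e-9)         # convexity; strict unless A = B
print("convexity inequality held in all trials")
```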

References

[1] W.R. Bennett, "Spectra of quantized signals," Bell Syst. Tech. J., vol. 27, pp. 446-472, July 1948.
[2] J.A. Bucklew, "Companding and random quantization in several dimensions," IEEE Trans. Inform. Theory, vol. IT-27, pp. 207-211, Mar. 1981.
[3] J.A. Bucklew, "A note on optimal multidimensional companders," IEEE Trans. Inform. Theory, vol. IT-29, p. 279, Mar. 1983.
[4] J.H. Conway and N.J.A. Sloane, Sphere Packings, Lattices, and Groups, 2nd ed. New York: Springer-Verlag, 1993.
[5] A. Gersho, "Asymptotically optimal block quantization," IEEE Trans. Inform. Theory, vol. IT-25, pp. 373-380, July 1979.
[6] A. Gersho and R. Gray, Vector Quantization and Signal Compression. New York: Springer, 1992.
[7] H. Gish and J.N. Pierce, "Asymptotically efficient quantizing," IEEE Trans. Inform. Theory, vol. IT-14, pp. 676-683, Sept. 1968.
[8] R.M. Gray and A.H. Gray, Jr., "Asymptotically optimal quantizers," IEEE Trans. Inform. Theory, vol. IT-23, pp. 143-144, Jan. 1977.
[9] R.A. Horn and C.R. Johnson, Matrix Analysis. Cambridge University Press, 1985.
[10] T.D. Lookabaugh and R.M. Gray, "High-resolution quantization theory and the vector quantizer advantage," IEEE Trans. Inform. Theory, vol. 35, pp. 1020-1033, Sept. 1989.
[11] S. Na and D.L. Neuhoff, "Bennett's integral for vector quantizers," IEEE Trans. Inform. Theory, vol. 41, pp. 886-900, July 1995.
[12] P. Panter and W. Dite, "Quantization distortion in pulse-count modulation with nonuniform spacing of levels," Proc. IRE, vol. 39, pp. 44-48, Jan. 1951.
[13] J.L. Troutman, Variational Calculus and Optimal Control: Optimization with Elementary Convexity, 2nd ed. New York: Springer-Verlag, 1996.
[14] P.L. Zador, "Asymptotic quantization error of continuous signals and the quantization dimension," IEEE Trans. Inform. Theory, vol. IT-28, pp. 139-149, Mar. 1982.
[15] R. Zamir and M. Feder, "On lattice quantization noise," IEEE Trans. Inform. Theory, vol. 42, pp. 1152-1159, July 1996.


Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820

Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820 Joint Optimum Bitwise Decomposition of any Memoryless Source to be Sent over a BSC Seyed Bahram Zahir Azami, Pierre Duhamel 2 and Olivier Rioul 3 cole Nationale Superieure des Telecommunications URA CNRS

More information

Two-boundary lattice paths and parking functions

Two-boundary lattice paths and parking functions Two-boundary lattice paths and parking functions Joseph PS Kung 1, Xinyu Sun 2, and Catherine Yan 3,4 1 Department of Mathematics, University of North Texas, Denton, TX 76203 2,3 Department of Mathematics

More information

7.1 Sampling and Reconstruction

7.1 Sampling and Reconstruction Haberlesme Sistemlerine Giris (ELE 361) 6 Agustos 2017 TOBB Ekonomi ve Teknoloji Universitesi, Guz 2017-18 Dr. A. Melda Yuksel Turgut & Tolga Girici Lecture Notes Chapter 7 Analog to Digital Conversion

More information

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show

More information

A NOTE ON RATIONAL OPERATOR MONOTONE FUNCTIONS. Masaru Nagisa. Received May 19, 2014 ; revised April 10, (Ax, x) 0 for all x C n.

A NOTE ON RATIONAL OPERATOR MONOTONE FUNCTIONS. Masaru Nagisa. Received May 19, 2014 ; revised April 10, (Ax, x) 0 for all x C n. Scientiae Mathematicae Japonicae Online, e-014, 145 15 145 A NOTE ON RATIONAL OPERATOR MONOTONE FUNCTIONS Masaru Nagisa Received May 19, 014 ; revised April 10, 014 Abstract. Let f be oeprator monotone

More information

quant-ph/ Aug 1995

quant-ph/ Aug 1995 Bounds for Approximation in Total Variation Distance by Quantum Circuits quant-ph/9508007 8 Aug 995 LAL report LAUR-95-2724 E. Knill, knill@lanl.gov July 995 Abstract It was recently shown that for reasonable

More information

Average Reward Parameters

Average Reward Parameters Simulation-Based Optimization of Markov Reward Processes: Implementation Issues Peter Marbach 2 John N. Tsitsiklis 3 Abstract We consider discrete time, nite state space Markov reward processes which depend

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Linear programming bounds for. codes of small size. Ilia Krasikov. Tel-Aviv University. School of Mathematical Sciences,

Linear programming bounds for. codes of small size. Ilia Krasikov. Tel-Aviv University. School of Mathematical Sciences, Linear programming bounds for codes of small size Ilia Krasikov Tel-Aviv University School of Mathematical Sciences, Ramat-Aviv 69978 Tel-Aviv, Israel and Beit-Berl College, Kfar-Sava, Israel Simon Litsyn

More information

The Heine-Borel and Arzela-Ascoli Theorems

The Heine-Borel and Arzela-Ascoli Theorems The Heine-Borel and Arzela-Ascoli Theorems David Jekel October 29, 2016 This paper explains two important results about compactness, the Heine- Borel theorem and the Arzela-Ascoli theorem. We prove them

More information

An average case analysis of a dierential attack. on a class of SP-networks. Distributed Systems Technology Centre, and

An average case analysis of a dierential attack. on a class of SP-networks. Distributed Systems Technology Centre, and An average case analysis of a dierential attack on a class of SP-networks Luke O'Connor Distributed Systems Technology Centre, and Information Security Research Center, QUT Brisbane, Australia Abstract

More information

TEST CODE: MMA (Objective type) 2015 SYLLABUS

TEST CODE: MMA (Objective type) 2015 SYLLABUS TEST CODE: MMA (Objective type) 2015 SYLLABUS Analytical Reasoning Algebra Arithmetic, geometric and harmonic progression. Continued fractions. Elementary combinatorics: Permutations and combinations,

More information

and the polynomial-time Turing p reduction from approximate CVP to SVP given in [10], the present authors obtained a n=2-approximation algorithm that

and the polynomial-time Turing p reduction from approximate CVP to SVP given in [10], the present authors obtained a n=2-approximation algorithm that Sampling short lattice vectors and the closest lattice vector problem Miklos Ajtai Ravi Kumar D. Sivakumar IBM Almaden Research Center 650 Harry Road, San Jose, CA 95120. fajtai, ravi, sivag@almaden.ibm.com

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

and the nite horizon cost index with the nite terminal weighting matrix F > : N?1 X J(z r ; u; w) = [z(n)? z r (N)] T F [z(n)? z r (N)] + t= [kz? z r

and the nite horizon cost index with the nite terminal weighting matrix F > : N?1 X J(z r ; u; w) = [z(n)? z r (N)] T F [z(n)? z r (N)] + t= [kz? z r Intervalwise Receding Horizon H 1 -Tracking Control for Discrete Linear Periodic Systems Ki Baek Kim, Jae-Won Lee, Young Il. Lee, and Wook Hyun Kwon School of Electrical Engineering Seoul National University,

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

The Pacic Institute for the Mathematical Sciences http://www.pims.math.ca pims@pims.math.ca Surprise Maximization D. Borwein Department of Mathematics University of Western Ontario London, Ontario, Canada

More information

On Scalable Coding in the Presence of Decoder Side Information

On Scalable Coding in the Presence of Decoder Side Information On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,

More information

Winter Lecture 10. Convexity and Concavity

Winter Lecture 10. Convexity and Concavity Andrew McLennan February 9, 1999 Economics 5113 Introduction to Mathematical Economics Winter 1999 Lecture 10 Convexity and Concavity I. Introduction A. We now consider convexity, concavity, and the general

More information

Computer Science Dept.

Computer Science Dept. A NOTE ON COMPUTATIONAL INDISTINGUISHABILITY 1 Oded Goldreich Computer Science Dept. Technion, Haifa, Israel ABSTRACT We show that following two conditions are equivalent: 1) The existence of pseudorandom

More information

Measuring Ellipsoids 1

Measuring Ellipsoids 1 Measuring Ellipsoids 1 Igor Rivin Temple University 2 What is an ellipsoid? E = {x E n Ax = 1}, where A is a non-singular linear transformation of E n. Remark that Ax = Ax, Ax = x, A t Ax. The matrix Q

More information