Scalar and Vector Quantization


Mário A. T. Figueiredo
Departamento de Engenharia Electrotécnica e de Computadores,
Instituto Superior Técnico, Lisboa, Portugal
mario.figueiredo@tecnico.ulisboa.pt

November 2017

Quantization is the process of mapping a continuous or discrete scalar or vector, produced by a source, into a set of digital symbols that can be transmitted or stored using a finite number of bits. In the case of continuous sources (with values in ℝ or ℝ^n), quantization must necessarily be used if the output of the source is to be communicated over a digital channel. In this case, it is, in general, impossible to exactly reproduce the original source output, so we are in the context of lossy coding/compression. In these lecture notes, we review the main concepts and results of scalar and vector quantization. For more details, see the book by Gersho and Gray [2], the accessible tutorial by Gray [3], or the comprehensive review by Gray and Neuhoff [4].

1 Scalar Quantization

1.1 Introduction and Definitions

Let us begin by considering the case of a real-valued (scalar) memoryless source. Such a source is modeled as a real-valued random variable X, thus fully characterized by a probability density function (pdf) f_X. Recall that a pdf f_X satisfies the following properties: f_X(x) ≥ 0, for any x ∈ ℝ,

  ∫_{−∞}^{+∞} f_X(x) dx = 1,  and  ∫_a^b f_X(x) dx = P[X ∈ [a, b]],

where P[X ∈ [a, b]] denotes the probability that the random variable X takes values in the interval [a, b]. To avoid technical issues, in this text we only consider continuous pdfs.

Consider the objective of transmitting a sample x of the source X over a binary channel that can only carry R bits each time it is used. That is, we can only use R bits to encode each sample of X. Naturally, this restriction implies that we are forced to encode any outcome of X into one of N = 2^R different symbols (binary words). Of course, this can easily be generalized to D-ary channels (instead of binary), for which the number of different words is D^R; however, to keep the notation simple, and without loss of generality, we will only consider the case of binary channels. Having received one of the N = 2^R possible words, the receiver/decoder has to do its best to recover/approximate the original sample x, and it does so by outputting one of a set of N values.

The procedure described in the previous paragraph can be formalized as follows. The encoder is a function E : ℝ → I, where I = {0, 1, ..., N−1} is the set of possible binary words that can be sent through the channel to represent the original sample x. Since the set I is much smaller than ℝ, this function is non-injective: there are many different values of the argument that produce the same value of the function. Each of these sets is called a quantization region, and is defined as

  R_i = {x ∈ ℝ : E(x) = i}.

Since E is a function defined over all of ℝ, this definition implies that the collection of quantization regions (also called cells) R = {R_0, ..., R_{N−1}} defines a partition of ℝ, that is,

  (i ≠ j) ⇒ R_i ∩ R_j = ∅  and  ∪_{i=0}^{N−1} R_i = ℝ.   (1)

The decoder is a real-valued function D : I → ℝ; notice that since the argument of D only takes N different values, and D is a deterministic function, it can also only take N different values, thus its range is a finite set C = {y_0, ..., y_{N−1}} ⊂ ℝ. The set C is usually called the codebook. The i-th element of the codebook, y_i, is sometimes called the representative of the region/cell R_i. Considering that there are no errors in the channel, the sample x is reproduced by the decoder as D(E(x)), that is, the result of first encoding and then decoding x.
The composition of the functions E and D defines the so-called quantization function Q : ℝ → C, where Q(x) = D(E(x)). The quantization function has the following obvious property:

  (x ∈ R_i) ⇒ Q(x) = y_i,   (2)

which justifies the term quantization. In other words, any x belonging to region R_i is represented at the output of the system by the corresponding y_i. A quantizer (equivalently, a pair encoder/decoder) is completely defined by the set of regions R = {R_0, ..., R_{N−1}} and the corresponding representatives C = {y_0, ..., y_{N−1}} ⊂ ℝ.

If all the cells are intervals (for example, [a, b[ or [a, +∞[) that contain the corresponding representative, that is, such that y_i ∈ R_i, the quantizer is called regular. A regular quantizer in which all the regions have the same length (except two of them, which may be unbounded on the left and on the right) is called a uniform quantizer. For example, the following set of regions

and codebook define a 2-bit (R = 2, thus N = 4) regular (but not uniform) quantizer:

  R = {]−∞, −0.3], ]−0.3, 1.5], ]1.5, 4[, [4, +∞[}  and  C = {−1, 0, 2, 5}.

As another example, the following set of regions/cells and codebook define a 3-bit (R = 3, thus N = 8) uniform quantizer:

  R = {]−∞, 0.3], ]0.3, 1.3], ]1.3, 2.3], ]2.3, 3.3], ]3.3, 4.3], ]4.3, 5.3], ]5.3, 6.3], ]6.3, +∞[}
  C = {0, 1, 2, 3, 4, 5, 6, 7}.

1.2 Optimal Quantizers, Lloyd's Algorithm, and the Linde-Buzo-Gray Algorithm

1.2.1 Expected Distortion

Finding an optimal scalar quantizer consists in finding the set of regions, R, and the codebook, C, minimizing a given objective function, which measures the quantizer performance. Although there are other possibilities, the standard quantity used to assess the performance of a quantizer is the expected distortion

  E[d(X, Q(X))] = ∫ f_X(x) d(x, Q(x)) dx,

where d : ℝ × ℝ → ℝ is a so-called distortion measure. Among the several reasonable choices for d, such as d(x, z) = |x − z|, the one which is, by far, most commonly used is the squared error, d(x, z) = (x − z)². With the squared error, the expected distortion becomes the well-known mean squared error (MSE),

  MSE = E[(X − Q(X))²] = ∫ f_X(x) (x − Q(x))² dx.

The MSE is also called the quantization noise power.

1.2.2 Optimal Quantizer

Adopting the MSE to measure the quantizer performance, the problem of finding the optimal set of regions and corresponding representatives becomes

  (R^opt, C^opt) = arg min_{R,C} ∫ f_X(x) (x − Q(x))² dx,   (3)

under the constraint that the regions that constitute R satisfy the condition in (1). Because the set of regions constitutes a partition of ℝ (see (1)), and because of (2), the integral defining the MSE can be written as

  MSE(R_0, ..., R_{N−1}, y_0, ..., y_{N−1}) = Σ_{i=0}^{N−1} ∫_{R_i} f_X(x) (x − y_i)² dx,   (4)

where the notation MSE(R_0, ..., R_{N−1}, y_0, ..., y_{N−1}) is used to emphasize that the mean squared error depends on the quantization regions R_0, ..., R_{N−1} and their representatives y_0, ..., y_{N−1}.
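To make these definitions concrete, here is a minimal sketch of the 3-bit uniform quantizer above (NumPy is assumed for this illustration): the encoder E maps x to the index of its cell, and the decoder D looks the representative up in the codebook.

```python
import numpy as np

# Cell boundaries and codebook of the 3-bit uniform example in the text:
# regions ]-inf, 0.3], ]0.3, 1.3], ..., ]6.3, +inf[ and representatives 0..7.
boundaries = np.array([0.3, 1.3, 2.3, 3.3, 4.3, 5.3, 6.3])
codebook = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])

def encode(x):
    """E : R -> {0, ..., N-1}; index of the cell containing x."""
    return np.digitize(x, boundaries, right=True)

def decode(i):
    """D : {0, ..., N-1} -> C; representative of cell i."""
    return codebook[i]

def quantize(x):
    """Q = D o E, the quantization function."""
    return decode(encode(x))

print(quantize(np.array([-2.0, 0.9, 3.29, 6.5])))  # -> [0. 1. 3. 7.]
```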

1.2.3 Partial Solutions

It is, in general, extremely hard to find the global minimizer of MSE(R_0, ..., R_{N−1}, y_0, ..., y_{N−1}) simultaneously with respect to all the regions and representatives. However, it is possible to solve two partial problems:

• Given the quantization regions R = {R_0, ..., R_{N−1}}, find the corresponding optimal codebook,

  {y*_0, ..., y*_{N−1}} = arg min_{y_0, ..., y_{N−1}} Σ_{i=0}^{N−1} ∫_{R_i} f_X(x) (x − y_i)² dx.   (5)

• Given a codebook C = {y_0, ..., y_{N−1}}, find the optimal regions,

  {R*_0, ..., R*_{N−1}} = arg min_{R_0, ..., R_{N−1}} ∫ f_X(x) (x − Q(x))² dx   (6)
  subject to  (i ≠ j) ⇒ R_i ∩ R_j = ∅   (7)
              ∪_{i=0}^{N−1} R_i = ℝ.   (8)

Let us start by examining (5); observe that the function being minimized is the sum of N non-negative functions, and each of these functions only depends on one element of C. Consequently, the problem can be solved independently with respect to each y_i, that is,

  y*_i = arg min_y ∫_{R_i} f_X(x) (x − y)² dx.

Expanding the square in the integrand leads to

  y*_i = arg min_y [ ∫_{R_i} f_X(x) x² dx + y² ∫_{R_i} f_X(x) dx − 2y ∫_{R_i} f_X(x) x dx ]   (9)
       = arg min_y [ y² ∫_{R_i} f_X(x) dx − 2y ∫_{R_i} f_X(x) x dx ],   (10)

where the second equality is due to the fact that the first term on the right-hand side of (9) does not depend on y, thus it is irrelevant for the minimization. The minimum is found by computing the derivative with respect to y, which is

  (d/dy) [ y² ∫_{R_i} f_X(x) dx − 2y ∫_{R_i} f_X(x) x dx ] = 2y ∫_{R_i} f_X(x) dx − 2 ∫_{R_i} f_X(x) x dx,

and equating it to zero, which leads to the following equation:

  y ∫_{R_i} f_X(x) dx = ∫_{R_i} f_X(x) x dx,

the solution of which is

  y*_i = ( ∫_{R_i} f_X(x) x dx ) / ( ∫_{R_i} f_X(x) dx ).   (11)

This expression for the optimal representative of region R_i has a clear probabilistic meaning. Observe that the conditional density of X, conditioned on the event A_i = (X ∈ R_i), is, according to Bayes' law,

  f_{X|A_i}(x | A_i) = f_{X,A_i}(x, A_i) / P[A_i] = P[A_i | x] f_X(x) / P[A_i] = 1_{R_i}(x) f_X(x) / P[X ∈ R_i],

where 1_{R_i}(x) = 1, if x ∈ R_i, and 1_{R_i}(x) = 0, if x ∉ R_i, is called the indicator function of region R_i. Computing the expected value of X, conditioned on the event A_i = (X ∈ R_i),

  E[X | X ∈ R_i] = ∫ x f_{X|A_i}(x | A_i) dx = (1 / P[X ∈ R_i]) ∫ x f_X(x) 1_{R_i}(x) dx = ( ∫_{R_i} x f_X(x) dx ) / ( ∫_{R_i} f_X(x) dx ),   (12)

which is exactly expression (11). This shows that the optimal representative of the cell R_i is the conditional expected value of the random variable X, given that X is in R_i. A more physical interpretation of (11) is that y*_i is the center of (probabilistic) mass of region R_i.

Let us now examine problem (6)-(8), where we seek the optimal regions, given a codebook C = {y_0, ..., y_{N−1}}. Notice that the fact that there is no restriction on the form of the regions (apart from those in (7) and (8)) means that choosing the regions is the same as selecting, for each x, its best representative among the given {y_0, ..., y_{N−1}}. In mathematical terms, this can be written as the following inequality:

  ∫ f_X(x) (x − Q(x))² dx ≥ ∫ f_X(x) min_i (x − y_i)² dx;

that is, since the codebook {y_0, ..., y_{N−1}} is fixed, the best possible encoder is one that chooses, for each x, the closest representative. In conclusion, the optimal regions are given by

  R*_i = {x : (x − y_i)² ≤ (x − y_j)², j ≠ i}, for i = 0, ..., N−1,   (13)

that is, R*_i is the set of points that are closer to y_i than to any other element of the codebook.

1.2.4 The Lloyd Algorithm

The Lloyd algorithm for quantizer design works by iterating between the two partial solutions described above.

Step 1: Given the current codebook C^(t) = {y_0^(t), ..., y_{N−1}^(t)}, obtain the optimal regions

  R_i^(t) = {x : (x − y_i^(t))² ≤ (x − y_j^(t))², j ≠ i}, for i = 0, ..., N−1;

Step 2: Given the current regions R^(t) = {R_0^(t), ..., R_{N−1}^(t)}, update the representatives

  y_i^(t+1) = ( ∫_{R_i^(t)} f_X(x) x dx ) / ( ∫_{R_i^(t)} f_X(x) dx ), for i = 0, ..., N−1;

Step 3: Check some stopping criterion; if it is satisfied, stop; if not, set t ← t + 1 and go back to Step 1.

A typical stopping criterion would be to check if the maximum difference between two consecutive values of the codebook elements is less than some threshold; that is, the algorithm would be stopped if the following condition is satisfied:

  max_i (y_i^(t) − y_i^(t+1))² ≤ ε.   (14)

Under certain conditions, Lloyd's algorithm converges to the global solution of the optimization problem (3); however, these conditions are not trivial and are way beyond the scope of these lecture notes. In fact, the convergence properties of the Lloyd algorithm are a topic of current active research; the interested reader may look at the recent paper by Du, Emelianenko, and Ju [1].

1.2.5 Zero Mean Quantization Error of Lloyd Quantizers

Quantizers obtained by the Lloyd algorithm simultaneously satisfy the partial optimality conditions (11) and (13) and are called Lloyd quantizers. These quantizers have the important property that the expected value of the quantization error is zero, that is, E[Q(X) − X] = 0, or, equivalently, E[Q(X)] = E[X]. To show this, we write

  E[Q(X)] = ∫ f_X(x) Q(x) dx   (15)
          = Σ_i y_i ∫_{R_i} f_X(x) dx   (16)
          = Σ_i ( ∫_{R_i} x f_X(x) dx / ∫_{R_i} f_X(x) dx ) ∫_{R_i} f_X(x) dx   (17)
          = Σ_i ∫_{R_i} x f_X(x) dx   (18)
          = ∫ x f_X(x) dx   (19)
          = E[X].   (20)

1.2.6 The Linde-Buzo-Gray Algorithm

Very frequently, instead of knowledge of the pdf of the source, f_X(x), what we have available is a set of samples X = {x_1, ..., x_n}, where n is usually (desirably) a large number. In this scenario, the optimal quantizer has to be obtained (learned) from these samples. This is what is achieved by the Linde-Buzo-Gray (LBG) algorithm, which is a sample version of the Lloyd algorithm. The algorithm is defined as follows.

Step 1: Given the current codebook C^(t) = {y_0^(t), ..., y_{N−1}^(t)}, obtain the optimal regions

  R_j^(t) = {x : (x − y_j^(t))² ≤ (x − y_k^(t))², k ≠ j}, for j = 0, ..., N−1;

Step 2: Given the current regions R^(t) = {R_0^(t), ..., R_{N−1}^(t)}, update the representatives

  y_j^(t+1) = (1 / n_j^(t)) Σ_{x_i ∈ R_j^(t)} x_i, for j = 0, ..., N−1,

where n_j^(t) is the number of samples in R_j^(t);

Step 3: Check some stopping criterion; if it is satisfied, stop; if not, set t ← t + 1 and go back to Step 1.

As in the Lloyd algorithm, a typical stopping criterion has the form (14). Notice that we don't need to explicitly define the regions, but simply to assign each point to one of the current regions {R_0^(t), ..., R_{N−1}^(t)}. That is, Step 1 of the LBG algorithm can be written with the help of indicator variables w_{ij}, for i = 1, ..., n and j = 0, ..., N−1, defined as follows:

  w_{ij} = 1  ⇔  j = arg min_k (x_i − y_k^(t))², for k = 0, ..., N−1;

that is, w_{ij} equals one if and only if x_i is closer to y_j than to any other element of the current codebook; otherwise, it is zero. With these indicator variables, Step 2 of the LBG algorithm can be written as

  y_j^(t+1) = ( Σ_{i=1}^{n} x_i w_{ij} ) / ( Σ_{i=1}^{n} w_{ij} ), for j = 0, ..., N−1;

that is, the updated j-th element of the codebook is simply the mean of all the samples that are currently in region R_j^(t).
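A sample-based sketch of these two steps (assuming NumPy and a synthetic Gaussian training set) shows how little is needed beyond the Lloyd iteration: the integrals are simply replaced by sample averages.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)       # the training set {x_1, ..., x_n}

def lbg(samples, codebook, n_iter=200, eps=1e-12):
    y = np.asarray(codebook, float)
    for _ in range(n_iter):
        # Step 1: assign each sample to its nearest representative
        # (the indicator variables w_ij of the text).
        labels = np.argmin((samples[:, None] - y[None, :])**2, axis=1)
        # Step 2: each representative becomes the mean of its samples;
        # an empty cell (possible in principle) keeps its old value.
        y_new = np.array([samples[labels == j].mean() if np.any(labels == j)
                          else y[j] for j in range(len(y))])
        done = np.max((y - y_new)**2) < eps
        y = y_new
        if done:
            break
    return y

y_lbg = np.sort(lbg(samples, [-3.0, -1.0, 1.0, 3.0]))
print(y_lbg)   # close to the Lloyd quantizer designed from the true pdf
```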

1.3 High-Resolution Approximation

Although there are algorithms to design scalar quantizers, given the probability density function of the source (Lloyd's algorithm) or a set of samples (the LBG algorithm), the most commonly used quantizers are uniform and of high resolution (large N). It is thus important to be able to have a good estimate of the performance of such quantizers, which is possible using the so-called high-resolution approximation.

1.3.1 Uniform Quantizers

In uniform quantizers, all the regions are intervals with the same width, denoted Δ. Of course, if X is unbounded (for example, X ∈ ℝ and Gaussian), it is not possible to cover ℝ with a finite number of cells of finite width. However, we assume that we have enough cells to cover the region of ℝ where f_X(x) is not arbitrarily close to zero. For example, if X ∈ ℝ and f_X(x) is a Gaussian density of zero mean and variance σ², we may consider that X is essentially always in the interval [−4σ, 4σ], since the probability that X belongs to this interval is approximately 0.99994.

The high-resolution approximation assumes that Δ is small enough so that f_X(x) is approximately constant inside each quantization region. Under this assumption, the optimal representative of each region is its central point, thus R_i = [y_i − Δ/2, y_i + Δ/2[, and the MSE is given by

  MSE = Σ_i ∫_{y_i−Δ/2}^{y_i+Δ/2} f_X(x) (x − y_i)² dx ≈ Σ_i f_X(y_i) ∫_{y_i−Δ/2}^{y_i+Δ/2} (x − y_i)² dx.   (21)

Making the change of variables z = x − y_i in each of the integrals, they all become equal to

  ∫_{y_i−Δ/2}^{y_i+Δ/2} (x − y_i)² dx = ∫_{−Δ/2}^{Δ/2} z² dz = Δ³/12;

inserting this result in (21), and observing that f_X(y_i) Δ ≈ P[X ∈ R_i] = p_i, we obtain

  MSE ≈ Σ_i f_X(y_i) Δ (Δ²/12) ≈ (Δ²/12) Σ_i p_i = Δ²/12,   (22)

since Σ_i p_i = 1. If the width of the (effective) support of f_X(x) is, say, A, the number of cells N is given by N = A/Δ, that is, Δ = A/N. Recalling that N = 2^R, we have

  MSE ≈ (A²/12) 2^{−2R},   (23)

showing that each additional bit in the rate R produces an MSE reduction by a factor of 4. In terms of signal to (quantization) noise ratio (SNR), we have

  SNR = 10 log₁₀ (σ² / MSE) dB,

where σ² denotes the source variance. Using the expression above for the MSE, we have

  SNR ≈ 10 log₁₀ (12 σ² / A²) + R · 20 log₁₀ 2 ≈ (K + 6.02 R) dB,

where K = 10 log₁₀ (12 σ²/A²) and 20 log₁₀ 2 ≈ 6.02, showing that each extra bit in the quantizer achieves an improvement of approximately 6 dB in the quantization SNR. Notice that all the results in this subsection are independent of the particular features (such as the shape) of the pdf f_X(x).

1.3.2 Non-uniform Quantizers

In non-uniform high-resolution quantizers, the width of each cell R_i is Δ_i, but it is still assumed that Δ_i is small enough so that f_X(x) is essentially constant inside the cell. Under this assumption, the optimal representative of region R_i is its central point, thus we can write R_i = [y_i − Δ_i/2, y_i + Δ_i/2[, and the MSE is given by

  MSE = Σ_i ∫_{y_i−Δ_i/2}^{y_i+Δ_i/2} f_X(x) (x − y_i)² dx ≈ Σ_i f_X(y_i) ∫_{y_i−Δ_i/2}^{y_i+Δ_i/2} (x − y_i)² dx.   (24)

Making the change of variables z = x − y_i in each integral leads to

  ∫_{y_i−Δ_i/2}^{y_i+Δ_i/2} (x − y_i)² dx = ∫_{−Δ_i/2}^{Δ_i/2} z² dz = Δ_i³/12;

inserting this result in (24), and observing that f_X(y_i) Δ_i ≈ P[X ∈ R_i] = p_i, we obtain

  MSE ≈ Σ_i p_i Δ_i²/12.

Naturally, (22) is a particular case of the previous expression, for Δ_i = Δ.
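The approximation (22) is easy to check by simulation; the sketch below (NumPy assumed) applies a uniform quantizer with a small step Δ to a standard Gaussian source and compares the empirical MSE with Δ²/12.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)          # samples of the source

delta = 0.05                            # small step: high-resolution regime
q = delta * np.round(x / delta)         # uniform quantizer with step delta
mse = np.mean((x - q)**2)

print(mse, delta**2 / 12)               # the two values nearly agree
```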

1.4 Entropy of the Output of a Scalar Encoder

The output of the encoder, I = E(X), can be seen as a discrete memoryless source, producing symbols from the alphabet I = {0, 1, ..., N−1}, with probabilities

  p_i = P[X ∈ R_i] = ∫_{R_i} f_X(x) dx, for i = 0, 1, ..., N−1.

The entropy of E(X) provides a good estimate of the minimum number of bits required to encode the output of the encoder, and (as will be seen below) provides a coding-theoretic interpretation of the differential entropy of the source X. The entropy of I is given by

  H(I) = − Σ_i p_i log p_i = − Σ_i ( ∫_{R_i} f_X(x) dx ) log ( ∫_{R_i} f_X(x) dx );

if nothing else is known about the pdf f_X(x), it is not possible to obtain any simpler exact expression for H(I). However, we can make some progress and obtain some insight by focusing on uniform quantizers and adopting (as in Section 1.3) the high-resolution approximation.

In the high-resolution regime of uniform quantizers (very large N, thus very small Δ), the probability of each cell, p_i = P[X ∈ R_i], can be approximated as p_i ≈ f_X(y_i) Δ, because Δ is small enough to have f_X(x) approximately constant inside R_i, and y_i is (approximately) the central point of R_i. In these conditions,

  H(I) ≈ − Σ_i f_X(y_i) Δ log ( f_X(y_i) Δ ),

with the approximation becoming more accurate as Δ becomes smaller. The expression above can be written as

  H(I) ≈ − Σ_i Δ f_X(y_i) log f_X(y_i) − ( Σ_i f_X(y_i) Δ ) log Δ ≈ h(X) − log Δ,

where the first sum is approximately equal to h(X) because, as Δ approaches zero, it approaches the Riemann integral of −f_X(x) log f_X(x), and the second sum is approximately Σ_i p_i = 1. In conclusion, the entropy of the output of a uniform quantization encoder (in the high-resolution regime) is approximately equal to the differential entropy of the source, plus a term which depends on the precision (resolution) Δ with which the samples of X are represented (quantized). Notice that as Δ becomes small, the term −log Δ increases. If the output of the encoder is followed by an optimal entropy encoder (for example, using a Huffman code), the average number of bits, L̄, used to encode each sample will be close to H(I), that is, L̄ ≈ H(I).
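The relation H(I) ≈ h(X) − log Δ can also be checked numerically; the sketch below (NumPy assumed) uses a standard Gaussian source, whose differential entropy is h(X) = (1/2) log₂(2πe) ≈ 2.047 bits.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=2_000_000)               # samples of the source

delta = 0.05
idx = np.round(x / delta).astype(int)        # encoder output I = E(X)
_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()                    # empirical cell probabilities
H = -np.sum(p * np.log2(p))                  # entropy of the encoder output

h = 0.5 * np.log2(2 * np.pi * np.e)          # differential entropy of X
print(H, h - np.log2(delta))                 # the two nearly agree
```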

The average rate L̄ and the MSE are related through the pair of (approximate) equalities

  L̄ ≈ h(X) − log Δ  and  MSE ≈ Δ²/12;

these can be rewritten as

  L̄ ≈ h(X) − (1/2) log (12 MSE)  and  MSE ≈ (1/12) 2^{2 h(X)} 2^{−2 L̄},

showing that the average bit rate decreases logarithmically with the increase of the MSE and, conversely, the MSE decreases exponentially with the increase of the average bit rate.

Let us illustrate the results derived in the previous paragraphs with a couple of simple examples. First, consider a random variable X with a uniform density on the interval [a, b], that is, f_X(x) = 1/(b−a), if x ∈ [a, b], and zero otherwise. The differential entropy of X is h(X) = log(b−a), thus

  L̄ ≈ H(I) ≈ log(b−a) − log Δ = log ((b−a)/Δ) = log N = log(2^R) = R bits/sample,

where all the logarithms are to base 2. The expression above means that, for a uniform density, the average number of bits per sample of a uniform high-resolution quantizer equals simply the quantizer rate R. This is regardless of the support of the density.

Now consider a triangular density on the interval [0, 1], that is, f_X(x) = 2 − 2x, for x ∈ [0, 1]. In this case, it is easy to show that

  h(X) = − ∫_0^1 (2 − 2x) log₂(2 − 2x) dx = (1/2) log₂ e − 1 ≈ −0.2787,

thus

  L̄ ≈ h(X) − log Δ = h(X) + log N = h(X) + log(2^R) ≈ R − 0.2787 bits/sample.

This example shows that if the density is not uniform on its support, then the average number of required bits per sample (after optimal entropy coding of the uniform high-resolution quantization encoder output) is less than the quantizer rate. This is a simple consequence of the fact that if the density is not uniform, then the cell probabilities {p_0, ..., p_{N−1}} are not all equal and the corresponding entropy is less than log N. However, notice that we are in a high-resolution regime, thus N ≫ 1 and the decrease in average bit rate caused by the non-uniformity of the density is relatively small.

2 Vector Quantization

2.1 Introduction and Definitions

In vector quantization, the input to the encoder, that is, the output of the source to be quantized, is not a scalar quantity but a vector in ℝ^n. Formally, the source is modeled as a vector random
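The triangular-density example can be verified numerically (NumPy assumed): the differential entropy is computed by a midpoint rule, and the encoder-output entropy of an R-bit uniform quantizer is computed from the exact cell probabilities given by the cdf F(x) = 2x − x².

```python
import numpy as np

# Differential entropy of f_X(x) = 2 - 2x on [0, 1], by the midpoint rule.
m = (np.arange(1_000_000) + 0.5) / 1_000_000   # midpoints, where f > 0
f = 2.0 - 2.0 * m
h = -np.mean(f * np.log2(f))                   # ~ 0.5*log2(e) - 1 ~ -0.2787

R = 8                                          # an 8-bit uniform quantizer
edges = np.linspace(0.0, 1.0, 2**R + 1)
p = np.diff(2.0 * edges - edges**2)            # exact cell probabilities
H = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # encoder-output entropy

print(h, H, R + h)                             # H is close to R + h(X)
```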

variable X ∈ ℝ^n, characterized by a pdf f_X(x). Any pdf defined on ℝ^n has to satisfy the following properties: f_X(x) ≥ 0, for any x ∈ ℝ^n,

  ∫_{ℝ^n} f_X(x) dx = 1,  and  ∫_R f_X(x) dx = P[X ∈ R],

where P[X ∈ R] denotes the probability that the random variable X takes values in some set R ⊆ ℝ^n. To avoid technical issues, we consider only continuous pdfs.

In the vector case, the encoder is a function E : ℝ^n → I, where I = {0, 1, ..., N−1}. As in the scalar case, this function is non-injective and there are many different values of the argument that produce the same value of the function; each of these sets is called a quantization region (or cell), and is defined as R_i = {x ∈ ℝ^n : E(x) = i}. Since E is a function defined over all of ℝ^n, this definition implies that the collection of quantization regions/cells R = {R_0, ..., R_{N−1}} defines a partition of ℝ^n, that is,

  (i ≠ j) ⇒ R_i ∩ R_j = ∅  and  ∪_{i=0}^{N−1} R_i = ℝ^n.   (25)

The decoder is a function D : I → ℝ^n; as in the scalar case, since the argument of D only takes N different values, and D is a deterministic function, it can also only take N different values, thus its range is a finite set C = {y_0, ..., y_{N−1}} ⊂ ℝ^n. The set C is still called the codebook. The i-th element of the codebook, y_i, is the representative of the region/cell R_i. Considering that there are no errors in the channel, the sample x is reproduced by the decoder as D(E(x)), that is, the result of first encoding and then decoding x. The composition of the functions E and D defines the so-called vector quantization function Q : ℝ^n → C, where Q(x) = D(E(x)). As in the scalar case, the quantization function has the following obvious property:

  (x ∈ R_i) ⇒ Q(x) = y_i.   (26)

Similarly to the scalar case, a vector quantizer (VQ) (equivalently, a pair encoder/decoder) is completely defined by the set of regions R = {R_0, ..., R_{N−1}} and the corresponding codebook C = {y_0, ..., y_{N−1}} ⊂ ℝ^n. A VQ in which all the cells are convex and contain their representatives is called a regular VQ.

Recall that a set S is said to be convex if it satisfies the condition

  x, y ∈ S ⇒ λx + (1−λ)y ∈ S, for any λ ∈ [0, 1];

in words, a set is convex when the line segment joining any two of its points also belongs to the set. Observe that this definition covers the scalar case, since the only type of convex sets in ℝ are intervals (regardless of being open or closed). Figure 1 illustrates the concepts of convex and non-convex sets.

Figure 1: A convex set (left) and a non-convex set (right).

2.2 Optimal VQs, Lloyd's Algorithm, and the Linde-Buzo-Gray Algorithm

This subsection is parallel to Section 1.2, essentially repeating all the concepts and derivations, adapted to the vector case.

2.2.1 Expected Distortion and the Optimal VQ

Finding an optimal VQ consists in finding the set of regions, R, and the codebook, C, that minimize a given objective function. Although there are other options, the standard choice is the MSE,

  MSE = (1/n) E[ ‖X − Q(X)‖² ] = (1/n) ∫_{ℝ^n} f_X(x) ‖x − Q(x)‖² dx,

where ‖v‖² = Σ_{i=1}^{n} v_i² denotes the usual squared Euclidean norm of a vector v ∈ ℝ^n. The factor 1/n in the definition of the MSE of a VQ in ℝ^n makes it a measure of average quadratic error per coordinate. Adopting the MSE to measure the quantizer performance, the problem of finding the optimal set of regions and corresponding representatives becomes

  (R^opt, C^opt) = arg min_{R,C} Σ_{i=0}^{N−1} ∫_{R_i} f_X(x) ‖x − y_i‖² dx,   (27)

which is similar to (3)-(4), but here for the vector case; we are ignoring the factor 1/n, which is irrelevant for the minimization.

2.2.2 Partial Solutions

As in the scalar case, it is possible to solve the two partial problems:

• Given the quantization regions R = {R_0, ..., R_{N−1}}, find the corresponding optimal codebook,

  {y*_0, ..., y*_{N−1}} = arg min_{y_0, ..., y_{N−1}} Σ_{i=0}^{N−1} ∫_{R_i} f_X(x) ‖x − y_i‖² dx.   (28)

• Given a codebook C = {y_0, ..., y_{N−1}}, find the optimal regions,

  {R*_0, ..., R*_{N−1}} = arg min_{R_0, ..., R_{N−1}} ∫ f_X(x) ‖x − Q(x)‖² dx   (29)
  subject to  (i ≠ j) ⇒ R_i ∩ R_j = ∅   (30)
              ∪_{i=0}^{N−1} R_i = ℝ^n.   (31)

In (28), the function being minimized is the sum of N non-negative functions, each one of them only dependent on one of the y_i. The problem can thus be decoupled into N independent problems,

  y*_i = arg min_y ∫_{R_i} f_X(x) ‖x − y‖² dx.

Expanding the squared Euclidean norm into ‖x − y‖² = ‖x‖² + ‖y‖² − 2⟨x, y⟩ leads to

  y*_i = arg min_y [ ∫_{R_i} f_X(x) ‖x‖² dx + ‖y‖² ∫_{R_i} f_X(x) dx − 2 ∫_{R_i} f_X(x) ⟨y, x⟩ dx ]   (32)
       = arg min_y [ ‖y‖² ∫_{R_i} f_X(x) dx − 2 ⟨y, ∫_{R_i} f_X(x) x dx⟩ ],   (33)

where the second equality is due to the fact that the first term in (32) does not depend on y, thus it is irrelevant for the minimization, and the inner product commutes with integration (since both are linear operators). The minimum is found by computing the gradient with respect to y and equating it to zero. Recalling that ∇_v ‖v‖² = 2v and ∇_v ⟨v, b⟩ = b, we have

  ∇_y [ ‖y‖² ∫_{R_i} f_X(x) dx − 2 ⟨y, ∫_{R_i} f_X(x) x dx⟩ ] = 2y ∫_{R_i} f_X(x) dx − 2 ∫_{R_i} f_X(x) x dx.

Equating this to zero leads to the equation

  y ∫_{R_i} f_X(x) dx = ∫_{R_i} f_X(x) x dx,

the solution of which is

  y*_i = ( ∫_{R_i} f_X(x) x dx ) / ( ∫_{R_i} f_X(x) dx ).   (34)

As in the scalar case, (34) has a clear probabilistic meaning: it is the conditional expected value of the random variable X, given that X is in R_i. A more physical interpretation of (34) is that y*_i is the center of (probabilistic) mass of region R_i.

The partial problem (29) has a solution similar to that of (6): given a codebook C = {y_0, ..., y_{N−1}}, the best possible encoder is one that chooses, for each x, the closest representative. In conclusion, the optimal regions are given by

  R*_i = {x : ‖x − y_i‖² ≤ ‖x − y_j‖², j ≠ i}, for i = 0, ..., N−1,   (35)

Figure 2: Example of Voronoi regions for a set of points in ℝ².

that is, R*_i is the set of points that are closer to y_i than to any other element of the codebook. Whereas in the scalar case these regions were simply intervals, in ℝ^n the optimal regions may have a more complex structure. The N regions that partition ℝ^n according to (35) are called the Voronoi regions (or Dirichlet tessellation) corresponding to the set of points {y_0, ..., y_{N−1}}. An important property of Voronoi regions (the proof is beyond the scope of this text) is that they are necessarily convex, thus a Lloyd vector quantizer is necessarily regular. Figure 2 illustrates the concept of Voronoi regions in ℝ².

2.2.3 The Lloyd Algorithm

The Lloyd algorithm for VQ design works exactly as its scalar counterpart.

Step 1: Given the current codebook C^(t) = {y_0^(t), ..., y_{N−1}^(t)}, obtain the optimal regions

  R_i^(t) = {x : ‖x − y_i^(t)‖² ≤ ‖x − y_j^(t)‖², j ≠ i}, for i = 0, ..., N−1;

Step 2: Given the current regions R^(t) = {R_0^(t), ..., R_{N−1}^(t)}, update the representatives

  y_i^(t+1) = ( ∫_{R_i^(t)} f_X(x) x dx ) / ( ∫_{R_i^(t)} f_X(x) dx ), for i = 0, ..., N−1;

Step 3: Check some stopping criterion; if it is satisfied, stop; if not, set t ← t + 1 and go back to Step 1.

A typical stopping criterion would be to check if the maximum squared distance between two consecutive positions of the codebook elements is less than some threshold; that is, the algorithm would be stopped if the following condition is satisfied:

  max_i ‖y_i^(t) − y_i^(t+1)‖² ≤ ε.   (36)
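The same iteration can be sketched in ℝ^n by working with samples instead of integrals (sample averages replace the centroid integrals, as in the scalar LBG algorithm); NumPy, the 2-D Gaussian source, and the random initialization are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50_000, 2))                 # samples of the source in R^2

def lbg_vq(X, codebook, n_iter=200, eps=1e-12):
    Y = np.asarray(codebook, float)
    for _ in range(n_iter):
        # Step 1: Voronoi assignment, as in (35).
        d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Step 2: each representative moves to its region's center of mass;
        # an empty region (possible in principle) keeps its old representative.
        Y_new = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                          else Y[i] for i in range(len(Y))])
        done = ((Y - Y_new)**2).sum(axis=1).max() < eps   # stopping rule (36)
        Y = Y_new
        if done:
            break
    return Y

Y = lbg_vq(X, rng.normal(size=(8, 2)))           # 3 bits per 2-D vector
d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(axis=2)
mse = d2.min(axis=1).mean() / 2                  # per-coordinate MSE
print(Y.shape, mse)
```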

2.2.4 Zero Mean Quantization Error of Lloyd Quantizers

The property of scalar Lloyd quantizers shown in Subsection 1.2.5 (that the quantization error has zero mean) is still valid in the vector case. Notice that the derivation carried out in Subsection 1.2.5 can be directly applied in ℝ^n. Thus, it is still true that for VQs that satisfy the conditions (34) and (35), called Lloyd VQs, the mean of the quantization error is zero, that is, E[Q(X) − X] = 0, or, equivalently, E[Q(X)] = E[X].

2.2.5 The Linde-Buzo-Gray Algorithm

The Linde-Buzo-Gray algorithm for the vector case has exactly the same structure as in the scalar case, so it will not be described again. The only practical detail which is significantly different in the vector case is an increased sensitivity to initialization; thus, when using this algorithm to obtain a VQ, care has to be taken in choosing the initialization of the algorithm. For further details on this and other aspects of the LBG algorithm, the interested reader is referred to [2].

2.3 High-Resolution Approximation

2.3.1 General Case

As in the scalar case, it is possible to obtain approximate expressions for the MSE of high-resolution VQs, from which some insight into their performance may be obtained. In the high-resolution regime, just as in the scalar case, the key assumption is that the regions/cells are small enough to allow approximating the pdf of X by a constant inside each region. With this approximation, the MSE can be written as

  MSE = (1/n) Σ_i ∫_{R_i} f_X(x) ‖x − y_i‖² dx
      ≈ (1/n) Σ_i f_X(y_i) ∫_{R_i} ‖x − y_i‖² dx
      = (1/n) Σ_i f_X(y_i) V_i (1/V_i) ∫_{R_i} ‖x − y_i‖² dx,   (37)

where V_i = ∫_{R_i} dx is the volume (area, in ℝ²; length, in ℝ) of region R_i. Noticing that p_i = P[X ∈ R_i] ≈ f_X(y_i) V_i, we have

  MSE ≈ (1/n) Σ_i p_i ( ∫_{R_i} ‖x − y_i‖² dx ) / ( ∫_{R_i} dx ).   (38)

Unlike in the scalar case, where the quantity multiplying each p_i can be shown to be Δ_i²/12, the integrals involved here do not always have closed-form expressions, and sometimes cannot even be computed exactly.

However, if we are in the presence of a Lloyd quantizer, y_i is the center of mass of region R_i, thus the quantity

  ( ∫_{R_i} ‖x − y_i‖² dx ) / V_i   (39)

can be recognized as the moment of inertia of the region R_i about its center of mass, if the total mass is one and the density is uniform.

2.3.2 Uniform VQ

To make some progress, we now assume that we are in the presence of a uniform VQ, that is, one in which all the regions have the same shape and size; in other words, the regions R_0, R_1, ..., R_{N−1} only differ from each other by a shift of location. In this condition, it is clear that the value of both the numerator and the denominator of (39) is the same for all cells: the denominator is simply the volume, which of course does not depend on the location; the numerator, after the change of variable z = x − y_i, can be written, for any i, as

  ∫_{R_i} ‖x − y_i‖² dx = ∫_R ‖z‖² dz,

where R denotes a region with the same volume and shape as all the R_i's, but such that its center of mass is at the origin. The MSE expression thus simplifies to

  MSE ≈ (1/(n V(R))) ( ∫_R ‖x‖² dx ) Σ_{i=0}^{N−1} p_i = (1/(n V(R))) ∫_R ‖x‖² dx.   (40)

Expression (40) shows that the MSE of a high-resolution uniform VQ depends only on the volume and the shape of the quantization cells. This can be made even more explicit by rewriting it as

  MSE ≈ (V(R)^{2/n} / n) · (1/V(R))^{2/n} (1/V(R)) ∫_R ‖x‖² dx = (V(R)^{2/n} / n) M(R),   (41)

where the second factor, denoted M(R), depends only on the shape (not the volume, as we will prove next) and the first factor, V(R)^{2/n}, depends only on the volume. To prove that M(R), called the normalized moment of inertia, is independent of the volume, we show that it is invariant to a change of scale, that is, M(cR) = M(R), for any c ∈ ℝ⁺:

  M(cR) = (1/V(cR))^{2/n} (1/V(cR)) ∫_{cR} ‖x‖² dx   (42)
        = (1/(c^n V(R)))^{2/n} (1/(c^n V(R))) ∫_R ‖cz‖² c^n dz   (43)

        = c^{−2} c^{−n} c^{2+n} (1/V(R))^{2/n} (1/V(R)) ∫_R ‖z‖² dz   (44)
        = M(R).   (45)

The volume of the regions (which is the same for all regions in a uniform quantizer) depends only on the number of regions and on the volume of the support of f_X(x), denoted V(B). Of course, for a source X with unbounded support (for example, a Gaussian), the support is the whole space B = ℝ^n, and this reasoning does not apply exactly. However, as in the scalar case, we can identify some region outside of which the probability of finding X is arbitrarily small, and consider that as the support B. For a given support, the region volume V(R) is simply the total volume of the support, divided by the number of regions, that is,

  V(R) = V(B)/N = V(B) 2^{−R}.   (46)

Inserting this expression in (41) leads to

  MSE ≈ (1/n) V(B)^{2/n} 2^{−2R/n} M(R),   (47)

showing that, as in the scalar case (see (23)), the MSE decreases exponentially with R. In the scalar case, we have n = 1 and (47) becomes similar to (23),

  MSE ≈ V(B)² 2^{−2R} M(R),

where we identify the volume of the support as V(B) = A and M(R) = 1/12. However, for n > 1, the MSE decreases more slowly as R increases, since the exponent is −2R/n; for example, in ℝ², each extra bit only decreases the MSE by a factor of 2 (instead of 4, as in the scalar case); as another example, in ℝ^20, we have 2^{2/20} = 2^{1/10} ≈ 1.072, thus each extra bit only reduces the MSE by a factor of approximately 1.072. In logarithmic units, we can write (as in Section 1.3.1)

  SNR ≈ (K + (6/n) R) dB,

showing that each extra bit in the quantizer resolution achieves an improvement of approximately (6/n) dB in the quantization SNR.

Concerning the shape factor M(R), there is a crucial difference between the scalar case (n = 1) and the vector case (n > 1). In the scalar case, the only possible convex set is an interval, and it is easy to verify that M(R) = 1/12. However, for n > 1, we have some freedom in choosing the shape of the quantization cells, under the constraint that this shape allows a partition (or tessellation) of the support.

2.3.3 Optimal Tessellations

After decoupling the high-resolution approximation of the MSE into a factor that depends only on the volume of the regions (thus on the number of regions) and another factor that depends only on the shape, we can concentrate on studying the effect of the region shapes.

It is known that the shape with the smallest moment of inertia, for a given mass and volume, is a sphere (a circle, in ℝ²). Although spherical regions cannot be used, because they do not partition the space, they provide a lower bound on the moment of inertia. Let us thus compute the factor M(S₂), where S₂ is a sphere (a circle) in ℝ²; since, as seen above, this quantity does not depend on the size of the region, we consider unit radius. The volume of a sphere of unit radius in ℝ^n, denoted S_n, is known to be

  V(S_n) = π^{n/2} / Γ(n/2 + 1),

where Γ denotes Euler's gamma function. For n = 2, since Γ(2) = 1, we obtain V(S₂) = π, which is the well-known area of the unit circle. For n = 3, since Γ(5/2) = 3√π/4, we obtain the also well-known volume of the 3-dimensional unit sphere, V(S₃) = 4π/3. The other quantity needed to obtain M(S₂) (see (44)) is ∫_{S₂} ‖z‖² dz, which is more convenient to compute in polar coordinates, that is,

  ∫_{S₂} ‖z‖² dz = ∫_0^{2π} ∫_0^1 ρ² ρ dρ dθ = 2π ∫_0^1 ρ³ dρ = π/2.

Plugging these results into the definition of M(S₂) (see (44)), we finally obtain

  M(S₂) = (1/V(S₂))² ∫_{S₂} ‖z‖² dz = (1/π²)(π/2) = 1/(2π) ≈ 0.1592.

Let us now compute M(C₂), where C_n denotes the cubic region of unit side in ℝ^n; for n = 2, this is a square of unit side. Of course, V(C_n) = 1, for any n, since the volume of a cube of side d in ℝ^n is simply d^n. As for the quantity ∫_{C₂} ‖z‖² dz, the integration can be carried out easily as follows:

  ∫_{C₂} ‖z‖² dz = ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} (z₁² + z₂²) dz₁ dz₂
                = ∫_{−1/2}^{1/2} z₁² dz₁ + ∫_{−1/2}^{1/2} z₂² dz₂
                = 2 ∫_{−1/2}^{1/2} z² dz = 1/6.

Since V(C₂) = 1, we have M(C₂) = (1/(2 · 1²)) · (1/6) = 1/12 ≈ 0.0833, showing that, as expected, using square quantization regions leads to a higher quantization noise than what would be obtained if circular regions could be used (which they cannot). The fundamental questions are: is there any other shape with which it is possible to cover Rⁿ and which leads to a smaller MSE (that is, has a lower moment of inertia)? Is there an optimal shape? In R², the answer to these questions is positive: yes, the optimal shape is a regular hexagon. For general Rⁿ, with n > 2, the answer to these questions is still an open problem. The proof of optimality of the hexagonal VQ is beyond the scope of this text; however, we can compute M(H), where H denotes a hexagonal region centered at the origin, and confirm that it is lower than M(C₂) but larger than M(S₂). Since M(H) does not depend on the size of H, we consider a hexagon with unit apothem, h = 1 (recall that the apothem is the distance from the center to the mid-point of one of the sides). In this case, using the well-known formula for the area of a regular polygon as a function of the apothem,

    V(H) = 6 h² tan(π/6) = 6/√3.

Finally, to compute the integral of ‖z‖² over the hexagon, we notice that this function has circular symmetry and that the hexagon can be split into 12 similar triangles, one of which is given by T = {z = (z₁, z₂) : 0 ≤ z₁ ≤ 1 and 0 ≤ z₂ ≤ z₁/√3}. Consequently,

    ∫_H ‖z‖² dz = 12 ∫₀^1 ∫₀^{z₁/√3} (z₁² + z₂²) dz₂ dz₁    (56)
                = 12 ( ∫₀^1 (z₁³/√3) dz₁ + ∫₀^1 (z₁³/(9√3)) dz₁ )    (57)
                = 12 · (1/4) · (1/√3 + 1/(9√3)) = 10/(3√3).

Combining this quantity with the volume V(H) = 6/√3, we finally have

    M(H) = (1/(2 V(H)²)) ∫_H ‖z‖² dz    (58)
         = (1/24) · 10/(3√3) = 5/(36√3) ≈ 0.0802.    (59)

Comparing this value with the previous ones (M(S₂) ≈ 0.0796 and M(C₂) ≈ 0.0833), we can conclude that the hexagonal VQ is indeed better than the cubical one, with a normalized moment of inertia only about 0.8% larger than that of a circle (which cannot be used, as explained above).
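The closed-form values above are easy to sanity-check numerically. The sketch below is an illustrative aside, not part of the original notes (names such as `normalized_moment` are my own): it evaluates V(Sₙ) = π^(n/2)/Γ(n/2+1) with the standard library's gamma function, and estimates M for the unit circle, the unit square, and the unit-apothem hexagon by Monte Carlo integration, which should land near 1/(4π) ≈ 0.0796, 1/12 ≈ 0.0833, and 5/(36√3) ≈ 0.0802, respectively.

```python
import math
import random

def ball_volume(n):
    """Volume of the unit-radius ball in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def normalized_moment(inside, volume, box, n_pts=400_000, seed=7):
    """Monte Carlo estimate of M(R) = (1/(n V(R)^(1+2/n))) * integral of
    ||z||^2 over a centered region R in R^2, sampling in [-box, box]^2."""
    n = 2
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pts):
        z1, z2 = rng.uniform(-box, box), rng.uniform(-box, box)
        if inside(z1, z2):
            total += z1 * z1 + z2 * z2
    integral = (total / n_pts) * (2 * box) ** 2
    return integral / (n * volume ** (1 + 2 / n))

def in_circle(z1, z2):                 # unit radius
    return z1 * z1 + z2 * z2 <= 1.0

def in_square(z1, z2):                 # unit side, centered at the origin
    return abs(z1) <= 0.5 and abs(z2) <= 0.5

# hexagon with unit apothem: intersection of six half-planes whose
# outward normals point at angles k*60 degrees
_HEX_NORMALS = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
def in_hexagon(z1, z2):
    return all(nx * z1 + ny * z2 <= 1.0 for nx, ny in _HEX_NORMALS)

print(ball_volume(2), math.pi)           # area of the unit circle
print(ball_volume(3), 4 * math.pi / 3)   # volume of the unit 3-d ball

m_circle = normalized_moment(in_circle, math.pi, 1.0)
m_square = normalized_moment(in_square, 1.0, 0.5)
m_hex = normalized_moment(in_hexagon, 6 / math.sqrt(3), 2 / math.sqrt(3))
print(m_circle, 1 / (4 * math.pi))
print(m_square, 1 / 12)
print(m_hex, 5 / (36 * math.sqrt(3)))
```

The estimates reproduce the ordering M(S₂) < M(H) < M(C₂) derived above.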

References

[1] Q. Du, M. Emelianenko, and L. Ju, "Convergence of the Lloyd algorithm for computing centroidal Voronoi tessellations," SIAM Journal on Numerical Analysis, vol. 44, no. 1, pp. 102–119, 2006.

[2] A. Gersho and R. Gray, Vector Quantization and Signal Compression. Kluwer Academic Publishers, 1992.

[3] R. Gray, "Vector quantization," IEEE Acoustics, Speech, and Signal Processing Magazine, vol. 1, no. 2, 1984.

[4] R. Gray and D. Neuhoff, "Quantization," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2325–2383, 1998.


More information

Edge Isoperimetric Inequalities

Edge Isoperimetric Inequalities November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Complete subgraphs in multipartite graphs

Complete subgraphs in multipartite graphs Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information