TELE4652 Mobile and Satellite Communication Systems


Lecture 7: Equalisation, Diversity, and Channel Coding

In this lecture we'll look at three complementary technologies that allow us to obtain high-quality transmission over the radio interface, and whose development was crucial to the success of modern digital cellular networks. The first of these is equalisation, a family of techniques that implement an adaptive filter at the receiver to compensate for the inter-symbol interference (ISI) introduced by the multipath delay spread in a high-speed mobile channel. Then we will consider diversity, a set of techniques to identify the individual and independent multipath components and combine them in such a way as to obtain a stronger signal at the receiver. Finally we will take a brief excursion into the vast field of channel coding, where additional parity bits are inserted into the transmitted data stream to facilitate error detection and error correction. Without these three techniques to improve the performance of the radio link, it is inconceivable that mobile cellular networks would have reached the high level of sophistication and performance that they have today.

Equalisation

As we discussed in the lecture on radio channel modelling, whenever the RMS delay spread due to multipathing is larger than the symbol period, $\sigma_\tau > T_s$, there will be inter-symbol interference. We classify such a channel as frequency-selective fading, since the coherence bandwidth of the channel is smaller than the signal bandwidth, so the various frequency components in the signal will be attenuated by different amounts in their passage through the channel. With the growing demand for ever-increasing data rates over the air interface our channels are inevitably frequency selective, and the resultant inter-symbol interference is something that we must live with. The effect of this ISI on the ability of the receiver to correctly recover the data can be disastrous. The overlap of data symbols produces a noise and error floor in detection, and equalisation is the term coined for a collection of techniques to remove this ISI and as a result improve the receiver's noise performance. The name comes from the analogous operation in audio engineering, since the equaliser can be thought of as a filter which re-balances the frequency components in the signal, whose relative amplitudes have been distorted by the frequency-selective channel, back to the original transmitted spectral balance. The diagram illustrates the equaliser as a channel inverse filter. There are a couple of ideas that we can take from this representation. The first is that, since the mobile channel is time-varying, understood from the Doppler spread and quantified as the channel coherence time, the equaliser must be adaptive too.

Thus, all practical equalisers are adaptive equalisers, where the equaliser filter changes in response to changes in the radio channel. The standard way to realise an adaptive equaliser is through training and tracking. Firstly, the transmitter sends a known, pre-defined sequence, called a training sequence, to the receiver. This is called training. The receiver obtains the training sequence after it has passed over the radio channel, and by comparing the received training sequence to what it knows was sent, it can determine the properties of the channel and update its equaliser filter accordingly. This is called tracking. The basic structure of an adaptive equaliser is shown in the diagram below. The characteristics of the radio channel, expressed as the coherence time and coherence bandwidth, determine how adaptive equalisation must be performed. The channel coherence time determines how often the training sequence must be sent and the equaliser filter updated, as the channel coherence time quantifies the rate at which the mobile channel changes. The RMS delay spread determines how long the training sequence must be: it must be long enough to see the longest multipath component of the channel, so it can compensate for its effect. Clearly adaptive equalisation involves a cost in terms of radio resources, since some bandwidth must be sacrificed and devoted to this training sequence. However, the performance gain in removing this ISI at the receiver makes this cost in resources more than worthwhile.

The other important insight is that simply implementing an inverse filter for the radio channel would be disastrous in terms of noise enhancement. To see this, let $S(f)$ be the spectrum of the transmitted signal and $H(f)$ be the channel transfer function. Then the spectrum of the received signal, along with additive noise, can be written as

$$Y(f) = H(f)S(f) + N(f)$$

The inverse channel filter equaliser is then

$$H_{eq}(f) = \frac{1}{H(f)}$$

The output of the equaliser is then

$$H_{eq}(f)Y(f) = S(f) + \frac{N(f)}{H(f)}$$

where the noise at the output of the equaliser is coloured, with power spectral density

$$S_N(f) = \frac{N_0}{2\,|H(f)|^2}$$

The output noise power will be very large at any frequencies for which the channel has spectral nulls. Thus, implementing a successful equaliser is a little more complicated than simply performing an inverse channel filter at the receiver. A trade-off must be found between ISI removal and noise performance.

In modern digital communication systems equalisation is performed digitally, acting as a digital filter on a symbol-by-symbol basis. In its simplest manifestation it is an FIR filter at the receiver. The input to the equaliser is the received symbol sequence, $\{y_k\}$, fed from an A/D converter and demodulator, but prior to the decision maker. Due to ISI each of these received symbols will contain contributions from earlier and possibly later transmitted symbols. The equaliser can then remove this ISI by subtracting a linear combination of earlier and later received symbols,

$$\hat{x}_n = \sum_{k=-L}^{M} w_k\, y_{n-k}$$

where the length of the filter, M + L, is determined by the multipath spread, and the filter tap weights $\{w_k\}$ are determined by some adaptive algorithm based on the received training sequence. Note that causality is not necessarily required here, as long as we are prepared to accept a delay on the output at the receiver to wait for later symbols to be received (which may or may not be acceptable, depending on the application). The FIR filter corresponding to the equaliser is then

$$H(z) = \sum_{k=-L}^{M} w_k\, z^{-k}$$

There are three basic characteristics used to describe and classify practical adaptive equalisers. The first is the type of equalisation performed: whether it is symbol-by-symbol, involves feedback of past decisions, or acts on the received sequence as a whole. The second is how the equaliser filter is implemented, usually either a transversal or a lattice structure. The final characteristic of an equaliser is the adaptive algorithm employed to track the channel changes. The diagram below illustrates the main types of equalisers.
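To make the symbol-by-symbol linear equaliser concrete, here is a minimal Python sketch that applies a short FIR equaliser to a received BPSK symbol stream. The two-path channel, noise level, and tap values are illustrative assumptions, not parameters from the lecture.

```python
import numpy as np

def fir_equalise(y, w, L):
    """Apply a (possibly non-causal) FIR equaliser to received symbols y.

    w holds the taps w_{-L}, ..., w_M in order, so the output is
    x_hat[n] = sum_k w[k] * y[n - k] with k running from -L to M.
    """
    M = len(w) - L - 1
    x_hat = np.zeros(len(y), dtype=complex)
    for n in range(len(y)):
        for i, k in enumerate(range(-L, M + 1)):
            if 0 <= n - k < len(y):
                x_hat[n] += w[i] * y[n - k]
    return x_hat

# Illustrative two-path channel: y[n] = x[n] + 0.5*x[n-1] + noise
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=200)             # BPSK symbols
y = x + 0.5 * np.concatenate(([0.0], x[:-1]))     # ISI from the echo
y += 0.05 * rng.standard_normal(len(y))

w = np.array([1.0, -0.5, 0.25])                   # short causal equaliser (L = 0)
x_hat = fir_equalise(y, w, L=0)
print("symbol errors:", np.sum(np.sign(x_hat.real) != x))
```

The chosen taps are simply a truncated series expansion of the inverse of the assumed channel, which is the kind of coefficient set the adaptive algorithms discussed later would converge towards.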

Beginning with the second point, merely as it is the one for which the least needs to be said: the following diagrams show the FIR filter implemented first as a transversal structure and then as a lattice structure. The main point here is that, while the lattice structure is a more complex, recursive realisation, it has several practical advantages over the basic transversal filter implementation. The lattice implementation of the FIR filter offers superior numerical stability with regard to quantisation noise and rounding errors from finite-precision arithmetic. It also facilitates faster convergence in the determination of the filter tap weights, and is more suited to dynamic length assignment. For this course we'll imagine the equaliser filter to be transversal, for algebraic simplicity. Students should be aware, however, that both structures are possible and in a sense equivalent (in the sense that, given a transversal filter, one could construct an equivalent lattice filter with the same transfer function).

As for equaliser types, the linear symbol-by-symbol equaliser is the simplest to understand. It merely consists of the FIR filter, implemented either in lattice or transversal form, with coefficients determined by an adaptive algorithm, operating to equalise each received symbol one at a time. More sophisticated structures are Maximum Likelihood Sequence Estimation (MLSE) and Decision Feedback Equalisation (DFE). MLSE is the optimal implementation of the equaliser structure and, in fact, the optimal structure of a receiver in general for a channel with memory. Rather than try to implement an inverse equaliser filter acting on a symbol-by-symbol basis, the MLSE instead makes a decision on the transmitted sequence as a whole. That is, it waits and decides on a group of symbols together. The basic structure of the MLSE is an iterative estimation loop, whereby a channel estimator is implemented to mimic the action of the channel. The channel estimate and the current symbol estimates can be compared to the received symbols, and based on the difference the channel and symbol estimates can be iteratively refined.

The major issue here is that the MLSE is very computationally intensive, particularly when the channel delay spread is large, so the MLSE must act on a large number of symbols at one time. Often an MLSE equaliser is implemented along with the Viterbi algorithm to perform the search. We will soon study this algorithm in a different guise, to decode convolutionally encoded data. In high data rate applications the computational overheads and delay introduced mean that MLSE is often not the chosen equaliser type, and a simpler, though less optimal, equaliser is implemented. A popular choice of equaliser is the Decision Feedback Equaliser (DFE). It is not as complex or computationally intensive as the MLSE, yet can produce more than adequate performance. The basic idea of the DFE is that the previously estimated symbols can be used, along with an estimate of the channel, to estimate the ISI affecting the current symbol. A feedback filter can then be used to subtract off this estimated ISI. The symbol estimate is thus the output of a feedforward filter acting on the received symbols combined with a feedback filter acting on past decisions,

$$\hat{d}_k = \sum_{n=-N_1}^{0} w_n\, y_{k-n} + \sum_{i=1}^{N_2} v_i\, \hat{d}_{k-i}$$

Neither the DFE nor the MLSE suffers from the noise enhancement problem, since rather than attempt to implement an inverse channel filter they come at the problem the other way: they determine an estimate of the channel and account for the ISI from this.

The final issue in equalisation is the technique used to estimate the adaptive filter tap weights. This can be done in a single iteration, such as by the zero-forcing (ZF) or MMSE approaches, or, as is most common, by an iterative technique such as the LMS or RLS algorithms. The zero-forcing (ZF) approach is to precisely determine the best FIR filter approximation to the channel inverse filter. It is seldom used in practice, because of the noise enhancement problem, though it is perhaps the easiest to understand. The function of the training sequence is to allow the receiver to estimate the channel impulse response, and hence the channel transfer function $H(z)$. This can be done by a deconvolution technique or similar. The zero-forcing equaliser is then defined to be

$$H_{ZF}(z) = \frac{1}{H(z)}$$
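In practice, as discussed next, this ideal inverse is IIR and an FIR approximation of chosen order is used instead. A minimal sketch of that least-squares fit follows: choose equaliser taps so that the cascade of channel and equaliser approximates a pure delay. The channel taps, filter order, and delay are illustrative assumptions.

```python
import numpy as np

def zero_forcing_taps(h, num_taps, delay):
    """Least-squares FIR approximation to the inverse of channel h.

    Solves min_w || conv(h, w) - delta(delay) ||^2, i.e. pushes the
    combined channel + equaliser response towards a single unit spike.
    """
    n = len(h) + num_taps - 1
    # Convolution matrix: column k is h shifted down by k samples.
    H = np.zeros((n, num_taps))
    for k in range(num_taps):
        H[k:k + len(h), k] = h
    d = np.zeros(n)
    d[delay] = 1.0                      # desired overall response: pure delay
    w, *_ = np.linalg.lstsq(H, d, rcond=None)
    return w

h = np.array([1.0, 0.4, 0.2])           # illustrative channel impulse response
w = zero_forcing_taps(h, num_taps=7, delay=2)
print("combined response:", np.round(np.convolve(h, w), 3))
```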

Since $H(z)$ will be FIR (even if the channel impulse response is not finite, we will only be able to measure it over a finite number of symbols anyway), the desired equaliser is typically IIR. Thus the best FIR approximation of a given order is sought. The aim is to choose the equaliser coefficients $\{w_{-L}, \dots, w_M\}$ to minimise

$$\left| H(z)\left(w_{-L}z^{L} + \dots + w_M z^{-M}\right) - 1 \right|^2$$

or some similar metric.

A more popular solution, with significantly superior noise performance, is the Minimum Mean Square Error (MMSE) approach. The idea is to choose the equaliser coefficients that produce the smallest square difference between the equaliser output and the known training symbols. It has a lot in common with the familiar Wiener filter from signal processing and the Kalman filter from digital control theory.

To construct the MMSE solution, denote by $\mathbf{y}_n = [y_n, y_{n-1}, \dots, y_{n-M}]^T$ a vector formed from the previous M received symbols, corresponding to the known training symbols $x_n, x_{n-1}, \dots, x_{n-M}$ being sent down the channel. The equaliser filter is of order M, and we denote the vector formed by its coefficients as $\mathbf{w} = [w_0, w_1, \dots, w_M]^T$. This allows us to express the filter output in vector notation,

$$\hat{x}_n = \mathbf{w}^T \mathbf{y}_n = \sum_{k=0}^{M} w_k\, y_{n-k}$$

The error of the equaliser is then the difference between the actual equaliser output and the known training symbol,

$$e_n = x_n - \hat{x}_n = x_n - \mathbf{w}^T \mathbf{y}_n$$

The natural quantity of interest is the mean square error,

$$\mathrm{MSE} = \zeta = E[e_n^2] = E[x_n^2] - 2\mathbf{p}^T\mathbf{w} + \mathbf{w}^T \mathbf{R}\,\mathbf{w}$$

where $\mathbf{p} = E[x_n \mathbf{y}_n]$ is the correlation vector, which measures the commonality between the received symbols at each time and the current known training symbol, and $\mathbf{R} = E[\mathbf{y}_n \mathbf{y}_n^T]$ is the covariance matrix of the received symbols. These quantities could be determined from a statistical model of the channel, though in practice they are calculated as averages over the received training symbols. We seek the equaliser coefficients that minimise this mean square error. Taking the vector derivative of $\zeta$ with respect to $\mathbf{w}$,

$$\nabla\zeta = \left[\frac{\partial\zeta}{\partial w_0}, \frac{\partial\zeta}{\partial w_1}, \dots, \frac{\partial\zeta}{\partial w_M}\right]$$

gives

$$\nabla\zeta = 2\mathbf{R}\mathbf{w} - 2\mathbf{p} = \mathbf{0}$$

and the optimal equaliser, optimal in the MMSE sense, is

$$\mathbf{w}_{MMSE} = \mathbf{R}^{-1}\mathbf{p}$$
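A minimal sketch of this block MMSE solution: estimate $\mathbf{R}$ and $\mathbf{p}$ as averages over a received training sequence and then solve $\mathbf{R}\mathbf{w} = \mathbf{p}$. The channel, noise level, and training length used here are illustrative assumptions.

```python
import numpy as np

def mmse_equaliser(y, x, order):
    """Estimate MMSE equaliser taps w = R^{-1} p from training data.

    y : received training symbols, x : known transmitted training symbols.
    """
    M = order
    R = np.zeros((M + 1, M + 1))
    p = np.zeros(M + 1)
    count = 0
    for n in range(M, len(y)):
        y_vec = y[n - M:n + 1][::-1]      # [y_n, y_{n-1}, ..., y_{n-M}]
        R += np.outer(y_vec, y_vec)       # accumulate covariance matrix
        p += x[n] * y_vec                 # accumulate correlation vector
        count += 1
    R /= count
    p /= count
    return np.linalg.solve(R, p)          # solve R w = p without forming R^{-1}

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=500)                     # training symbols
y = np.convolve(x, [1.0, 0.5, 0.2])[:len(x)]              # illustrative ISI channel
y += 0.1 * rng.standard_normal(len(y))
w = mmse_equaliser(y, x, order=6)
print("MMSE taps:", np.round(w, 3))
```

Using a linear solve rather than an explicit inverse reflects the numerical concerns raised in the next paragraph.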

It is possible to show that this equaliser doesn't suffer from the usual noise enhancement problem. For a channel with transfer function $H(z)$ and AWGN, it can be shown that the MMSE solution has a transfer function

$$H_{MMSE}(z) = \frac{1}{H(z) + N_0}$$

The weakness of the MMSE approach lies in the inversion of the covariance matrix $\mathbf{R}$. For large equalisers this is quite a big matrix, and the inversion procedure can be very computationally intensive and numerically unstable (the rows can be almost linearly dependent, as they are determined by the signal-to-noise level). Thus, in practice, iterative approaches are used to avoid the need for matrix inversion.

To understand where the iterative approaches come from, we can first see that the selection of the optimal filter is really the solution of a simple, convex, M-dimensional optimisation problem. We seek to choose the set $\mathbf{w}$ to minimise the quadratic form

$$J(\mathbf{w}) = \zeta = E[x_n^2] - 2\mathbf{p}^T\mathbf{w} + \mathbf{w}^T\mathbf{R}\,\mathbf{w}$$

One way to find the global minimum would be to move on this M-dimensional surface from our current guess towards the global minimum in the direction of steepest descent. We would then update our equaliser coefficients as

$$\mathbf{w}^{(k+1)} = \mathbf{w}^{(k)} + \frac{\alpha}{2}\left[-\nabla J(\mathbf{w}^{(k)})\right]$$

where $\alpha$ is the step size, which represents how far we move on each iteration. If $\alpha$ is small we converge slowly, though we find the eventual solution quite accurately. On the other hand, if $\alpha$ is large we move rapidly on the surface but run the danger of instability: essentially we continually hop over the desired solution on successive iterations.

Here, $\nabla J(\mathbf{w}) = 2(\mathbf{R}\mathbf{w} - \mathbf{p})$, and the steepest descent algorithm updates the equaliser coefficients iteratively as

$$\mathbf{w}^{(k+1)} = \mathbf{w}^{(k)} + \alpha\left[\mathbf{p} - \mathbf{R}\mathbf{w}^{(k)}\right]$$

Notice that there is no need for matrix inversions here. In general, though, we do not even need to determine the correlation vector and covariance matrix; simpler, more quickly computed approximations suffice. A simple and popular iterative algorithm is the Least Mean Squares (LMS) algorithm, which updates our approximate solution in proportion to the current error. The algorithm, which can be run on a symbol-by-symbol basis, updates the equaliser as follows. Iterate over $k$:

1. Find the current equaliser output, $\hat{x}_k = (\mathbf{w}^{(k)})^T \mathbf{y}_k$.
2. Calculate the current error, $e_k = x_k - \hat{x}_k$.
3. Update the equaliser coefficients as $\mathbf{w}^{(k+1)} = \mathbf{w}^{(k)} + \alpha\, e_k\, \mathbf{y}_k$.

For stability we require that the step size satisfy

$$0 < \alpha < \frac{2}{\sum_{i} \lambda_i}$$

where $\{\lambda_i\}$ are the eigenvalues of the covariance matrix $\mathbf{R}$. This is typically determined in proportion to the received signal power.

The LMS algorithm is quite simple to implement but its convergence is very slow. A more sophisticated algorithm is the RLS (Recursive Least Squares). It is designed to iteratively minimise the cumulative square error

$$J(n) = \sum_{i} \lambda^{n-i}\, e_i^2$$

where $\lambda$ is the forgetting factor ($0 < \lambda < 1$). The RLS procedure is:

1. Initialise $\mathbf{w}(0) = \mathbf{0}$ and $\mathbf{R}^{-1}(0) = \delta \mathbf{I}_{M\times M}$ for some large positive $\delta$.
2. Obtain a new input sample at each time $k$, and iterate for that sample. Iterate over $k$:
3. Find the current output, $\hat{x}_k = \mathbf{w}(k-1)^T \mathbf{y}_k$, and the current error, $e_k = x_k - \hat{x}_k$.

4. Update the Kalman gain, $\mathbf{k}(k) = \dfrac{\mathbf{R}^{-1}(k-1)\,\mathbf{y}_k}{\lambda + \mathbf{y}_k^T\,\mathbf{R}^{-1}(k-1)\,\mathbf{y}_k}$, and the inverse correlation matrix, $\mathbf{R}^{-1}(k) = \dfrac{1}{\lambda}\left[\mathbf{R}^{-1}(k-1) - \mathbf{k}(k)\,\mathbf{y}_k^T\,\mathbf{R}^{-1}(k-1)\right]$.
5. Finally, update the equaliser coefficients, $\mathbf{w}(k) = \mathbf{w}(k-1) + \mathbf{k}(k)\, e_k$.

The RLS algorithm is obviously more complex than the LMS, but it is found in practice to converge much more quickly to the optimal solution.

In the selection of an iterative technique to determine the coefficients of the adaptive filter, the major issues are:

1. Computational complexity: the number of multiplications to be performed on each iteration.
2. Rate of convergence: how fast the algorithm locks on to the optimal solution.
3. Misalignment and error: how closely the algorithm output approaches the optimum solution, and how robust the solution is to noise.
4. Numerical properties: sensitivity to finite-precision rounding errors.

In general there is a trade-off between computational complexity and rate of convergence. The table below summarises the typical performance of popular algorithms. In the table, N represents the number of taps in the equaliser. We have not discussed the last algorithms listed; the interested reader can find the requisite information in the corresponding textbooks and research papers.

Algorithm              Multiplications per iteration   Complexity   Convergence speed   Tracking
LMS                    2N + 1                          Low          Slow (>10N)         Poor
MMSE                   N^2 to N^3                      Very high    Fast (~N)           Good
RLS                    approx. 2.5N^2                  High         Fast (~N)           Good
Fast Kalman DFE        20N + 5                         Low          Fast (~N)           Good
Square-root LS DFE     approx. 1.5N^2                  High         Fast (~N)           Good
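As a rough illustration of the LMS update described above, the sketch below trains an adaptive equaliser from a known training sequence; the channel, training length, and step size are illustrative assumptions rather than values from the lecture. The RLS recursion of steps 1 to 5 would replace the single multiply-accumulate update with the Kalman-gain and inverse-correlation-matrix updates, converging faster at the cost of more work per symbol.

```python
import numpy as np

def lms_train(y, x, num_taps, alpha):
    """Train an FIR equaliser with the LMS rule w <- w + alpha * e_k * y_k."""
    w = np.zeros(num_taps)
    errors = []
    for k in range(num_taps, len(y)):
        y_vec = y[k - num_taps + 1:k + 1][::-1]   # [y_k, y_{k-1}, ...]
        x_hat = w @ y_vec                          # current equaliser output
        e = x[k] - x_hat                           # error against training symbol
        w = w + alpha * e * y_vec                  # LMS coefficient update
        errors.append(e ** 2)
    return w, errors

rng = np.random.default_rng(2)
x = rng.choice([-1.0, 1.0], size=2000)                     # training symbols
y = np.convolve(x, [1.0, 0.6, 0.3])[:len(x)]               # illustrative ISI channel
y += 0.05 * rng.standard_normal(len(y))

w, errors = lms_train(y, x, num_taps=8, alpha=0.01)
print("final taps:", np.round(w, 3))
print("mean squared error over last 100 symbols:", np.mean(errors[-100:]))
```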

Diversity

One might imagine that the natural characteristic of the multipath radio channel, to provide the receiver with multiple independent copies of the data stream, could be turned to our advantage. If one of the components undergoes a deep fade, then it is unlikely that another component will too, if they are really independent. Diversity techniques aim to make this a reality: identify the individual multipath components, and somehow combine them to improve the performance of our communication system. There are two basic aspects of diversity. Microdiversity considers techniques to combat the effects of small-scale fading. Macrodiversity, on the other hand, looks at ways to mitigate the effects of large-scale shadowing due to buildings and other obstructions. Macrodiversity is commonly implemented at the network level by combining the signals received at different base stations. The principles behind each type of diversity are the same; however, macrodiversity is only implemented at the higher network layers. We'll generally focus here on microdiversity techniques, as the extension to macrodiversity situations is easily made.

The aim of diversity is to obtain as many different independent versions of the received signal, each called a diversity branch, at the receiver as possible. By having multiple independent copies of the data signal, the probability of an outage, that is, that our received signal is below the acceptable threshold SNR, will be reduced. The questions then are: how can we obtain diversity branches in practice? How can we combine these multiple independent signals effectively at the receiver? What is the performance improvement that results? It is important to first appreciate, though, that the best performance comes if the branches are independent. Thinking in terms of elementary probability theory, with two branches, denoted A and B, the probability that the joint communication link is successful is

$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$

which is maximised when A and B are independent, so that $P(A \cap B) = P(A)P(B)$.

There are many ways that diversity can be achieved in a practical cellular system. Base stations can use several separate antennas, and as long as the antennas are spaced by a sufficient distance, the signals received at each antenna can be assumed to fade independently. This is called space diversity. Another solution is to use polarised antennas at the base station, since the two different polarisation components of the RF signal will propagate through the radio channel in very different ways. This is polarisation diversity.

If we transmit the signal at two different frequencies separated by more than the channel coherence bandwidth, then these copies of the signal will fade independently and we get frequency diversity. Finally, if we send the same signal at two different times separated by more than the channel coherence time, we obtain time diversity.

Having obtained these independent copies of the signal at the receiver, there are three common ways to combine them to get a stronger resultant signal. The first is called Selection Combining (SC), where we simply select the branch with the strongest SNR as the output signal. All this requires is that the receiver monitor and measure the signal strength on each branch; no complicated co-phasing is required. The second technique is called Equal Gain Combining (EGC). Here the receiver co-phases the signals (compensates for the different time delays) and sums them. The final technique, Maximum Ratio Combining (MRC), is the optimal one. In MRC the receiver co-phases the signals and sums them together, weighting each branch with a gain proportional to the amplitude of the received signal on that branch. All of these techniques can be represented with the same structure. The receiver weights the signal on each branch with a complex gain, $\alpha_i = a_i e^{j\theta_i}$, where the factor $\theta_i$ does the co-phasing, essentially compensating for the phase of the signal received on that branch. The output combined signal is

$$r(t) = \sum_{i=1}^{M} \alpha_i\, r_i(t)$$

This is illustrated in the diagram below.

There are two main quantities used to assess the performance improvement with diversity. The first is the average increase in output SNR, called the array gain,

$$A_g = \frac{\text{average combined SNR}}{\text{average branch SNR}} = \frac{\bar\gamma_{\Sigma}}{\bar\gamma}$$

The second is the decrease in the output symbol error rate, called the diversity gain. In general, when M diversity branches are used, one finds that the symbol error rate can be approximated as

$$P_s \approx c\,\bar{\gamma}^{-\chi}$$

where $c$ depends on the type of modulation and detection used, and $\chi \le M$ is called the diversity order. The maximum diversity order that can be achieved with M diversity branches is M, obtained when MRC is used.

Let's first consider Selection Combining (SC). The output is simply the strongest branch, so the gains are all zero except for that of the strongest branch. Exact expressions for the performance improvement of this SC diversity scheme can be obtained, given models of the channels on the constituent branches. The simplest model is to assume each branch is an identical and independent Rayleigh fading channel, each having the same mean SNR, $\bar\gamma$. As such, the probability density of the SNR on a branch is

$$p(\gamma) = \frac{1}{\bar\gamma}\, e^{-\gamma/\bar\gamma}$$

For the probability of outage on a branch, we find the probability that the SNR is below some threshold level $\gamma_0$,

$$P_{out}(\gamma < \gamma_0) = \int_0^{\gamma_0} p(\gamma)\, d\gamma = 1 - e^{-\gamma_0/\bar\gamma}$$

Now the probability of an outage in the selection diversity system is the probability that all branch SNRs are below the threshold level,

$$P_{out}(\gamma_0) = \left[1 - e^{-\gamma_0/\bar\gamma}\right]^M$$

This decreases as M increases: the diversity gain. We can find the probability distribution of the output SNR by differentiating the outage probability (which represents the cumulative distribution),

$$p(\gamma) = \frac{M}{\bar\gamma}\left[1 - e^{-\gamma/\bar\gamma}\right]^{M-1} e^{-\gamma/\bar\gamma}$$

The array gain is then obtained from the average output SNR,

$$\bar\gamma_{\Sigma} = \bar\gamma \sum_{i=1}^{M} \frac{1}{i}$$

Note here the diminishing effect on the output signal level of progressively adding more and more branches. The diversity gain is considerably more difficult to calculate, as it requires assumptions regarding not only the channel model but also the modulation and detection used. Moreover, exact expressions are often not easily obtained. For the case of binary DPSK with non-coherent detection, an exact form can be found. The average BER with SC over M branches is

$$\bar{P}_{e,b} = \int_0^{\infty} \frac{1}{2} e^{-\gamma}\, p(\gamma)\, d\gamma = \frac{M}{2}\sum_{m=0}^{M-1} \frac{(-1)^m \binom{M-1}{m}}{1 + m + \bar\gamma}$$

SC falls well short of full diversity order. A simpler practical implementation of SC is called threshold combining. In this case we only switch branch if the current branch's output drops below a certain threshold.
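The closed-form SC outage probability above is easy to sanity-check by simulation. This sketch draws exponential branch SNRs (the Rayleigh-fading assumption) and compares the empirical outage rate of the best branch against $[1 - e^{-\gamma_0/\bar\gamma}]^M$; the mean SNR, threshold, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
mean_snr = 10.0          # average branch SNR (linear scale), illustrative
threshold = 5.0          # outage threshold SNR, illustrative
trials = 200_000

for M in (1, 2, 4):
    # Rayleigh fading => per-branch SNR is exponentially distributed.
    branch_snr = rng.exponential(mean_snr, size=(trials, M))
    selected = branch_snr.max(axis=1)                  # selection combining
    empirical = np.mean(selected < threshold)
    analytic = (1.0 - np.exp(-threshold / mean_snr)) ** M
    print(f"M={M}: simulated outage {empirical:.4f}, analytic {analytic:.4f}")
```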

The performance of threshold combining is very similar to SC. It is illustrated in the diagram below for two branches.

The optimum choice of branch weights is MRC, where each branch weight is proportional to the signal level on that branch. The output is, after co-phasing,

$$r(t) = \sum_{i=1}^{M} a_i\, r_i(t)$$

where $a_i$ is proportional to the amplitude of $r_i(t)$. This is optimal in the sense of maximising the output SNR. The analysis is the same as that for Selection Combining above, though more algebraically complex, as the output SNR is found to follow a $\chi^2$ distribution. The main results are:

Array gain: $\bar\gamma_{\Sigma} = M\bar\gamma$. Note that there are no diminishing returns as we add more diversity branches.

Outage probability:

$$P_{out} = 1 - e^{-\gamma_0/\bar\gamma}\sum_{k=1}^{M}\frac{(\gamma_0/\bar\gamma)^{k-1}}{(k-1)!}$$

And the general form of the symbol error probability is

$$P_{e,s} \approx a\left(\frac{b\,\bar\gamma}{2}\right)^{-M}$$

indicating that full diversity order is attained.

Equal Gain Combining (EGC) weights all of the branches equally, along with co-phasing: $a_1 = a_2 = \dots = a_M$. Its performance is only slightly worse than MRC, and given its simplicity, it is a popular choice in practical systems.

The above discussion has all been for diversity at the receiver. It is also possible to consider diversity at the transmitter, where the transmitter attempts to improve the communication link by transmitting to the receiver on different diversity branches (say from separate TX antennas). When the transmitter has knowledge of the state of the channels it has access to, the problem is identical to the receive diversity schemes we have been considering above: SC, EGC, or MRC can be used by the transmitter to improve link performance.
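A small sketch of the receiver-side combining rules just described, assuming the receiver already knows each branch's complex gain $h_i$ (its phase is removed for co-phasing, and its amplitude provides the MRC weight). The branch gains, noise level, and symbol count are illustrative assumptions.

```python
import numpy as np

def combine(r, h, method="mrc"):
    """Combine branch signals r[i, t] given known complex branch gains h[i]."""
    co_phased = np.conj(h)[:, None] / np.abs(h)[:, None] * r   # remove branch phase
    if method == "sc":                      # selection: keep the strongest branch
        return co_phased[np.argmax(np.abs(h))]
    if method == "egc":                     # equal gain: unit weight per branch
        return co_phased.sum(axis=0)
    return (np.abs(h)[:, None] * co_phased).sum(axis=0)        # MRC weighting

rng = np.random.default_rng(4)
M, n = 4, 1000
x = rng.choice([-1.0, 1.0], size=n)                            # BPSK symbols
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # Rayleigh gains
noise = 0.5 * (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n)))
r = h[:, None] * x + noise                                     # per-branch received signal

for method in ("sc", "egc", "mrc"):
    errors = np.sum(np.sign(combine(r, h, method).real) != x)
    print(f"{method.upper()}: {errors} bit errors out of {n}")
```

Running it typically shows MRC with the fewest errors and SC with the most, matching the ordering argued above.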

It is still possible to achieve full diversity order when the channel state information is unknown at the transmitter (but known at the receiver). This was demonstrated by Alamouti in a famous paper, and is known as space-time (block) coding (STBC). Space-time coding is an important aspect of MIMO (Multiple Input Multiple Output) antenna systems, and is a central component of the planned 4G cellular networks. We will discuss these ideas in a later chapter.

RAKE Receiver

A very important example of diversity principles applied to a cellular network is the RAKE receiver, a feature of CDMA networks. By their very nature, direct-sequence spread spectrum signals are susceptible to ISI caused by multipathing, since the transmitted bandwidth is large compared to the channel coherence bandwidth. However, by selecting spreading sequences with low autocorrelation, the multipath echoes will have little impact on the recovered, de-spread signal. The idea in a RAKE receiver is to identify each of the strongest multipath components by trawling through the received signal with time-shifted versions of the spreading sequence. Once a multipath component is identified, it can be co-phased and combined to produce a stronger output signal, using one of the aforementioned diversity techniques (usually MRC, for obvious reasons). Note also that the application of the RAKE receiver makes the CDMA system particularly well suited to soft, clean hand-offs. Two or more base stations can simultaneously transmit the same signal to a MS, and the signals from these different base stations will appear as separate RAKE components and be added to produce a stronger resultant signal.

Another very important example of time diversity is interleaving. Interleaving is a systematic re-ordering of the transmitted bit sequence. When combined with channel coding, which we will discuss in the next section, this produces a very efficient and high-performance communication system over the mobile fading environment. At a fundamental level, we can design channel codes that correct random bit errors distributed uniformly over the bit stream very effectively.

It is much more difficult to design and implement channel codes that correct burst errors: long, consecutive sequences of bit errors due to channel fading events. With interleaving we essentially distribute the burst errors randomly over the transmitted data stream, enabling the channel code to correct them. The diagram below shows a simple example of a block interleaving scheme on a communication channel. The general aim of interleaving is to distribute neighbouring bits of the encoded sequence across the transmitted bit stream at times separated by more than the channel coherence time, so they experience essentially independent channels. The trade-off, though, is that the greater the time over which we interleave, the greater the delay at the receiver (since it must wait until all of the locally interleaved bits have been received in order to re-order and then decode). This is an issue when we consider the communication of real-time data, such as voice transmission in a phone conversation.
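A minimal sketch of a block interleaver of the kind shown in the diagram: bits are written into a matrix row by row and read out column by column, so a burst of consecutive channel errors is spread across the de-interleaved stream. The matrix dimensions and burst position are illustrative assumptions.

```python
import numpy as np

def interleave(bits, rows, cols):
    """Write bits row-by-row into a rows x cols block, read out column-by-column."""
    block = np.asarray(bits).reshape(rows, cols)
    return block.T.reshape(-1)

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-by-column, read row-by-row."""
    block = np.asarray(bits).reshape(cols, rows)
    return block.T.reshape(-1)

rng = np.random.default_rng(5)
data = rng.integers(0, 2, size=6 * 8)          # one 6 x 8 block, illustrative size
tx = interleave(data, rows=6, cols=8)

# A burst of 5 consecutive errors on the channel...
rx = tx.copy()
rx[20:25] ^= 1

recovered = deinterleave(rx, rows=6, cols=8)
error_positions = np.flatnonzero(recovered != data)
print("error positions after de-interleaving:", error_positions)
```

The printed positions are scattered rather than consecutive, which is exactly what allows a random-error-correcting code to cope with the original burst.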

Channel Coding

The role of channel coding in communication is to insert redundant information into the transmitted bit stream to facilitate the detection and correction of errors that naturally occur on transmission over the harsh radio channel. As illustrated in the diagram below, coding lowers the bit error rate for a given signal-to-noise ratio, significantly improving the link performance. Channel coding, and information theory in general, is an enormous field and we can only attempt a very brief summary here. We will first introduce the ideas of block codes, and in particular cyclic codes. These are primarily used for error detection in cellular and satellite systems. Then we will treat convolutional codes and turbo codes, which are the popular choice to provide error correction efficiently in cellular networks.

The basic idea of a block code is to map k input symbols into n output symbols, with n > k, by inserting n - k parity bits to allow us to detect and correct errors that occur on the channel. We will restrict our attention to binary block codes, where both the input and output alphabets are {0,1}, the binary field. It is possible to conceive of block codes over non-binary alphabets, and in fact the sizes of the input and output alphabets need not even be the same. Some of the most important block codes are constructed over non-binary alphabets, particularly the Reed-Solomon family of burst-error-correcting codes. Nevertheless, the analysis and results that we obtain for binary block codes are easy to generalise to non-binary alphabets, and we will free ourselves of this added complexity in our presentation.

For our binary block code, our k input bits correspond to 2^k possible input binary words, and these map to 2^k distinct, unique codewords. We call this an (n, k) code, and denote the ratio R = k/n the code rate, representing the fraction by which our bit stream is expanded on application of the code, due to the addition of the n - k parity bits.

We can imagine these codewords as 2^k binary vectors embedded in a binary space of 2^n points. Communication of the codewords over a noisy channel will naturally result in some of the bits being received incorrectly. The error resilience of the block code, and its ability to correct errors, is fundamentally related to the distance between codewords in the code space, where the code space is the binary space of dimension n, consisting of 2^n discrete points.

(Diagram: a block encoder mapping messages to codewords $\mathbf{x}_m = [x_{m,1}, \dots, x_{m,N}]$, a channel described by transition probabilities $f_{Y|X}$, and a block decoder that selects $\arg\max_m \prod_{n=1}^{N} f_{Y|X}(y_n, x_{m,n})$ from the received word $\mathbf{y} = [y_1, \dots, y_N]$.)

To measure the distance between code points in the code space we introduce the Hamming distance. The Hamming distance between two codewords merely represents the number of bit positions in which the two codewords differ. It is trivial to determine the Hamming distance between two codewords of a binary code using the Hamming weight, $w(\mathbf{c})$, of a binary vector $\mathbf{c}$. The Hamming weight of a codeword $\mathbf{c} = (c_1, c_2, \dots, c_n)$, with $c_i \in \{0,1\}$, is the number of 1s in the binary codeword,

$$w(\mathbf{c}) = \sum_{i=1}^{n} c_i$$

The Hamming distance between two codewords is then found by taking the Hamming weight of the sum or difference of the codewords (note that, over the binary field, addition and subtraction are equivalent: $0+0 = 0-0 = 0$; $1+0 = 1-0 = 1$; $1+1 = 1-1 = 0$). The addition or subtraction of the two codewords will yield a 0 where the codewords agree at that bit position, and a 1 where they disagree:

$$d_{Ham}(\mathbf{c}_i, \mathbf{c}_j) = w(\mathbf{c}_i + \mathbf{c}_j) = w(\mathbf{c}_i - \mathbf{c}_j)$$

The Hamming distance of a code is defined to be the minimum Hamming distance between any two codewords of the code,

$$d_{min} = \min_{\mathbf{c}_i, \mathbf{c}_j \in \text{Code}} d_{Ham}(\mathbf{c}_i, \mathbf{c}_j)$$

The Hamming distance of a code ultimately determines its ability to correct errors. We can conceive of a simple conceptual model for decoding and error correction of the block code. We can surround each code vector by a sphere of radius t, such that these spheres are non-overlapping; these are called Hamming spheres. Our received binary word $\mathbf{r}$ must be some point in the code space, and we decode by selecting the codeword $\mathbf{c}$ in whose Hamming sphere $\mathbf{r}$ lies. The Hamming sphere, containing all points within a Hamming distance of t from the codeword, corresponds to all binary vectors that differ from the codeword in up to t bit positions. Thus, our code is able to correct t bit errors in the codewords.
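The Hamming weight and distance definitions translate directly into code. The sketch below computes the minimum distance of a small codebook; the specific (5,2) codeword table used in the lecture is not reproduced here, so an illustrative systematic linear (5,2) code with the same stated properties (minimum distance 3) is assumed instead.

```python
from itertools import combinations

def hamming_weight(c):
    return sum(c)

def hamming_distance(c1, c2):
    # Over the binary field, addition is XOR, so d(c1, c2) = w(c1 XOR c2).
    return sum(a ^ b for a, b in zip(c1, c2))

# Illustrative systematic linear (5,2) code with d_min = 3
# (assumed codewords, not necessarily the lecture's exact table).
codewords = [
    (0, 0, 0, 0, 0),
    (0, 1, 1, 0, 1),
    (1, 0, 1, 1, 0),
    (1, 1, 0, 1, 1),
]

d_min = min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
print("minimum Hamming distance:", d_min)          # 3, so one bit error is correctable
print("correctable errors t:", (d_min - 1) // 2)
```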

The minimum distance between codewords in the code space, the Hamming distance of the code $d_{min}$, naturally determines the number of correctable errors, t. To make the Hamming spheres disjoint we require

$$d_{min} \ge 2t + 1$$

Moreover, we can distinguish between the number of errors a code can correct, $t_c$, and the number of errors a code can detect, $t_d$, though not necessarily correct; merely that the receiver can identify that the received binary word with $t_d$ errors is not itself a codeword. Clearly we must have $t_d \ge t_c$. The number of detectable errors must satisfy

$$d_{min} \ge t_d + 1$$

We can then conceive of a code that can correct $t_c$ errors and detect $t_d$ errors if and only if it satisfies

$$d_{min} \ge 2t_c + 1 \quad\text{and}\quad d_{min} \ge t_c + t_d + 1$$

The Hamming distance of a code and the number of errors it can correct can be used to put bounds on the size of the code and the number of parity bits required. The number of points within the Hamming sphere of a codeword is

$$\sum_{i=0}^{t}\binom{n}{i} = 1 + \binom{n}{1} + \binom{n}{2} + \dots + \binom{n}{t}$$

as there is the codeword itself, then there are $\binom{n}{1}$ (n choose 1) vectors that differ from the codeword in one bit position, $\binom{n}{2}$ vectors that differ from the codeword in exactly two bit positions, and so on. There are $2^k$ codewords in the code, and as there are $2^n$ points in the codespace, we must have

$$2^k \sum_{i=0}^{t}\binom{n}{i} \le 2^n$$

The above argument gives us the Hamming bound on the number of correctable errors for a block code,

$$\sum_{i=0}^{t}\binom{n}{i} \le 2^{n-k}$$

A code that achieves equality in the Hamming bound is known as a perfect code. A perfect code has the property that every point in the codespace lies within the Hamming sphere of some codeword. In a sense there are no wasted points in the codespace, and the decoder can make a decision about every single possible received binary word.

In a code that is not perfect, there are some points that are equally distant from two or more codewords, and as such the receiver cannot decide on the codeword corresponding to such a received binary vector. There are three types of perfect codes: binary repetition codes, Hamming codes, and the Golay codes. We will discuss each of these later. Perfect codes are not of great practical interest, however, since as we said before the main interest in coding is to be able to build large block codes while still maintaining finite complexity in the encoding and decoding operations. Perfect codes are thus not considered the best codes in practice.

(Diagram: two codewords $c_1$ and $c_2$ at minimum distance $d_{min} = 5$, each surrounded by a non-overlapping Hamming sphere of radius 2 containing $1 + n + \binom{n}{2}$ points.)

A simple example is a (5,2) block code mapping the four two-bit input words (0,0), (0,1), (1,0), (1,1) to five-bit codewords. The Hamming distance of this code is seen to be $d_{min} = 3$, which means this code of rate 2/5 can correct a single bit error. The code is not perfect, since $1 + 5 = 6 < 2^{5-2} = 8$. For example, if the receiver obtained a vector that differs from the first codeword in a single bit position, and from all other codewords in more than two bit positions, it would decode this as (0,0). An example of an ambiguous received vector is one that differs from the first codeword in two bit positions, but also from the fourth codeword in two bit positions. The receiver has no way of deciding between the two, and of correcting the double error that must have occurred.

From a practical standpoint we usually restrict our discussion to systematic linear block codes. A systematic (n, k) code is one for which the first k bits of the n-bit codeword correspond exactly to the k input bits. The (5,2) example presented earlier is systematic.

Notice that the first two bits of each codeword are the same as the input bits. This systematic property is an important structural element in an efficient decoder. The first operation of the decoder is to establish whether or not an error occurred in the codeword. If it did not, then the decoder simply decodes by taking the first k bits of the received n bits. If an error is detected, then an algorithm can be invoked to identify and correct this error. This systematic property, particularly in situations with low error rates, can lead to considerable savings in computation in the decoder.

A linear code has the property that the sum of any two codewords is itself another codeword. If we denote the set of codewords as C, then

if $\mathbf{c}_i, \mathbf{c}_j \in C$, then $\mathbf{c}_i + \mathbf{c}_j \in C$

You might note that the above (5,2) code is also linear. The first useful characteristic of a linear code is the existence of a generator matrix for the code, G. The generator matrix provides an efficient technique for performing encoding: simply multiply the input binary word $\mathbf{b}$ by the generator matrix to obtain the codeword $\mathbf{c}$,

$$\mathbf{c} = \mathbf{b}G$$

This is possible since a matrix is a linear mapping. If we take a basis for the input binary space (the obvious one is $\{(1,0,0,\dots), (0,1,0,\dots), (0,0,1,\dots), \dots\}$), then an arbitrary input can be expressed as a linear combination of these basis vectors. Thus, using the generator matrix, the resultant codeword must be a linear combination of the codewords corresponding to the basis vectors. A generator matrix can easily be constructed by considering what happens to the basis vectors of the code. For the (5,2) example, the rows of the generator matrix G are simply the codewords corresponding to the input binary words (1,0) and (0,1). The generator matrix of a systematic code will always have the form

$$G = \left( I_k \;\middle|\; P_{k \times (n-k)} \right)$$

where P is known as the parity generating matrix, since it tells us how to determine the parity bits for a given codeword. For a linear block code, the Hamming distance of the code is the smallest Hamming weight of a non-zero codeword of the code (since all sums and differences of codewords are themselves other codewords). Moreover, the Hamming distance of the code is the smallest number of 1s that can be produced by any linear combination of rows of the generator matrix.

The most important feature of a linear code is the ability to perform syndrome decoding. To do this we define the parity check matrix, H, with

$$H^T = \begin{pmatrix} P \\ I_{n-k} \end{pmatrix}$$

The parity check matrix has the property that acting on any codeword it must give $\mathbf{0}$:

$$\mathbf{y} = \mathbf{c}H^T = \mathbf{b}GH^T = \mathbf{b}\left( I_k \;\middle|\; P \right)\begin{pmatrix} P \\ I_{n-k} \end{pmatrix} = \mathbf{b}P + \mathbf{b}P = \mathbf{0}$$

since $1 + 1 = 0$ and $0 + 0 = 0$, so any binary vector added to itself is equal to the zero vector. After communication over a noisy channel, the received vector can be written as the transmitted codeword plus an error vector $\mathbf{e}$. The error vector has a 1 at any position that is in error, and a 0 at all positions that are not in error:

$$\mathbf{r} = \mathbf{c} + \mathbf{e}$$

The decoder acts on the received codeword with the parity check matrix to determine the syndrome,

$$\mathbf{y} = \mathbf{r}H^T = \mathbf{c}H^T + \mathbf{e}H^T = \mathbf{e}H^T$$

The syndrome depends only on the error that has occurred. If there was no error then $\mathbf{y} = \mathbf{0}$; we assume that the codeword was received correctly and decode our systematic code by taking the first k bits. Otherwise the syndrome identifies the error that has occurred, independent of the codeword sent. A look-up table is commonly employed to correct the error from the syndrome. For each error that the code can correct, the associated syndrome is found by multiplying the error by the parity check matrix. The decoder then stores in memory a table of syndromes and the associated errors; if it calculates a particular syndrome it looks this up in the table, grabs the corresponding error vector, and corrects the error by adding this vector to the received vector (thereby switching the erroneous bits back to their original values).

For the (5,2) code we have been considering, the parity check matrix is $H = (P^T \mid I_3)$. The code can correct single bit errors, so there are 5 error vectors that the code can correct: (1,0,0,0,0), (0,1,0,0,0), (0,0,1,0,0), (0,0,0,1,0), and (0,0,0,0,1). The syndromes corresponding to each of these errors, $\mathbf{y} = \mathbf{e}H^T$, form the look-up table of syndromes against error patterns.

If the receiver obtains a valid codeword, the decoder calculates the syndrome as (0,0,0), meaning the codeword is correct and there was no error. If, for example, the second bit is in error, the calculated syndrome is the one associated with the error vector (0,1,0,0,0); the look-up table returns this error vector, allowing the received word to be corrected by flipping its second bit.

This syndrome decoding with a look-up table can be practically implemented for reasonably large code sizes. The only real weakness is that matrix computations, even in binary, can get quite tedious for very large matrices. Next we will see a way of circumventing this problem.

The final structural aspect we impose on block codes to ease implementation is to require them to be cyclic. Cyclic codes are ones for which any cyclic permutation of a codeword is itself another codeword. A simple example of a cyclic code is the (6,2) repetition code:

Input binary word    Codeword
(0,0)                (0,0,0,0,0,0)
(0,1)                (0,1,0,1,0,1)
(1,0)                (1,0,1,0,1,0)
(1,1)                (1,1,1,1,1,1)

Notice that any cyclic shift of a codeword always gives another codeword. For instance, if we take (0,1,0,1,0,1) and shift all bits one place to the right, we get the third codeword.

The important thing about cyclic codes is that they facilitate a very efficient polynomial representation. A binary vector of length n can be mapped to a polynomial of degree n - 1 whose coefficients are over the binary field. The coordinates of the binary vector become the coefficients of the polynomial: for instance, a length-6 binary vector $(c_0, c_1, c_2, c_3, c_4, c_5)$ corresponds to the binary polynomial $c_0 + c_1 p + c_2 p^2 + c_3 p^3 + c_4 p^4 + c_5 p^5$. A cyclic shift of bits can easily be implemented in the polynomial representation by multiplication by an appropriate power of p. The attraction of cyclic codes is that this binary polynomial representation naturally maps to implementation at a microprocessor level, in terms of bit-shift and add operations. This will be made apparent in the example to follow.

Firstly, let's discuss the encoding and decoding procedures. Cyclic codes are characterised by a generator polynomial, $g(p)$, that defines the rule for how codewords are found from input binary words. To generate codewords in a systematic way, the input bits are represented by a binary polynomial $b(p)$. This polynomial is multiplied by $p^{n-k}$, in effect a shift of the bits to the left by n - k places. We then divide $p^{n-k}b(p)$ by $g(p)$ to find the remainder polynomial, $\rho(p)$. The coefficients of the remainder polynomial are the parity check bits for the codeword. The resultant polynomial for the codeword can then be written as

$$c(p) = p^{n-k}\,b(p) + \rho(p)$$

Note that this is a multiple of the generator polynomial, since by the law of division,

$$p^{n-k}\,b(p) = q(p)\,g(p) + \rho(p)$$

and all coefficients are binary. Thus, if the codeword polynomial is divided by the generator polynomial, the remainder is zero. In general, after a noisy channel, the received polynomial will differ from the codeword polynomial by some error polynomial,

$$r(p) = c(p) + e(p)$$

The remainder upon dividing the received polynomial by the code generator polynomial must depend only on the error, independent of the codeword sent.

Syndrome ideas and look-up tables can then be used to find the error from this remainder, which is known as the syndrome.

As an example of the encoding process, consider the (7,4) Hamming code. This is a single-error-correcting code, which can be described by the generator polynomial $g(p) = p^3 + p^2 + 1$. To encode the four-bit message whose polynomial is $b(p) = p^3 + p$, we first form

$$p^{n-k}\,b(p) = p^3(p^3 + p) = p^6 + p^4$$

The parity check bits are found by dividing this by the generator polynomial:

$p^3 \cdot g(p) = p^6 + p^5 + p^3$, leaving $p^5 + p^4 + p^3$
$p^2 \cdot g(p) = p^5 + p^4 + p^2$, leaving $p^3 + p^2$
$1 \cdot g(p) = p^3 + p^2 + 1$, leaving $1$

The remainder polynomial is $\rho(p) = 1$. The transmitted codeword is then $c(p) = p^6 + p^4 + 1$, the remainder giving the parity check bits. The above example of binary polynomial long division should make it clear how easy this procedure is to implement in a microprocessor using bit-shift and add operations.

It is now time to take a look at some practical block coding schemes.

Hamming Codes: Hamming codes are a class of single-error-correcting perfect binary codes. A perfect single-error-correcting binary code satisfies

$$n + 1 = 2^{n-k}$$

Some (n, k) values that satisfy this relationship are (3,1), (7,4), (15,11), and (31,26), and Hamming codes exist for all of these (n, k) values. Constructing generator polynomials for Hamming codes relies on the observation that $g(p)$ must be a factor of $p^n + 1$. Fundamentally this ensures that we can generate codewords, multiples of $g(p)$, that give syndromes of zero. For example, the (7,4) Hamming code can be constructed from the factors of $p^7 + 1$. Factorising, we find

$$p^7 + 1 = (1 + p)(1 + p + p^3)(1 + p^2 + p^3)$$

This gives us two possible choices for the generator polynomial: either the second or the third factor. Encoding and decoding Hamming codes typically follows the syndrome technique described in earlier sections.
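The bit-shift-and-add nature of this encoding is easy to see in code. Below is a minimal sketch of systematic cyclic encoding for the (7,4) code above, representing polynomials as Python integers (bit i is the coefficient of $p^i$); the message value mirrors the worked example's $b(p) = p^3 + p$.

```python
def poly_mod(dividend, divisor):
    """Remainder of binary polynomial division (polynomials packed into ints)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift        # subtract (XOR) a shifted copy of g(p)
    return dividend

def cyclic_encode(msg, g, n, k):
    """Systematic cyclic encoding: c(p) = p^(n-k) b(p) + remainder."""
    shifted = msg << (n - k)                # multiply b(p) by p^(n-k)
    return shifted | poly_mod(shifted, g)   # append parity bits (the remainder)

g = 0b1101                                  # g(p) = p^3 + p^2 + 1
b = 0b1010                                  # b(p) = p^3 + p, as in the worked example
c = cyclic_encode(b, g, n=7, k=4)
print(f"codeword bits (p^6 ... p^0): {c:07b}")  # expect 1010001, i.e. p^6 + p^4 + 1
print("divisible by g(p):", poly_mod(c, g) == 0)
```

The XOR-and-shift loop in poly_mod is exactly the binary long division worked through above.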

Cyclic Redundancy Check Codes: These codes, known as CRC codes, are widely used for error detection. They have applications in communication systems, typically at the physical layer in conjunction with ARQ schemes. They are a common feature of many international standards in many diverse systems, from error detection in serial communication to error detection in memory reading and writing at a hardware level. Some common CRC standards are:

Code            Generator polynomial                       n - k
CRC-12          p^12 + p^11 + p^3 + p^2 + p + 1            12
CRC-16 (USA)    p^16 + p^15 + p^2 + 1                      16
CRC-16 (ITU)    p^16 + p^12 + p^5 + 1                      16

The first example, the CRC-12 code, is well suited to systems built on 6-bit words, while the latter pair are suited to 8-bit word systems (or byte-based systems), as the CRC bits will represent two additional bytes at the end of the message. The implementation of CRC codes is the same as we have addressed in earlier sections, though it should be emphasised that these codes are only suitable for error detection, and cannot perform forward error correction (they require Automatic Repeat Request (ARQ)). This is commonly their application in cellular networks: their role is to detect failures of the convolutional or turbo code used for error correction.

Bose-Chaudhuri-Hocquenghem (BCH) codes: BCH codes are a family of binary cyclic codes with a wide variety of parameter choices, and as such are quite popular. The BCH codes are characterised by a positive integer m > 2, with the ability to correct t errors with $t < (2^m - 1)/2$. The associated parameters are then:

Block length: $n = 2^m - 1$
Number of message bits: $k \ge n - mt$
Hamming distance: $d_{min} \ge 2t + 1$

BCH codes can correct up to t random errors in a codeword. In fact, Hamming codes are a special case of BCH codes with t = 1. We will not go into how to construct generator polynomials for BCH codes, as it is a little too mathematically involved for this course. One should point out that most codes of reasonable block size have already been discovered, and in practice the engineer will select an existing code with the desired parameters and find its generator polynomial in a textbook or research paper. For example, choosing m = 4 we can design a code that corrects t = 2 random errors per codeword. The above relationships imply that we need a (15,7) block code with $d_{min} = 5$. A generator polynomial for this code, referring to the appropriate reference, is

$$g(p) = p^8 + p^7 + p^6 + p^4 + 1$$

The main structural feature of BCH codes that makes them so popular is that we need not rely on syndrome decoding and the use of look-up tables. For large block sizes and a large error-correcting capability, the associated look-up tables can get very large and be quite impractical to implement.

BCH codes lend themselves to two alternative algorithms for error location, and once the error is located, correction for a binary code is easy. The first algorithm is known as the Berlekamp-Massey algorithm, while the second is built on the famous Euclid's algorithm. The Berlekamp-Massey algorithm is superior, but due to patent issues Euclid's algorithm is more widely used.

Reed-Solomon Codes: RS codes are an important class of non-binary BCH codes. RS codes differ from binary codes in that they map a sequence of k symbols to a set of n encoded symbols. The symbols come from a set of size 2^m, where m corresponds to the number of bits per symbol. Alternatively, we could consider an RS code as mapping km input bits to nm encoded bits. A popular choice of m is 8, in which case we can consider an RS code as mapping k input bytes to n encoded bytes by appending n - k parity bytes. The RS code can correct t symbol errors, no matter where they occur in the codeword. The important parameters of an RS code are:

Block length: $n = 2^m - 1$ symbols
Message size: k symbols
Parity check size: $n - k = 2t$ symbols
Hamming distance: $d_{min} = 2t + 1$ symbols

For example, for m = 8 the block length is n = 255 symbols. To correct 16 symbol errors we need $d_{min} = 33$, and we find k = 223. We would require a (255,223) RS code to correct 16 symbol errors. Notice how close the code rate is to 1, even for this large error-correcting capacity.

RS codes offer particularly good performance in burst error correction. In the above example the (255,223) RS code involves the transmission of 255 x 8 = 2040 bits per codeword. The RS decoder can correct 16 symbol errors, no matter where they occur in the codeword. Suppose these 16 symbol errors are consecutive, corresponding to 16 x 8 = 128 consecutive bits being in error. This means the RS code can correct bursts of up to 128 consecutive bit errors. This has made RS codes attractive for application in space communications, and the above RS code is at the heart of the NASA/ESA deep space coding standard (CCSDS).

The weakness of the (255,223) RS code is that, while it can correct 128 consecutive bit errors, the code is broken by only 17 random bit errors affecting 17 different symbols distributed throughout the codeword. The common way around this problem is to protect the RS code by concatenating it with an inner code to correct these random bit errors. The most common choice for the inner code is a convolutional code, and this is the case in the CCSDS standard. In fact, Reed-Solomon codes concatenated with convolutional codes are still the most successful channel coding option, for a given computational implementation complexity, even after the discovery of LDPC and turbo codes. The inner convolutional code protects against random bit errors, while the outer RS code protects against burst errors and failures of the inner code.

RS codes can be represented and implemented as polynomials over a non-binary field. For the case above, the coefficients of the polynomial are taken from the Galois field GF(256). Efficient encoder structures have been designed for these codes for a variety of microprocessor architectures. The decoding technique for RS codes is also very efficient, and follows a similar procedure to that of BCH codes, with the important distinction being once again that these operate over non-binary fields.

Convolutional Codes

An (n, k, m) convolutional encoder maps k input bits to n output bits, using the m previous inputs to the encoder. The difference between this and traditional block codes is the use of the memory bits, whose extent is encapsulated by m. The general form of a convolutional encoder is shown in the diagram below.

(Diagram: general (n, k, m) convolutional encoder. The current input vector $\mathbf{b}_l$ and the m previous input vectors $\mathbf{b}_{l-1}, \dots, \mathbf{b}_{l-m}$ are held in memory, and each output $c_l^{(j)}$ is formed from them using the weight vectors $\mathbf{g}_j[p]$.)

We represent the input bits by a vector $\mathbf{b}_l = (b_l^{(1)}, b_l^{(2)}, \dots, b_l^{(k)})$; then the output $\mathbf{c}_l = (c_l^{(1)}, c_l^{(2)}, \dots, c_l^{(n)})$ is determined by the current and the last m input vectors, $\mathbf{b}_l, \mathbf{b}_{l-1}, \dots, \mathbf{b}_{l-m}$. The encoder thus needs to remember the m previous input vectors, which at a basic level could be implemented using km memory bits. The code rate of the encoder is $R = k/n$. The code is linear, as the mapping is performed modulo two, using a set of weights $\mathbf{g}_j[p] = (g_j^{(1)}[p], g_j^{(2)}[p], \dots, g_j^{(k)}[p])$, with $g_j^{(i)}[p] \in \{0,1\}$, to determine the contribution of the pth previous input vector, $\mathbf{b}_{l-p}$, to the jth output, $c_l^{(j)}$:

$$c_l^{(j)} = \sum_{p=0}^{m}\sum_{i=1}^{k} g_j^{(i)}[p]\; b_{l-p}^{(i)} \pmod 2$$

It is evident that the sum of two input signals produces an output that is the sum of the two individual outputs, and that an all-zero input produces an all-zero output. The above expression also makes it clear where the term convolutional encoder comes from, as the implementation is very similar to discrete-time convolution, $(g * h)_l = \sum_p g_p\, h_{l-p}$.
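As a concrete illustration of the encoding rule above, here is a minimal rate-1/2 convolutional encoder sketch (k = 1, n = 2, m = 2). The generator taps used, (1,1,1) and (1,0,1), are a commonly used pair for this memory length, given here as an illustrative assumption rather than a code specified in the lecture.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder with memory m = len(g1) - 1.

    Each input bit produces two output bits, each a mod-2 sum of the
    current and previous input bits selected by the generator taps.
    """
    m = len(g1) - 1
    state = [0] * m                      # the m previous input bits
    out = []
    for b in bits:
        window = [b] + state             # b_l, b_{l-1}, ..., b_{l-m}
        out.append(sum(g * x for g, x in zip(g1, window)) % 2)
        out.append(sum(g * x for g, x in zip(g2, window)) % 2)
        state = [b] + state[:-1]         # shift the memory register
    return out

message = [1, 0, 1, 1, 0, 0]             # illustrative input, padded to flush the memory
print(conv_encode(message))
```

The code rate of 1/2 is visible directly: two output bits are emitted for every input bit, and the linearity follows from the mod-2 sums.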


EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Lecture 5 Decoding Binary BCH Codes

Lecture 5 Decoding Binary BCH Codes Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture

More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Error Probability for M Signals

Error Probability for M Signals Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or

Entropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Exercises. 18 Algorithms

Exercises. 18 Algorithms 18 Algorthms Exercses 0.1. In each of the followng stuatons, ndcate whether f = O(g), or f = Ω(g), or both (n whch case f = Θ(g)). f(n) g(n) (a) n 100 n 200 (b) n 1/2 n 2/3 (c) 100n + log n n + (log n)

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Department of Electrical & Electronic Engineeing Imperial College London. E4.20 Digital IC Design. Median Filter Project Specification

Department of Electrical & Electronic Engineeing Imperial College London. E4.20 Digital IC Design. Median Filter Project Specification Desgn Project Specfcaton Medan Flter Department of Electrcal & Electronc Engneeng Imperal College London E4.20 Dgtal IC Desgn Medan Flter Project Specfcaton A medan flter s used to remove nose from a sampled

More information

Chapter 6. BCH Codes

Chapter 6. BCH Codes Wreless Informaton Transmsson System Lab Chapter 6 BCH Codes Insttute of Communcatons Engneerng Natonal Sun Yat-sen Unversty Outlne Bnary Prmtve BCH Codes Decodng of the BCH Codes Implementaton of Galos

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Comparison of Regression Lines

Comparison of Regression Lines STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence

More information

Lecture 4. Instructor: Haipeng Luo

Lecture 4. Instructor: Haipeng Luo Lecture 4 Instructor: Hapeng Luo In the followng lectures, we focus on the expert problem and study more adaptve algorthms. Although Hedge s proven to be worst-case optmal, one may wonder how well t would

More information

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,

FREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced, FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then

More information

CSE4210 Architecture and Hardware for DSP

CSE4210 Architecture and Hardware for DSP 4210 Archtecture and Hardware for DSP Lecture 1 Introducton & Number systems Admnstratve Stuff 4210 Archtecture and Hardware for DSP Text: VLSI Dgtal Sgnal Processng Systems: Desgn and Implementaton. K.

More information

FAST CONVERGENCE ADAPTIVE MMSE RECEIVER FOR ASYNCHRONOUS DS-CDMA SYSTEMS

FAST CONVERGENCE ADAPTIVE MMSE RECEIVER FOR ASYNCHRONOUS DS-CDMA SYSTEMS Électronque et transmsson de l nformaton FAST CONVERGENCE ADAPTIVE MMSE RECEIVER FOR ASYNCHRONOUS DS-CDMA SYSTEMS CĂLIN VLĂDEANU, CONSTANTIN PALEOLOGU 1 Key words: DS-CDMA, MMSE adaptve recever, Least

More information

Decoding of the Triple-Error-Correcting Binary Quadratic Residue Codes

Decoding of the Triple-Error-Correcting Binary Quadratic Residue Codes Automatc Control and Informaton Scences, 04, Vol., No., 7- Avalable onlne at http://pubs.scepub.com/acs/// Scence and Educaton Publshng DOI:0.69/acs--- Decodng of the rple-error-correctng Bnary Quadratc

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

Lecture Space-Bounded Derandomization

Lecture Space-Bounded Derandomization Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval

More information

Tutorial 2. COMP4134 Biometrics Authentication. February 9, Jun Xu, Teaching Asistant

Tutorial 2. COMP4134 Biometrics Authentication. February 9, Jun Xu, Teaching Asistant Tutoral 2 COMP434 ometrcs uthentcaton Jun Xu, Teachng sstant csjunxu@comp.polyu.edu.hk February 9, 207 Table of Contents Problems Problem : nswer the questons Problem 2: Power law functon Problem 3: Convoluton

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

1 Generating functions, continued

1 Generating functions, continued Generatng functons, contnued. Generatng functons and parttons We can make use of generatng functons to answer some questons a bt more restrctve than we ve done so far: Queston : Fnd a generatng functon

More information

THE SUMMATION NOTATION Ʃ

THE SUMMATION NOTATION Ʃ Sngle Subscrpt otaton THE SUMMATIO OTATIO Ʃ Most of the calculatons we perform n statstcs are repettve operatons on lsts of numbers. For example, we compute the sum of a set of numbers, or the sum of the

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.

Lecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem. Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from

More information

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003 Tornado and Luby Transform Codes Ashsh Khst 6.454 Presentaton October 22, 2003 Background: Erasure Channel Elas[956] studed the Erasure Channel β x x β β x 2 m x 2 k? Capacty of Noseless Erasure Channel

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Boostrapaggregating (Bagging)

Boostrapaggregating (Bagging) Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod

More information

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1]

Outline. Communication. Bellman Ford Algorithm. Bellman Ford Example. Bellman Ford Shortest Path [1] DYNAMIC SHORTEST PATH SEARCH AND SYNCHRONIZED TASK SWITCHING Jay Wagenpfel, Adran Trachte 2 Outlne Shortest Communcaton Path Searchng Bellmann Ford algorthm Algorthm for dynamc case Modfcatons to our algorthm

More information

Société de Calcul Mathématique SA

Société de Calcul Mathématique SA Socété de Calcul Mathématque SA Outls d'ade à la décson Tools for decson help Probablstc Studes: Normalzng the Hstograms Bernard Beauzamy December, 202 I. General constructon of the hstogram Any probablstc

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

Iterative Multiuser Receiver Utilizing Soft Decoding Information

Iterative Multiuser Receiver Utilizing Soft Decoding Information teratve Multuser Recever Utlzng Soft Decodng nformaton Kmmo Kettunen and Tmo Laaso Helsn Unversty of Technology Laboratory of Telecommuncatons Technology emal: Kmmo.Kettunen@hut.f, Tmo.Laaso@hut.f Abstract

More information

Chapter 6. Supplemental Text Material

Chapter 6. Supplemental Text Material Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Section 3.6 Complex Zeros

Section 3.6 Complex Zeros 04 Chapter Secton 6 Comple Zeros When fndng the zeros of polynomals, at some pont you're faced wth the problem Whle there are clearly no real numbers that are solutons to ths equaton, leavng thngs there

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Basically, if you have a dummy dependent variable you will be estimating a probability.

Basically, if you have a dummy dependent variable you will be estimating a probability. ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy

More information

7. Products and matrix elements

7. Products and matrix elements 7. Products and matrx elements 1 7. Products and matrx elements Based on the propertes of group representatons, a number of useful results can be derved. Consder a vector space V wth an nner product ψ

More information

Limited Dependent Variables

Limited Dependent Variables Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Digital Signal Processing

Digital Signal Processing Dgtal Sgnal Processng Dscrete-tme System Analyss Manar Mohasen Offce: F8 Emal: manar.subh@ut.ac.r School of IT Engneerng Revew of Precedent Class Contnuous Sgnal The value of the sgnal s avalable over

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

TLCOM 612 Advanced Telecommunications Engineering II

TLCOM 612 Advanced Telecommunications Engineering II TLCOM 62 Advanced Telecommuncatons Engneerng II Wnter 2 Outlne Presentatons The moble rado sgnal envronment Combned fadng effects and nose Delay spread and Coherence bandwdth Doppler Shft Fast vs. Slow

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION 1 2 MULTIPLIERLESS FILTER DESIGN Realzaton of flters wthout full-fledged multplers Some sldes based on support materal by W. Wolf for hs book Modern VLSI Desgn, 3 rd edton. Partly based on followng papers:

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Graph Reconstruction by Permutations

Graph Reconstruction by Permutations Graph Reconstructon by Permutatons Perre Ille and Wllam Kocay* Insttut de Mathémathques de Lumny CNRS UMR 6206 163 avenue de Lumny, Case 907 13288 Marselle Cedex 9, France e-mal: lle@ml.unv-mrs.fr Computer

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

a b a In case b 0, a being divisible by b is the same as to say that

a b a In case b 0, a being divisible by b is the same as to say that Secton 6.2 Dvsblty among the ntegers An nteger a ε s dvsble by b ε f there s an nteger c ε such that a = bc. Note that s dvsble by any nteger b, snce = b. On the other hand, a s dvsble by only f a = :

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations

1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations Physcs 171/271 -Davd Klenfeld - Fall 2005 (revsed Wnter 2011) 1 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys

More information

DC-Free Turbo Coding Scheme Using MAP/SOVA Algorithms

DC-Free Turbo Coding Scheme Using MAP/SOVA Algorithms Proceedngs of the 5th WSEAS Internatonal Conference on Telecommuncatons and Informatcs, Istanbul, Turkey, May 27-29, 26 (pp192-197 DC-Free Turbo Codng Scheme Usng MAP/SOVA Algorthms Prof. Dr. M. Amr Mokhtar

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology

Notes prepared by Prof Mrs) M.J. Gholba Class M.Sc Part(I) Information Technology Inverse transformatons Generaton of random observatons from gven dstrbutons Assume that random numbers,,, are readly avalable, where each tself s a random varable whch s unformly dstrbuted over the range(,).

More information

Feb 14: Spatial analysis of data fields

Feb 14: Spatial analysis of data fields Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s

More information