Chapter 7: Processes

The model of a communication system that we have been developing is shown in Figure 7.1. This model is also useful for some computation systems. The source is assumed to emit a stream of symbols. The channel may be a physical channel between different points in space, or it may be a memory which stores information for retrieval at a later time, or it may be a computation in which the information is processed in some way.

[Figure 7.1: Communication system. Input (symbols) → Source Encoder → Compressor → Channel Encoder → Channel → Channel Decoder → Expander → Source Decoder → Output (symbols)]

Figure 7.1 shows the module inputs and outputs and how they are connected. A diagram like this is a useful overview of the operation of a system, but other representations are also useful. In this chapter we develop two abstract models that are general enough to represent each of these boxes in Figure 7.1, but show the flow of information quantitatively.

Because each of these boxes in Figure 7.1 processes information in some way, it is called a processor and what it does is called a process. The processes we consider here are

Discrete: The inputs are members of a set of mutually exclusive possibilities, only one of which occurs at a time, and the output is one of another discrete set of mutually exclusive events.

Finite: The set of possible inputs is finite in number, as is the set of possible outputs.

Memoryless: The process acts on the input at some time and produces an output based on that input, ignoring any prior inputs.

Author: Paul Penfield, Jr. This document: Version 1.6, March 1, 2010. Copyright © 2010 Massachusetts Institute of Technology. (6.050J/2.110J course notes)
Nondeterministic: The process may produce a different output when presented with the same input a second time (the model is also valid for deterministic processes). Because the process is nondeterministic the output may contain random noise.

Lossy: It may not be possible to "see" the input from the output, i.e., to determine the input by observing the output. Such processes are called lossy because knowledge about the input is lost when the output is created (the model is also valid for lossless processes).

7.1 Types of Process Diagrams

Different diagrams of processes are useful for different purposes. The four we use here are all recursive, meaning that a process may be represented in terms of other, more detailed processes of the same sort, interconnected. Conversely, two or more connected processes may be represented by a single higher-level process with some of the detailed information suppressed. The processes represented can be either deterministic (noiseless) or nondeterministic (noisy), and either lossless or lossy.

Block Diagram: Figure 7.1 is a block diagram. It shows how the processes are connected, but very little about how the processes achieve their purposes, or how the connections are made. It is useful for viewing the system at a highly abstract level. An interconnection in a block diagram can represent many bits.

Circuit Diagram: If the system is made of logic gates, a useful diagram is one showing such gates interconnected. For example, Figure 7.2 is an AND gate. Each input and output represents a wire with a single logic value, with, for example, a high voltage representing 1 and a low voltage 0. The number of possible bit patterns of a logic gate is greater than the number of physical wires; each wire could have two possible voltages, so for n input wires there would be 2^n possible input states. Often, but not always, the components in logic circuits are deterministic.
Probability Diagram: A process with n single-bit inputs and m single-bit outputs can be modeled by the probabilities relating the 2^n possible input bit patterns and the 2^m possible output patterns. For example, Figure 7.3 shows a gate with two inputs (four bit patterns) and one output. An example of such a gate is the AND gate, and its probability model is shown in Figure 7.4. Probability diagrams are discussed further in Section 7.2.

Information Diagram: A diagram that shows explicitly the information flow between processes is useful. In order to handle processes with noise or loss, the information associated with them can be shown. Information diagrams are discussed further in Section 7.6.

[Figure 7.2: Circuit diagram of an AND gate]

7.2 Probability Diagrams

The probability model of a process with n inputs and m outputs, where n and m are integers, is shown in Figure 7.5. The n input states are mutually exclusive, as are the m output states. If this process is implemented by logic gates the input would need at least log_2(n) but not as many as n wires.
[Figure 7.3: Probability model of a two-input one-output gate]

[Figure 7.4: Probability model of an AND gate]

This model for processes is conceptually simple and general. It works well for processes with a small number of bits. It was used for the binary channel in Chapter 6.

Unfortunately, the probability model is awkward when the number of input bits is moderate or large. The reason is that the inputs and outputs are represented in terms of mutually exclusive sets of events. If the events describe signals on, say, five wires, each of which can carry a high or low voltage signifying a Boolean 1 or 0, there would be 2^5 = 32 possible events. It is much easier to draw a logic gate, with five inputs representing physical variables, than a probability process with 32 input states. This exponential explosion of the number of possible input states gets even more severe when the process represents the evolution of the state of a physical system with a large number of atoms. For example, the number of molecules in a mole of gas is Avogadro's number N_A ≈ 6.02 × 10^23. If each atom had just one associated Boolean variable, there would be 2^(N_A) states, far greater than the number of particles in the universe. And there would not be time to even list all the particles, much less do any calculations: the number of microseconds since the Big Bang is less than 10^24.

Despite this limitation, the probability diagram model is useful conceptually. Let's review the fundamental ideas in communications, introduced in Chapter 6, in the context of such diagrams.

[Figure 7.5: Probability model, with n inputs and m outputs]

We assume that each possible input state of a process can lead to one or more output states. For each input i denote the probability that this input leads to the output j as c_ji. These transition probabilities c_ji can be thought of as a table, or matrix, with as many columns as there are input states, and as many rows as output states. We will use i as an index over the input states and j over the output states, and denote the event associated with the selection of input i as A_i and the event associated with output j as B_j.

The transition probabilities are properties of the process, and do not depend on the inputs to the process. The transition probabilities lie between 0 and 1, and for each i their sum over the output index j is 1, since for each possible input event exactly one output event happens:

    0 ≤ c_ji ≤ 1            (7.1)
    1 = Σ_j c_ji            (7.2)

If the number of input states is the same as the number of output states then c_ji is a square matrix; otherwise it has more columns than rows or vice versa.

This description has great generality. It applies to a deterministic process (although it may not be the most convenient; a truth table giving the output for each of the inputs is usually simpler to think about). For such a process, each column of the c_ji matrix contains one element that is 1 and all the other elements are 0. It also applies to a nondeterministic channel (i.e., one with noise). It applies to the source encoder and decoder, to the compressor and expander, and to the channel encoder and decoder. It applies to logic gates and to devices which perform arbitrary memoryless computation (sometimes called "combinational logic" in distinction to "sequential logic" which can involve prior states). It even applies to transitions taken by a physical system from one of its states to the next. It applies if the number of output states is greater than the number of input states (for example channel encoders) or less (for example channel decoders).

If a process input is determined by random events A_i with probability distribution p(A_i) then the various other probabilities can be calculated.
The conditional output probabilities, conditioned on the input, are

    p(B_j | A_i) = c_ji                  (7.3)

The unconditional probability of each output p(B_j) is

    p(B_j) = Σ_i c_ji p(A_i)             (7.4)

Finally, the joint probability of each input with each output p(A_i, B_j) and the backward conditional probabilities p(A_i | B_j) can be found using Bayes' Theorem:

    p(A_i, B_j) = p(B_j) p(A_i | B_j)    (7.5)
                = p(A_i) p(B_j | A_i)    (7.6)
                = p(A_i) c_ji            (7.7)

7.2.1 Example: AND Gate

The AND gate is deterministic (it has no noise) but is lossy, because knowledge of the output is not generally enough to infer the input. The transition matrix is

    [ c_0(00)  c_0(01)  c_0(10)  c_0(11) ]   [ 1  1  1  0 ]
    [ c_1(00)  c_1(01)  c_1(10)  c_1(11) ] = [ 0  0  0  1 ]    (7.8)

The probability model for this gate is shown in Figure 7.4.
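These relations can be verified with a short computation. The sketch below is an added illustration, not part of the original notes: it builds the AND-gate transition matrix of Equation 7.8, assumes equiprobable inputs, and evaluates the column-sum property, the output probabilities, the joint probabilities, and the backward conditional probabilities from Bayes' Theorem.

```python
from fractions import Fraction

# Illustrative sketch (assumptions: equiprobable inputs).  Inputs i index the
# four patterns 00, 01, 10, 11; outputs j index the values 0 and 1.
c = [
    [1, 1, 1, 0],   # c[0][i]: probability that input i yields output 0
    [0, 0, 0, 1],   # c[1][i]: probability that input i yields output 1
]
p_A = [Fraction(1, 4)] * 4

# Equation 7.2: each column of transition probabilities sums to 1
assert all(sum(c[j][i] for j in range(2)) == 1 for i in range(4))

# Equation 7.4: unconditional output probabilities
p_B = [sum(c[j][i] * p_A[i] for i in range(4)) for j in range(2)]

# Equation 7.7: joint probabilities p(A_i, B_j) = p(A_i) c_ji
p_AB = [[p_A[i] * c[j][i] for i in range(4)] for j in range(2)]

# Bayes' Theorem: backward conditional probabilities p(A_i | B_j)
p_A_given_B = [[p_AB[j][i] / p_B[j] for i in range(4)] for j in range(2)]

print(p_B)             # p(B_0) = 3/4, p(B_1) = 1/4
print(p_A_given_B[0])  # given output 0, each of 00, 01, 10 has probability 1/3
```

Given output 1 the input must have been 11, but given output 0 the three remaining patterns are equally likely; this residual uncertainty about the input is exactly the loss discussed in Section 7.3.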
[Figure 7.6: Probability models for (a) the error-free binary channel and (b) the symmetric binary channel; transitions are labeled 1 in (a) and 1 − ε or ε in (b)]

7.2.2 Example: Binary Channel

The binary channel is well described by the probability model. Its properties, many of which were discussed in Chapter 6, are summarized below.

Consider first a noiseless binary channel which, when presented with one of two possible input values 0 or 1, transmits this value faithfully to its output. This is a very simple example of a discrete memoryless process. We represent this channel by a probability model with two inputs and two outputs. To indicate the fact that the input is replicated faithfully at the output, the inner workings of the box are revealed, in Figure 7.6(a), in the form of two paths, one from each input to the corresponding output, and each labeled by the probability 1. The transition matrix for this channel is

    [ c_00  c_01 ]   [ 1  0 ]
    [ c_10  c_11 ] = [ 0  1 ]      (7.9)

The input information I for this process is 1 bit if the two values are equally likely, or if p(A_0) ≠ p(A_1) the input information is

    I = p(A_0) log_2(1 / p(A_0)) + p(A_1) log_2(1 / p(A_1))      (7.10)

The output information J has a similar formula, using the output probabilities p(B_0) and p(B_1). Since the input and output are the same in this case, it is always possible to infer the input when the output has been observed. The amount of information out, J, is the same as I, the amount in: J = I. This noiseless channel is effective for its intended purpose, which is to permit the receiver, at the output, to infer the value at the input.

Next, let us suppose that this channel occasionally makes errors. Thus if the input is 1 the output is not always 1, but with the "bit error probability" ε is flipped to the wrong value 0, and hence is correct only with probability 1 − ε. Similarly, for the input of 0, the probability of error is ε. Then the transition matrix is

    [ c_00  c_01 ]   [ 1 − ε    ε   ]
    [ c_10  c_11 ] = [   ε    1 − ε ]      (7.11)

This model, with random behavior, is sometimes called the Symmetric Binary Channel (SBC), symmetric in the sense that the errors in the two directions (from 0 to 1 and vice versa) are equally likely. The probability diagram for this channel is shown in Figure 7.6(b), with two paths leaving from each input, and two paths converging on each output.

Clearly the errors in the SBC introduce some uncertainty into the output over and above the uncertainty that is present in the input signal. Intuitively, we can say that noise has been added, so that the output is composed in part of desired signal and in part of noise. Or we can say that some of our information is lost in the channel. Both of these effects have happened, but as we will see they are not always related; it is
possible for processes to introduce noise but have no loss, or vice versa. In Section 7.3 we will calculate the amount of information lost or gained because of noise or loss, in bits.

Loss of information happens because it is no longer possible to tell with certainty what the input signal is, when the output is observed. Loss shows up in drawings like Figure 7.6(b) where two or more paths converge on the same output. Noise happens because the output is not determined precisely by the input. Noise shows up in drawings like Figure 7.6(b) where two or more paths diverge from the same input. Despite noise and loss, however, some information can be transmitted from the input to the output (i.e., observation of the output can allow one to make some inferences about the input).

We now return to our model of a general discrete memoryless nondeterministic lossy process, and derive formulas for noise, loss, and information transfer (which will be called "mutual information"). We will then come back to the symmetric binary channel and interpret these formulas.

7.3 Information, Loss, and Noise

For the general discrete memoryless process, useful measures of the amount of information presented at the input and the amount transmitted to the output can be defined. We suppose the process state is represented by random events A_i with probability distribution p(A_i). The information at the input I is the same as the entropy of this source. (We have chosen to use the letter I for input information not because it stands for "input" or "information" but rather for the index i that goes over the input probability distribution. The output information will be denoted J for a similar reason.)

    I = Σ_i p(A_i) log_2(1 / p(A_i))      (7.12)

This is the amount of uncertainty we have about the input if we do not know what it is, or before it has been selected by the source. A similar formula applies at the output.
The output information J can also be expressed in terms of the input probability distribution and the channel transition matrix:

    J = Σ_j p(B_j) log_2(1 / p(B_j))
      = Σ_j (Σ_i c_ji p(A_i)) log_2(1 / (Σ_i c_ji p(A_i)))      (7.13)

Note that this measure of information at the output J refers to the identity of the output state, not the input state. It represents our uncertainty about the output state before we discover what it is. If our objective is to determine the input, J is not what we want. Instead, we should ask about the uncertainty of our knowledge of the input state. This can be expressed from the vantage point of the output by asking about the uncertainty of the input state given one particular output state, and then averaging over those states. This uncertainty, for each j, is given by a formula like those above but using the reverse conditional probabilities p(A_i | B_j):

    Σ_i p(A_i | B_j) log_2(1 / p(A_i | B_j))      (7.14)

Then your average uncertainty about the input after learning the output is found by computing the average over the output probability distribution, i.e., by multiplying by p(B_j) and summing over j:
    L = Σ_j p(B_j) Σ_i p(A_i | B_j) log_2(1 / p(A_i | B_j))
      = Σ_j Σ_i p(A_i, B_j) log_2(1 / p(A_i | B_j))      (7.15)

Note that the second formula uses the joint probability distribution p(A_i, B_j). We have denoted this average uncertainty by L and will call it "loss". This term is appropriate because it is the amount of information about the input that cannot be found by examining the output state; in this sense it got "lost" in the transition from input to output. In the special case that the process allows the input state to be identified uniquely for all possible output states, the process is lossless and, as you would expect, L = 0.

It was proved in Chapter 6 that L ≤ I or, in words, that the uncertainty after learning the output is less than (or perhaps equal to) the uncertainty before. This result was proved using the Gibbs inequality.

The amount of information we learn about the input state upon being told the output state is our uncertainty before being told, which is I, less our uncertainty after being told, which is L. We have just shown that this amount cannot be negative, since L ≤ I. As in Chapter 6, we denote the amount we have learned as M = I − L, and call this the "mutual information" between input and output. This is an important quantity because it is the amount of information that gets through the process.

To recapitulate the relations among these information quantities:

    I = Σ_i p(A_i) log_2(1 / p(A_i))                          (7.16)
    L = Σ_j p(B_j) Σ_i p(A_i | B_j) log_2(1 / p(A_i | B_j))   (7.17)
    M = I − L                                                 (7.18)
    0 ≤ M ≤ I                                                 (7.19)
    0 ≤ L ≤ I                                                 (7.20)

Processes with outputs that can be produced by more than one input have loss. These processes may also be nondeterministic, in the sense that one input state can lead to more than one output state. The symmetric binary channel with loss is an example of a process that has loss and is also nondeterministic. However, there are some processes that have loss but are deterministic. An example is the AND logic gate, which has four mutually exclusive inputs and two outputs 0 and 1. Three of the four inputs lead to the output 0.
This gate has loss but is perfectly deterministic because each input state leads to exactly one output state. The fact that there is loss means that the AND gate is not reversible.

There is a quantity similar to L that characterizes a nondeterministic process, whether or not it has loss. The output of a nondeterministic process contains variations that cannot be predicted from knowing the input, that behave like noise in audio systems. We will define the noise N of a process as the uncertainty in the output, given the input state, averaged over all input states. It is very similar to the definition of loss, but with the roles of input and output reversed:

    N = Σ_i p(A_i) Σ_j p(B_j | A_i) log_2(1 / p(B_j | A_i))
      = Σ_i p(A_i) Σ_j c_ji log_2(1 / c_ji)      (7.21)
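As a numerical check on these definitions, the following sketch (an added illustration; the equiprobable-input assumption is mine) evaluates I, L, N, and M for the AND gate and confirms that it has loss but zero noise:

```python
from math import log2

# Illustrative sketch: I, L, N and M = I - L for the AND gate,
# assuming the four input patterns 00, 01, 10, 11 are equiprobable.
c = [[1, 1, 1, 0],    # c[j][i]: probability of output j given input i
     [0, 0, 0, 1]]
p_A = [0.25] * 4

p_B = [sum(c[j][i] * p_A[i] for i in range(4)) for j in range(2)]
p_AB = [[p_A[i] * c[j][i] for i in range(4)] for j in range(2)]

# Input information (entropy of the source)
I = sum(p * log2(1 / p) for p in p_A if p)
# Loss, in the joint-probability form: sum of p(A_i, B_j) log2(1 / p(A_i|B_j))
L = sum(p_AB[j][i] * log2(p_B[j] / p_AB[j][i])
        for j in range(2) for i in range(4) if p_AB[j][i])
# Noise: uncertainty in the output given the input, averaged over inputs
N = sum(p_A[i] * c[j][i] * log2(1 / c[j][i])
        for i in range(4) for j in range(2) if c[j][i])
M = I - L
print(I, L, N, M)
```

With equiprobable inputs, I = 2 bits, L = 0.75 log_2(3) ≈ 1.19 bits, N = 0 (every transition probability is 0 or 1), and M ≈ 0.81 bits gets through the gate.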
Steps similar to those above for loss show analogous results. What may not be obvious, but can be proven easily, is that the mutual information M plays exactly the same sort of role for noise as it does for loss. The formulas relating noise to other information measures are like those for loss above, where the mutual information M is the same:

    J = Σ_j p(B_j) log_2(1 / p(B_j))            (7.22)
    N = Σ_i p(A_i) Σ_j c_ji log_2(1 / c_ji)     (7.23)
    M = J − N                                   (7.24)
    0 ≤ M ≤ J                                   (7.25)

It follows from these results that

    0 ≤ N ≤ J          (7.26)
    J − I = N − L      (7.27)

7.3.1 Example: Symmetric Binary Channel

For the SBC with bit error probability ε, these formulas can be evaluated, even if the two input probabilities p(A_0) and p(A_1) are not equal. If they happen to be equal (each 0.5), then the various information measures for the SBC in bits are particularly simple:

    I = 1 bit                                              (7.28)
    J = 1 bit                                              (7.29)
    L = N = ε log_2(1/ε) + (1 − ε) log_2(1/(1 − ε))        (7.30)
    M = 1 − ε log_2(1/ε) − (1 − ε) log_2(1/(1 − ε))        (7.31)

The errors in the channel have destroyed some of the information, in the sense that they have prevented an observer at the output from knowing with certainty what the input is. They have thereby permitted only the amount of information M = I − L to be passed through the channel to the output.

7.4 Deterministic Examples

This probability model applies to any system with mutually exclusive inputs and outputs, whether or not the transitions are random. If all the transition probabilities c_ji are equal to either 0 or 1, then the process is deterministic.

A simple example of a deterministic process is the NOT gate, which implements Boolean negation. If the input is 1 the output is 0 and vice versa. The input and output information are the same, I = J, and there is no noise or loss: N = L = 0. The information that gets through the gate is M = I. See Figure 7.7(a).

A slightly more complex deterministic process is the exclusive or, XOR gate. This is a Boolean function of two input variables and therefore there are four possible input values. When the gate is represented by
a circuit diagram, there are two input wires representing the two inputs. When the gate is represented as a discrete process using a probability diagram like Figure 7.7(b), there are four mutually exclusive inputs and two mutually exclusive outputs. If the probabilities of the four inputs are each 0.25, then I = 2 bits, and the two output probabilities are each 0.5, so J = 1 bit. There is therefore 1 bit of loss, and the mutual information is 1 bit. The loss arises from the fact that two different inputs produce the same output; for example, if the output 1 is observed the input could be either 01 or 10. There is no noise introduced into the output because each of the transition parameters is either 0 or 1, i.e., there are no inputs with multiple transition paths coming from them.

[Figure 7.7: Probability models of deterministic gates: (a) NOT gate, (b) XOR gate]

Other, more complex logic functions can be represented in similar ways. However, for logic functions with n physical inputs, a probability diagram is awkward if n is larger than 3 or 4 because the number of inputs is 2^n.

7.4.1 Error Correcting Example

The Hamming Code encoder and decoder can be represented as discrete processes in this form. Consider the (3, 1, 3) code, otherwise known as triple redundancy. The encoder has one 1-bit input (2 values) and a 3-bit output (8 values). The input 1 is wired directly to the output 111 and the input 0 to the output 000. The other six outputs are not connected, and therefore occur with probability 0. See Figure 7.8(a). The encoder has N = 0, L = 0, and M = I = J. Note that the output information is not three bits even though three physical bits are used to represent it, because of the intentional redundancy.

The output of the triple redundancy encoder is intended to be passed through a channel with the possibility of a single bit error in each block of 3 bits. This noisy channel can be modelled as a nondeterministic process with 8 inputs and 8 outputs, Figure 7.8(b).
Each of the 8 inputs is connected with a (presumably) high-probability connection to the corresponding output, and with low-probability connections to the three other values separated by Hamming distance 1. For example, the input 000 is connected only to the outputs 000 (with high probability) and 001, 010, and 100, each with low probability. This channel introduces noise since there are multiple paths coming from each input. In general, when driven with arbitrary bit patterns, there is also loss. However, when driven from the encoder of Figure 7.8(a), the loss is 0 bits because only two of the eight bit patterns have nonzero probability. The input information to the noisy channel is 1 bit and the output information is greater than 1 bit because of the added noise. This example demonstrates that the values of both noise and loss depend on both the physics of the channel and the probabilities of the input signal.

The decoder, used to recover the signal originally put into the encoder, is shown in Figure 7.8(c). The transition parameters are straightforward: each input is connected to only one output. The decoder has loss (since multiple paths converge on each of the outputs) but no noise (since each input goes to only one output).
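The behavior of this channel can be illustrated numerically. The sketch below is an added example with an assumed error model, not one fixed by the notes: each 3-bit block is delivered intact with probability 1 − ε and suffers each of the three single-bit errors with probability ε/3, with ε = 0.1.

```python
from math import log2

# Illustrative sketch of the triple-redundancy channel of Figure 7.8(b).
# Assumptions (mine): eps = 0.1, split equally over the three 1-bit errors.
eps = 0.1
patterns = [format(k, "03b") for k in range(8)]

def neighbours(s):
    """Patterns at Hamming distance 1 from s."""
    return {s[:k] + ("1" if s[k] == "0" else "0") + s[k + 1:] for k in range(3)}

c = [[1 - eps if patterns[j] == patterns[i]
      else eps / 3 if patterns[j] in neighbours(patterns[i])
      else 0.0
      for i in range(8)] for j in range(8)]

# Driven by the encoder of Figure 7.8(a): only 000 and 111 occur, equally often
p_A = [0.5 if s in ("000", "111") else 0.0 for s in patterns]
p_B = [sum(c[j][i] * p_A[i] for i in range(8)) for j in range(8)]
p_AB = [[p_A[i] * c[j][i] for i in range(8)] for j in range(8)]

J = sum(q * log2(1 / q) for q in p_B if q)            # output information
L = sum(p_AB[j][i] * log2(p_B[j] / p_AB[j][i])        # loss, joint form
        for j in range(8) for i in range(8) if p_AB[j][i])
print(J, L)
```

Because no output pattern is reachable from both 000 and 111, every backward probability p(A_i | B_j) is 1 and the loss is zero, while the output information J exceeds the 1 bit of input information by exactly the noise N. If instead all eight input patterns were allowed, some outputs would be reachable from several inputs and the loss would be positive.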
[Figure 7.8: Triple redundancy error correction: (a) Encoder, (b) Channel, (c) Decoder]

7.5 Capacity

In Chapter 6 of these notes, the channel capacity was defined. This concept can be generalized to other processes. Call W the maximum rate at which the input state of the process can be detected at the output. Then the rate at which information flows through the process can be as large as W·M. However, this product depends on the input probability distribution p(A_i) and hence is not a property of the process itself but depends on how it is used. A better definition of process capacity is found by looking at how M can vary with different input probability distributions. Select the largest mutual information for any input probability distribution, and call that M_max. Then the process capacity C is defined as

    C = W · M_max      (7.32)

It is easy to see that M_max cannot be arbitrarily large, since M ≤ I and I ≤ log_2 n where n is the number of distinct input states. In the example of symmetric binary channels, it is not difficult to show that the probability distribution that maximizes M is the one with equal probability for each of the two input states.

7.6 Information Diagrams

An information diagram is a representation of one or more processes explicitly showing the amount of information passing among them. It is a useful way of representing the input, output, and mutual information and the noise and loss. Information diagrams are at a high level of abstraction and do not display the detailed probabilities that give rise to these information measures.

It has been shown that all five information measures, I, J, L, N, and M, are nonnegative. It is not necessary that L and N be the same, although they are for the symmetric binary channel whose inputs have equal probabilities for 0 and 1. It is possible to have processes with loss but no noise (e.g., the XOR gate), or noise but no loss (e.g., the noisy channel for triple redundancy).

It is convenient to think of information as a physical quantity that is transmitted through this process much the way physical material may be processed in a production line.
The material being produced comes into the manufacturing area, and some is lost due to errors or other causes, some contamination may be added (like noise), and the output quantity is the input quantity, less the loss, plus the noise. The useful product is the input minus the loss, or alternatively the output minus the noise. The flow of information through a discrete memoryless process is shown using this paradigm in Figure 7.9.

An interesting question arises. Probabilities depend on your current state of knowledge, and one observer's knowledge may be different from another's. This means that the loss, the noise, and the information transmitted are all observer-dependent. Is it OK that important engineering quantities like noise and loss depend on who you are and what you know? If you happen to know something about the input that
your colleague does not, is it OK for your design of a nondeterministic process to be different, and to take advantage of your knowledge? This question is something to think about; there are times when your knowledge, if correct, can be very valuable in simplifying designs, but there are other times when it is prudent to design using some worst-case assumption of input probabilities so that in case the input does not conform to your assumed probabilities your design still works.

[Figure 7.9: Information flow in a discrete memoryless process: input I, output J, mutual information M, with noise N added and loss L removed]

Information diagrams are not often used for communication systems. There is usually no need to account for the noise sources or what happens to the lost information. However, such diagrams are useful in domains where noise and loss cannot occur. One example is reversible computing, a style of computation in which the entire process can, in principle, be run backwards. Another example is quantum communications, where information cannot be discarded without affecting the environment.

7.6.1 Notation

Different authors use different notation for the quantities we have here called I, J, L, N, and M. In his original paper Shannon called the input probability distribution x and the output distribution y. The input information I was denoted H(x) and the output information J was H(y). The loss L (which Shannon called "equivocation") was denoted H_y(x) and the noise N was denoted H_x(y). The mutual information M was denoted R. Shannon used the word "entropy" to refer to information, and most authors have followed his lead. Frequently information quantities are denoted by I, H, or S, often as functions of probability distributions, or "ensembles". In physics entropy is often denoted S. Another common notation is to use A to stand for the input probability distribution, or ensemble, and B to stand for the output probability distribution. Then I is denoted I(A), J is I(B), L is I(A|B), N is I(B|A), and M is I(A; B).
If there is a need for the information associated with A and B jointly (as opposed to conditionally) it can be denoted I(A, B) or I(AB).

7.7 Cascaded Processes

Consider two processes in cascade. This term refers to having the output from one process serve as the input to another process. The two cascaded processes can then be modeled as one larger process, if the internal states are hidden.

We have seen that discrete memoryless processes are characterized by values of I, J, L, N, and M. Figure 7.10(a) shows a cascaded pair of processes, each characterized by its own parameters. Of course the parameters of the second process depend on the input probabilities it encounters, which are determined by the transition probabilities (and input probabilities) of the first process. But the cascade of the two processes is itself a discrete memoryless process and therefore should have its own five parameters, as suggested in Figure 7.10(b). The parameters of the overall model can be calculated
[Figure 7.10: Cascade of two discrete memoryless processes: (a) the two processes, with I_1, J_1 = I_2, J_2 and parameters M_1, N_1, L_1 and M_2, N_2, L_2; (b) equivalent single process with parameters I, J, M, N, L]

either of two ways. First, the transition probabilities of the overall process can be found from the transition probabilities of the two models that are connected together; in fact the matrix of transition probabilities is merely the matrix product of the two transition probability matrices for process 1 and process 2. All the parameters can be calculated from this matrix and the input probabilities.

The other approach is to seek formulas for I, J, L, N, and M of the overall process in terms of the corresponding quantities for the component processes. This is trivial for the input and output quantities: I = I_1 and J = J_2. However, it is more difficult for L and N. Even though L and N cannot generally be found exactly from L_1, L_2, N_1, and N_2, it is possible to find upper and lower bounds for them. These are useful in providing insight into the operation of the cascade.

It can be easily shown that since I = I_1, J_1 = I_2, and J = J_2,

    L − N = (L_1 + L_2) − (N_1 + N_2)      (7.33)

It is then straightforward (though perhaps tedious) to show that the loss L for the overall process is not always equal to the sum of the losses for the two components L_1 + L_2, but instead

    0 ≤ L_1 ≤ L ≤ L_1 + L_2      (7.34)

so that the loss is bounded from above and below. Also,

    L_1 + L_2 − N_1 ≤ L ≤ L_1 + L_2      (7.35)

so that if the first process is noise-free (N_1 = 0) then L is exactly L_1 + L_2. There are similar formulas for N in terms of N_1 + N_2:

    0 ≤ N_2 ≤ N ≤ N_1 + N_2      (7.36)
    N_1 + N_2 − L_2 ≤ N ≤ N_1 + N_2      (7.37)

Similar formulas for the mutual information of the cascade M follow from these results:

    M_1 − L_2 ≤ M ≤ M_1 ≤ I             (7.38)
    M_1 − L_2 ≤ M ≤ M_1 + N_1 − L_2     (7.39)
    M_2 − N_1 ≤ M ≤ M_2 ≤ J             (7.40)
    M_2 − N_1 ≤ M ≤ M_2 + L_2 − N_1     (7.41)
Other formulas for M are easily derived from Equation 7.18 applied to the first process and the cascade, and Equation 7.24 applied to the second process and the cascade:

    M = M_1 + L_1 − L
      = M_1 + N_1 + N_2 − N − L_2
      = M_2 + N_2 − N
      = M_2 + L_1 + L_2 − L − N_1      (7.42)

where the second formula in each case comes from the use of Equation 7.33.

Note that M cannot exceed either M_1 or M_2, i.e., M ≤ M_1 and M ≤ M_2. This is consistent with the interpretation of M as the information that gets through: information that gets through the cascade must be able to get through the first process and also through the second process. As a special case, if the second process is lossless, L_2 = 0 and then M = M_1. In that case, the second process does not lower the mutual information below that of the first process. Similarly, if the first process is noiseless, then N_1 = 0 and M = M_2.

The channel capacity C of the cascade is, similarly, no greater than either the channel capacity of the first process or that of the second process: C ≤ C_1 and C ≤ C_2. Other results relating the channel capacities are not a trivial consequence of the formulas above because C is by definition the maximum M over all possible input probability distributions; the distribution that maximizes M_1 may not lead to the probability distribution for the input of the second process that maximizes M_2.
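The cascade results can be checked numerically. In the sketch below (an added illustration; the error probabilities ε_1 = 0.1 and ε_2 = 0.2 are assumptions), two symmetric binary channels are cascaded. The overall transition matrix is the matrix product of the component matrices, the cascade's M never exceeds M_1 or M_2, and a scan over input distributions confirms that M is maximized at equal input probabilities, as Section 7.5 states for the SBC.

```python
from math import log2

def sbc(eps):
    """Transition matrix c[j][i] of a symmetric binary channel."""
    return [[1 - eps, eps], [eps, 1 - eps]]

def matmul(a, b):
    """Transition matrix of b followed by a (b's output feeds a's input)."""
    return [[sum(a[j][k] * b[k][i] for k in range(2)) for i in range(2)]
            for j in range(2)]

def mutual_information(c, p_A):
    """M = J - N for a 2-input, 2-output process."""
    p_B = [sum(c[j][i] * p_A[i] for i in range(2)) for j in range(2)]
    J = sum(q * log2(1 / q) for q in p_B if q)
    N = sum(p_A[i] * c[j][i] * log2(1 / c[j][i])
            for i in range(2) for j in range(2) if c[j][i])
    return J - N

c1, c2 = sbc(0.1), sbc(0.2)       # assumed error probabilities
p_A = [0.5, 0.5]
M1 = mutual_information(c1, p_A)
# The second process sees the first one's output distribution as its input
p_mid = [sum(c1[j][i] * p_A[i] for i in range(2)) for j in range(2)]
M2 = mutual_information(c2, p_mid)
M = mutual_information(matmul(c2, c1), p_A)
print(M1, M2, M)                  # M is no larger than M1 or M2

# Scan input distributions p(A_0) = p for the cascade's largest M
best_p = max((k / 1000 for k in range(1, 1000)),
             key=lambda p: mutual_information(matmul(c2, c1), [p, 1 - p]))
print(best_p)
```

The cascade behaves like a single SBC with effective error probability ε = ε_1(1 − ε_2) + (1 − ε_1)ε_2 = 0.26, and the scan finds the maximizing input distribution at p = 0.5, as expected by symmetry.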
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationLecture 14 (03/27/18). Channels. Decoding. Preview of the Capacity Theorem.
Lecture 14 (03/27/18). Channels. Decodng. Prevew of the Capacty Theorem. A. Barg The concept of a communcaton channel n nformaton theory s an abstracton for transmttng dgtal (and analog) nformaton from
More informationHopfield Training Rules 1 N
Hopfeld Tranng Rules To memorse a sngle pattern Suppose e set the eghts thus - = p p here, s the eght beteen nodes & s the number of nodes n the netor p s the value requred for the -th node What ll the
More informationInner Product. Euclidean Space. Orthonormal Basis. Orthogonal
Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,
More informationUniversity of Washington Department of Chemistry Chemistry 453 Winter Quarter 2015
Lecture 2. 1/07/15-1/09/15 Unversty of Washngton Department of Chemstry Chemstry 453 Wnter Quarter 2015 We are not talkng about truth. We are talkng about somethng that seems lke truth. The truth we want
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationChapter 7 Channel Capacity and Coding
Wreless Informaton Transmsson System Lab. Chapter 7 Channel Capacty and Codng Insttute of Communcatons Engneerng atonal Sun Yat-sen Unversty Contents 7. Channel models and channel capacty 7.. Channel models
More informationQuantum and Classical Information Theory with Disentropy
Quantum and Classcal Informaton Theory wth Dsentropy R V Ramos rubensramos@ufcbr Lab of Quantum Informaton Technology, Department of Telenformatc Engneerng Federal Unversty of Ceara - DETI/UFC, CP 6007
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationFREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,
FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then
More informationChapter 7 Channel Capacity and Coding
Chapter 7 Channel Capacty and Codng Contents 7. Channel models and channel capacty 7.. Channel models Bnary symmetrc channel Dscrete memoryless channels Dscrete-nput, contnuous-output channel Waveform
More informationExpected Value and Variance
MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationLecture Space-Bounded Derandomization
Notes on Complexty Theory Last updated: October, 2008 Jonathan Katz Lecture Space-Bounded Derandomzaton 1 Space-Bounded Derandomzaton We now dscuss derandomzaton of space-bounded algorthms. Here non-trval
More informationThe Order Relation and Trace Inequalities for. Hermitian Operators
Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence
More informationThermodynamics and statistical mechanics in materials modelling II
Course MP3 Lecture 8/11/006 (JAE) Course MP3 Lecture 8/11/006 Thermodynamcs and statstcal mechancs n materals modellng II A bref résumé of the physcal concepts used n materals modellng Dr James Ellott.1
More informationConvergence of random processes
DS-GA 12 Lecture notes 6 Fall 216 Convergence of random processes 1 Introducton In these notes we study convergence of dscrete random processes. Ths allows to characterze phenomena such as the law of large
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationComplete subgraphs in multipartite graphs
Complete subgraphs n multpartte graphs FLORIAN PFENDER Unverstät Rostock, Insttut für Mathematk D-18057 Rostock, Germany Floran.Pfender@un-rostock.de Abstract Turán s Theorem states that every graph G
More information2 More examples with details
Physcs 129b Lecture 3 Caltech, 01/15/19 2 More examples wth detals 2.3 The permutaton group n = 4 S 4 contans 4! = 24 elements. One s the dentty e. Sx of them are exchange of two objects (, j) ( to j and
More information} Often, when learning, we deal with uncertainty:
Uncertanty and Learnng } Often, when learnng, we deal wth uncertanty: } Incomplete data sets, wth mssng nformaton } Nosy data sets, wth unrelable nformaton } Stochastcty: causes and effects related non-determnstcally
More informationPrinciple of Maximum Entropy
Chapter 9 Prncple of Maxmum Entropy Secton 8.2 presented the technque of estmatng nput probabltes of a process that are unbased but consstent wth known constrants expressed n terms of averages, or expected
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationLinear Regression Analysis: Terminology and Notation
ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More information1 Generating functions, continued
Generatng functons, contnued. Generatng functons and parttons We can make use of generatng functons to answer some questons a bt more restrctve than we ve done so far: Queston : Fnd a generatng functon
More informationEGR 544 Communication Theory
EGR 544 Communcaton Theory. Informaton Sources Z. Alyazcoglu Electrcal and Computer Engneerng Department Cal Poly Pomona Introducton Informaton Source x n Informaton sources Analog sources Dscrete sources
More informationC/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1
C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationDepartment of Electrical & Electronic Engineeing Imperial College London. E4.20 Digital IC Design. Median Filter Project Specification
Desgn Project Specfcaton Medan Flter Department of Electrcal & Electronc Engneeng Imperal College London E4.20 Dgtal IC Desgn Medan Flter Project Specfcaton A medan flter s used to remove nose from a sampled
More informationLecture 4: Universal Hash Functions/Streaming Cont d
CSE 5: Desgn and Analyss of Algorthms I Sprng 06 Lecture 4: Unversal Hash Functons/Streamng Cont d Lecturer: Shayan Oves Gharan Aprl 6th Scrbe: Jacob Schreber Dsclamer: These notes have not been subjected
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationDensity matrix. c α (t)φ α (q)
Densty matrx Note: ths s supplementary materal. I strongly recommend that you read t for your own nterest. I beleve t wll help wth understandng the quantum ensembles, but t s not necessary to know t n
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationLimited Dependent Variables
Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More information1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 -Davd Klenfeld - Fall 2005 (revsed Wnter 2011) 1 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys
More informationThe Second Anti-Mathima on Game Theory
The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationPhysics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1
P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationComparison of Regression Lines
STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationHidden Markov Models
CM229S: Machne Learnng for Bonformatcs Lecture 12-05/05/2016 Hdden Markov Models Lecturer: Srram Sankararaman Scrbe: Akshay Dattatray Shnde Edted by: TBD 1 Introducton For a drected graph G we can wrte
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 12 10/21/2013. Martingale Concentration Inequalities and Applications
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 1 10/1/013 Martngale Concentraton Inequaltes and Applcatons Content. 1. Exponental concentraton for martngales wth bounded ncrements.
More informationj) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1
Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons
More informationEntropy Coding. A complete entropy codec, which is an encoder/decoder. pair, consists of the process of encoding or
Sgnal Compresson Sgnal Compresson Entropy Codng Entropy codng s also known as zero-error codng, data compresson or lossless compresson. Entropy codng s wdely used n vrtually all popular nternatonal multmeda
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationError Probability for M Signals
Chapter 3 rror Probablty for M Sgnals In ths chapter we dscuss the error probablty n decdng whch of M sgnals was transmtted over an arbtrary channel. We assume the sgnals are represented by a set of orthonormal
More informationAdvanced Circuits Topics - Part 1 by Dr. Colton (Fall 2017)
Advanced rcuts Topcs - Part by Dr. olton (Fall 07) Part : Some thngs you should already know from Physcs 0 and 45 These are all thngs that you should have learned n Physcs 0 and/or 45. Ths secton s organzed
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More information3.1 ML and Empirical Distribution
67577 Intro. to Machne Learnng Fall semester, 2008/9 Lecture 3: Maxmum Lkelhood/ Maxmum Entropy Dualty Lecturer: Amnon Shashua Scrbe: Amnon Shashua 1 In the prevous lecture we defned the prncple of Maxmum
More informationExample: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,
The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson
More informationSTATISTICAL MECHANICAL ENSEMBLES 1 MICROSCOPIC AND MACROSCOPIC VARIABLES PHASE SPACE ENSEMBLES. CHE 524 A. Panagiotopoulos 1
CHE 54 A. Panagotopoulos STATSTCAL MECHACAL ESEMBLES MCROSCOPC AD MACROSCOPC ARABLES The central queston n Statstcal Mechancs can be phrased as follows: f partcles (atoms, molecules, electrons, nucle,
More informationHopfield networks and Boltzmann machines. Geoffrey Hinton et al. Presented by Tambet Matiisen
Hopfeld networks and Boltzmann machnes Geoffrey Hnton et al. Presented by Tambet Matsen 18.11.2014 Hopfeld network Bnary unts Symmetrcal connectons http://www.nnwj.de/hopfeld-net.html Energy functon The
More informationCSE 599d - Quantum Computing Introduction to Quantum Error Correction
CSE 599d - Quantum Computng Introducton to Quantum Error Correcton Dave Bacon Department of Computer Scence & Engneerng, Unversty of Washngton In the last lecture we saw that open quantum systems could
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationFormulas for the Determinant
page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationa b a In case b 0, a being divisible by b is the same as to say that
Secton 6.2 Dvsblty among the ntegers An nteger a ε s dvsble by b ε f there s an nteger c ε such that a = bc. Note that s dvsble by any nteger b, snce = b. On the other hand, a s dvsble by only f a = :
More informationLecture 10: May 6, 2013
TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,
More informationLecture 17 : Stochastic Processes II
: Stochastc Processes II 1 Contnuous-tme stochastc process So far we have studed dscrete-tme stochastc processes. We studed the concept of Makov chans and martngales, tme seres analyss, and regresson analyss
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016
U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationA Comparison between Weight Spectrum of Different Convolutional Code Types
A Comparson between Weght Spectrum of fferent Convolutonal Code Types Baltă Hora, Kovac Mara Abstract: In ths paper we present the non-recursve systematc, recursve systematc and non-recursve non-systematc
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationIntroduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:
CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and
More informationMaximizing the number of nonnegative subsets
Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum
More informationCase A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.
THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty
More informationComputational Biology Lecture 8: Substitution matrices Saad Mneimneh
Computatonal Bology Lecture 8: Substtuton matrces Saad Mnemneh As we have ntroduced last tme, smple scorng schemes lke + or a match, - or a msmatch and -2 or a gap are not justable bologcally, especally
More informationA how to guide to second quantization method.
Phys. 67 (Graduate Quantum Mechancs Sprng 2009 Prof. Pu K. Lam. Verson 3 (4/3/2009 A how to gude to second quantzaton method. -> Second quantzaton s a mathematcal notaton desgned to handle dentcal partcle
More informationECE559VV Project Report
ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationTHE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY. William A. Pearlman. References: S. Arimoto - IEEE Trans. Inform. Thy., Jan.
THE ARIMOTO-BLAHUT ALGORITHM FOR COMPUTATION OF CHANNEL CAPACITY Wllam A. Pearlman 2002 References: S. Armoto - IEEE Trans. Inform. Thy., Jan. 1972 R. Blahut - IEEE Trans. Inform. Thy., July 1972 Recall
More informationJAB Chain. Long-tail claims development. ASTIN - September 2005 B.Verdier A. Klinger
JAB Chan Long-tal clams development ASTIN - September 2005 B.Verder A. Klnger Outlne Chan Ladder : comments A frst soluton: Munch Chan Ladder JAB Chan Chan Ladder: Comments Black lne: average pad to ncurred
More information9 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations
Physcs 171/271 - Chapter 9R -Davd Klenfeld - Fall 2005 9 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys a set
More informationOnline Appendix to: Axiomatization and measurement of Quasi-hyperbolic Discounting
Onlne Appendx to: Axomatzaton and measurement of Quas-hyperbolc Dscountng José Lus Montel Olea Tomasz Strzaleck 1 Sample Selecton As dscussed before our ntal sample conssts of two groups of subjects. Group
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationHidden Markov Models
Note to other teachers and users of these sldes. Andrew would be delghted f you found ths source materal useful n gvng your own lectures. Feel free to use these sldes verbatm, or to modfy them to ft your
More informationInformation Geometry of Gibbs Sampler
Informaton Geometry of Gbbs Sampler Kazuya Takabatake Neuroscence Research Insttute AIST Central 2, Umezono 1-1-1, Tsukuba JAPAN 305-8568 k.takabatake@ast.go.jp Abstract: - Ths paper shows some nformaton
More information