Synergy, Redundancy, and Independence in Population Codes


The Journal of Neuroscience, December 17, 2003 • 23(37):11539 • Behavioral/Systems/Cognitive

Synergy, Redundancy, and Independence in Population Codes

Elad Schneidman,1,2 William Bialek,2 and Michael J. Berry II1
Departments of 1Molecular Biology and 2Physics, Princeton University, Princeton, New Jersey

A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from questions about how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as questions about encoding.

Key words: encoding; decoding; neural code; information theory; signal correlations; noise correlations

Received March 12, 2003; revised Sept. 15, 2003; accepted Sept. 17, 2003. This work was supported by a Pew Scholars Award and a grant from the E.
Mathilda Ziegler Foundation to M.J.B. and by a grant from the Rothschild Foundation to E.S. We thank Adrienne Fairhall for many helpful discussions. Correspondence should be addressed to Michael J. Berry II, Department of Molecular Biology, Princeton University, Princeton, NJ. E-mail: berry@princeton.edu. Copyright © 2003 Society for Neuroscience.

Introduction

One of the fundamental insights of neuroscience is that single neurons make a small, but understandable, contribution to an animal's overall behavior. However, most behaviors involve large numbers of neurons, thousands or even millions. In addition, these neurons often are organized into layers or regions, such that nearby neurons have similar response properties. Thus, it is natural to ask under what conditions groups of neurons represent stimuli and direct behavior in a synergistic, redundant, or independent manner. With the increasing availability of multielectrode recordings, it now is possible to investigate how sensory data or motor intentions are encoded by groups of neurons and whether that population activity differs from what can be inferred from recordings of single neurons. Complementary to this question is how population activity can be decoded and used by subsequent neurons.

The code by which single neurons represent and transmit information has been studied intensively (Perkel and Bullock, 1968; Rieke et al., 1997; Dayan and Abbott, 2001). Many of the conceptual approaches and analytic tools used for the single-neuron case can be extended to the multiple-neuron case. The key additional issue is the nature and strength of correlations between neurons. Such correlations have been measured using simultaneous recordings, and their influence on population encoding has been assessed with a variety of methods (Perkel et al., 1967; Mastronarde, 1983; Aertsen et al., 1989; Gray and Singer, 1989; Abeles et al., 1993; Laurent and Davidowitz, 1994; Meister et al., 1995; Vaadia et al., 1995; Krahe et al., 2002).
The intuitive notion of synergy has been quantified in various systems using information theory (Gawne and Richmond, 1993; Gat and Tishby, 1999; Brenner et al., 2000; Petersen et al., 2001). Studies of population decoding have examined how animals might extract information from multiple spike trains (Georgopoulos et al., 1986; Abeles et al., 1993; Zohary et al., 1994; Warland et al., 1997; Brown et al., 1998; Hatsopoulos et al., 1998), as well as the limits of possible decoding algorithms (Palm et al., 1988; Seung and Sompolinsky, 1993; Salinas and Abbott, 1994; Brunel and Nadal, 1998; Zemel et al., 1998).

Here, we describe a quantitative framework for characterizing population encoding using information theoretic measures of correlation. We distinguish the sources of correlation that lead to synergy and redundancy and define bounds on those quantities. We also discuss the consequences of assuming independence for neurons that are actually correlated. Many of the quantities we define have been published previously (Gawne and Richmond, 1993; Gat and Tishby, 1999; Panzeri et al., 1999; Brenner et al., 2000; Chechik et al., 2002). Here, we bring them together, show their interrelations, and compare them to alternative definitions. In particular, Nirenberg et al. (2001, 2003) have proposed a measure of the amount of information lost when a decoder ignores noise correlations. We show that their interpretation of this quantity is incorrect, because it leads to contradictions, including that in some circumstances, the amount of information loss may be

greater than the amount of information that is present. We argue that their measure is related more closely to questions of decoding than encoding, and we discuss its interpretation.

Results

To understand the manner in which neurons represent information about the external world, it is important to distinguish the concepts of encoding and decoding. Figure 1 shows a schematic of encoding and decoding for a pair of neurons. Encoding is the conversion of stimuli into neural responses; this process is what we observe experimentally. Decoding is a procedure that uses the neural spike trains to estimate features of the original stimulus or make a behavioral decision. The experimentalist uses a chosen algorithm to either reconstruct stimulus features or to predict a motor or behavioral outcome. The goal is to understand how information encoded by neurons can be explicitly recovered by downstream neurons and what decisions the animal might make based on these neural responses.

Figure 1. A diagram of neural encoding and decoding. A pair of neurons, 1 and 2, encodes information about a stimulus, s(t), with spike trains, r1(t) and r2(t). This may be described by the conditional probability distribution of the responses given the stimulus, p(r1, r2|s). Decoding is the process of trying to extract this information explicitly, which may be done by other neurons or by the experimentalist. This process is described by a function, F, that acts on r1 and r2 and gives an estimated version of the stimulus.

Neural encoding

In general, neural responses are noisy, meaning that repeated presentations of the same stimulus give rise to different responses (Verveen and Derksen, 1968; Mainen and Sejnowski, 1995; Bair and Koch, 1996). Although the observed noise often has a component caused by incomplete control of experimental variables, all neural systems exhibit sources of noise that operate even under ideal experimental conditions. Thus, the relationship between a stimulus and the resulting neural response must be described by a probabilistic dictionary (for review, see Rieke et al., 1997). In particular, for every possible stimulus s, there is a probability distribution over the possible responses r given that stimulus, namely p(r|s). Questions of neural encoding involve what response variables represent information about the stimulus, what features of the stimulus are represented, and specifically how much one can learn about the stimulus from the neural responses. Given the distribution of stimuli in the environment, p(s), the encoding dictionary p(r|s) contains the answers to these questions.

Because the encoding dictionary is a complex object, it has often been useful to summarize its properties with a small number of functions, such as the spike-triggered average stimulus or the firing rate as a function of stimulus parameters. An especially appealing measure is the mutual information between the stimuli and the responses (Shannon and Weaver, 1949; Cover and Thomas, 1991):

$I(S;R) = \sum_{s \in S} \sum_{r \in R} p(s,r)\,\log_2 \frac{p(s,r)}{p(s)\,p(r)}$ bits, (1)

where S denotes the set of stimuli {s} and R denotes the set of responses {r}. The mutual information measures how tightly neural responses correspond to stimuli and gives an upper bound on the number of stimulus patterns that can be discriminated by observing the neural responses. Its value ranges from zero to either the entropy of the stimuli or the entropy of the responses, whichever is smaller. The mutual information is zero when there is no correlation between stimuli and responses. The information equals the entropy of the stimulus when each possible stimulus generates a uniquely identifiable response, and it equals the entropy of the responses when there is no noise (Shannon and Weaver, 1949). Many authors have studied single-neuron encoding using information theory (Mackay and McCulloch, 1952; Fitzhugh, 1957; Eckhorn and Popel, 1974; Abeles and Lass, 1975; Optican and Richmond, 1987; Bialek et al., 1991; Strong et al., 1998).
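In the discrete case, the sum in Equation 1 can be evaluated directly from a tabulated joint distribution. A minimal numpy sketch (the function name and toy distributions are ours, for illustration only):

```python
import numpy as np

def mutual_information(p_joint):
    """I(S;R) in bits from a joint distribution p_joint[s, r] (Eq. 1)."""
    p_joint = np.asarray(p_joint, dtype=float)
    p_s = p_joint.sum(axis=1, keepdims=True)   # stimulus marginal p(s)
    p_r = p_joint.sum(axis=0, keepdims=True)   # response marginal p(r)
    m = p_joint > 0                            # skip zero-probability terms
    return float((p_joint[m] * np.log2(p_joint[m] / (p_s @ p_r)[m])).sum())

# Noiseless one-to-one code: two equally likely stimuli, two responses.
p_perfect = np.array([[0.5, 0.0],
                      [0.0, 0.5]])
# The response carries no stimulus information.
p_flat = np.array([[0.25, 0.25],
                   [0.25, 0.25]])

print(mutual_information(p_perfect))  # 1.0 bit: equals the stimulus entropy
print(mutual_information(p_flat))     # 0.0 bits: stimulus and response independent
```

The two limiting cases illustrate the range quoted in the text: the information equals the stimulus entropy for a noiseless one-to-one code and vanishes when stimulus and response are uncorrelated.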
Mutual information is appealing for several reasons. First, it is a very general measure of correlation between stimulus and response and can be thought of as including contributions from all other measures of correlation. Second, it does not make assumptions about what features of the stimuli or responses are relevant, which makes information theory uniquely well suited to the analysis of neural responses to complex, naturalistic stimuli (Lewen et al., 2001). Third, as signals flow through the nervous system, information can be lost but never gained, a property known as the data processing inequality (Cover and Thomas, 1991). Finally, mutual information is the unique functional of the encoding dictionary that obeys simple, plausible constraints, such as additivity of information for truly independent signals (Shannon and Weaver, 1949). For these reasons, we focus here on an information theoretic characterization of population encoding.

Spike train entropies and mutual information are notoriously difficult to estimate from limited experimental data. Although this is an important technical difficulty, there are many cases in which the mutual information has been estimated for real neurons responding to complex, dynamic inputs, with detailed corrections for sampling bias (Strong et al., 1998; Berry and Meister, 1998; Buracas et al., 1998; Reich et al., 2000; Reinagel and Reid, 2000). Many authors have explored strategies for estimating spike train entropies (Treves and Panzeri, 1995; Strong et al., 1998; Victor, 2002; Nemenman et al., 2003; Paninski, 2003), and there is continuing interest in finding improved strategies. We emphasize that these technical difficulties can and should be separated from the conceptual questions involving which information theoretic quantities are interesting to calculate and what they mean.

Encoding versus decoding

While the concept of encoding is relatively straightforward for neurons, decoding is more subtle. Many authors think implicitly or explicitly about an intermediate step in decoding, namely the

formation of the conditional stimulus distribution, p(s|r), using Bayes' rule:

$p(s|r) = \frac{p(r|s)\,p(s)}{p(r)}.$ (2)

This probability distribution describes how one's knowledge of the stimulus changes when a particular neural response is observed; this distribution contains all of the encoded information (de Ruyter van Steveninck and Bialek, 1988). Some even call this intermediate step decoding (Dayan and Abbott, 2001). Although this distinction might be viewed as semantic, we note that the action of a stimulus-response pathway in an organism results in an actual motor output, not a distribution of possible outputs. Thus, the decision-making process that produces a single output is different from forming p(s|r) and is necessary to use the information encoded by neural spike trains. Furthermore, there are some methods of stimulus estimation, such as linear decoding, that do not make explicit reference to p(s|r) (Bialek et al., 1991), so this intermediate step is not always required. For these reasons, we prefer to think of decoding as the process that actually estimates the stimulus and the formation of the conditional stimulus distribution, where relevant, as the raw material on which many decoding algorithms act. As such, we refer to this distribution as a decoding dictionary.

In the case of encoding, there is a single response distribution to be measured, p(r|s), and the mutual information between stimulus and response implied by this distribution provides a powerful characterization of the encoding properties of these neural responses. However, in the case of decoding, there are many possible algorithms that can be used on the same neural responses. Often, one talks about an optimal decoder, meaning that one chooses a class of possible decoding algorithms and adjusts the specific parameters of that algorithm for the best results. This raises the question of what makes one decoder better than another.
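For discrete stimuli and responses, the decoding dictionary of Equation 2 is simple to tabulate. A sketch (function name and encoder numbers are illustrative assumptions, not from the text):

```python
import numpy as np

def decoding_dictionary(p_r_given_s, p_s):
    """p(s|r) for every response r, via Bayes' rule (Eq. 2).

    p_r_given_s[s, r] is the encoding dictionary; p_s[s] is the stimulus prior.
    Returns post[r, s], one posterior over stimuli per observed response."""
    p_sr = np.asarray(p_r_given_s) * np.asarray(p_s)[:, None]  # joint p(s, r)
    p_r = p_sr.sum(axis=0)                                     # response marginal
    return (p_sr / p_r[None, :]).T

# Two stimuli, two responses; a noisy but informative encoder (numbers illustrative).
p_r_given_s = np.array([[0.9, 0.1],
                        [0.2, 0.8]])
post = decoding_dictionary(p_r_given_s, [0.5, 0.5])
print(post)  # each row is a posterior over stimuli and sums to 1
```

Each row of `post` is the raw material on which a decoding algorithm could act, e.g., by reporting the maximum a posteriori stimulus.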
One obvious figure of merit is the information that the estimated stimulus conveys about the original stimulus, I(S; S_est). Intuitively, the best decoder is the one that captures the most of the encoded information. Furthermore, the data processing inequality implies that I(S; S_est) ≤ I(S; R), so that there is an absolute standard against which to make this comparison. Unfortunately, mutual information alone is an insufficient measure with which to evaluate the success of a decoder. Mutual information only measures the correspondence between the original and estimated stimulus, not whether the estimated stimulus equals or approximates the original stimulus. This fact is shown in Figure 2 by an example of a perfectly scrambled decoder. This decoder achieves a one-to-one mapping between the estimated and original stimuli but always makes the wrong estimate. Such a decoder retains all of the information about the stimulus but is obviously doing a bad job.

Figure 2. Schematic of a scrambled decoding process. Six stimuli, {s}, are encoded by neural responses and mapped by a decoder onto six estimated stimuli, {s_est}. This mapping is one-to-one, so it preserves all the information in the stimulus. However, the estimates are scrambled, so that this decoder never gives the correct answer.

For an organism to appropriately act on the information encoded by neural spike trains, it must actually make the correct estimate. Thus, decoders fundamentally must be evaluated with respect to an error measure, E(s, s_est), that describes the penalty for differences between the estimated and original stimuli. Importantly, there is no universal measure of whether an error is large or small. For instance, a particular error in estimating the location of a tree branch may be fatal if you are a monkey trying to jump from one branch to the next but acceptable when trying to reach for a piece of fruit. Errors may also be strongly asymmetric: failing to notice the presence of a predator may result in death, whereas unnecessarily executing an escape response only wastes finite resources.
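The scrambled-decoder argument can be checked numerically. The sketch below (a deterministic toy encoder of our own choosing) cyclically permutes the stimulus labels: the empirical mutual information stays at the full stimulus entropy, yet the estimate is never correct:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim = 6
s = rng.integers(0, n_stim, size=10_000)   # stimulus sequence, uniform over 6 stimuli
r = s                                      # noiseless encoding, for illustration
s_est = (r + 1) % n_stim                   # "scrambled" decoder: one-to-one but always wrong

def empirical_mi(x, y, n):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint = np.zeros((n, n))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    m = joint > 0
    return float((joint[m] * np.log2(joint[m] / (px @ py)[m])).sum())

print(empirical_mi(s, s_est, n_stim))  # close to log2(6): all information retained
print(np.mean(s == s_est))             # 0.0: the estimate is never correct
```

This makes the text's point concrete: under a zero-one error measure this decoder is as bad as possible, even though I(S; S_est) is maximal.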
Thus, any notion of a natural measure of the error stems from the objective that the decoder is trying to achieve. Because there is no correct error measure against which to judge the success of a decoder, statements about decoding cannot be put on the same level of generality as statements about encoding. Information theory can still play a role in characterizing decoding, but only in conjunction with an error measure.

Population encoding

Many questions about the nature of encoding by a population of neurons are extensions of the questions dealing with a single neuron. Instead of studying the single-cell response distribution, we need to use the set of responses of N neurons, given by p(r|s), where r = {r1, r2, ..., rN}. Similarly, using the joint probability distribution, p(s, r), we can calculate the mutual information between the set of responses and the stimulus. For two cells:

$I(S;R_1,R_2) = \sum_{s,r_1,r_2} p(s,r_1,r_2)\,\log_2 \frac{p(s,r_1,r_2)}{p(s)\,p(r_1,r_2)}.$ (3)

The main additional issue for neural encoding by a population of cells is the correlations among these cells and how these correlations relate to the stimulus. To understand how a population code differs from the code of its constituent neurons, we must identify appropriate measures of correlation and independence and quantify their relation to the stimulus. In many ways, the question of how responses of multiple neurons can be combined to provide information about the stimulus is related to the question of how successive responses (spikes, bursts, etc.) of a single neuron can be combined to provide information about a stimulus that varies in time (see, for example, Brenner et al., 2000).

Three kinds of independence

Independence and correlation are complementary concepts: independence is the lack of correlation. The statistics community has long noted the distinction between independence and conditional independence and its implications (Dawid, 1979). This distinction has been applied to neuroscience in the classic work of Perkel et al. (1967).
Following their example, it has been common to use cross-correlations as a measure of these dependencies (Palm et al., 1988). In the case of the neural code, we are interested primarily in the relation between stimuli and responses, which is itself another form of correlation. Thus, for neural codes, there are three kinds of independence. This diversity is the result of the fact that different sources of correlation have different impacts on

the manner in which neural activity encodes information about a stimulus (Gawne and Richmond, 1993; Gat and Tishby, 1999; Panzeri et al., 1999; Brenner et al., 2000; Chechik et al., 2002). These notions are distinct in the sense that if a pair of neurons possesses one form of independence, it does not necessarily possess the others. Here, we present definitions of the three kinds of independence, along with corresponding information theoretic measures of correlation, which quantify how close the neurons are to being independent.

Activity independence. The most basic notion of correlation is that the activity of one cell, r1, depends on the activity of another cell, r2, when averaged over the ensemble of stimuli. This notion of correlated activity is assessed by looking at the joint distribution of the responses of a cell pair, p(r1, r2). This joint distribution can be found from the simultaneously recorded responses by summing over stimuli:

$p(r_1,r_2) = \sum_{s} p(r_1,r_2|s)\,p(s).$ (4)

If there is no correlated activity between the pair of cells, then this distribution factors:

$p(r_1,r_2) = p(r_1)\,p(r_2).$ (5)

The natural measure of the degree of correlation between the activity of two neurons is the information that the activity of one cell conveys about the other:

$I(R_1;R_2) = \sum_{r_1,r_2} p(r_1,r_2)\,\log_2 \frac{p(r_1,r_2)}{p(r_1)\,p(r_2)}$ bits. (6)

If the activity of the cells is independent, then I(R1; R2) = 0. Because the information is bounded from above by the entropy of the responses of each cell, it is possible to use a normalized measure, I(R1; R2)/min[H(R1), H(R2)], where H(Ri) is the entropy of the responses of cell i. This normalized measure ranges between 0 and 1. The value of I(R1; R2) implicitly depends on the stimulus ensemble S, as can be seen from Equation 4. For simplicity, we leave this dependence out of our notation, but one should keep in mind that activity independence is a property of both a population of neurons and an ensemble of stimuli.
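Equations 4-6 translate directly into a few lines of code. A sketch (the function name and the toy firing probabilities are our assumptions): two cells that respond independently given the stimulus still show correlated activity once the stimulus is averaged out, because both are driven by the same stimulation.

```python
import numpy as np

def activity_information(p_cond, p_s):
    """I(R1; R2) in bits: marginalize p_cond[s][r1, r2] over stimuli (Eq. 4),
    then compare the joint to the product of its marginals (Eqs. 5-6)."""
    p = sum(ps * np.asarray(pc, dtype=float) for ps, pc in zip(p_s, p_cond))
    p1, p2 = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (p1 @ p2)[m])).sum())

# Toy pair driven by shared stimulation: each cell spikes with probability 0.8
# for stimulus 0 and 0.1 for stimulus 1, independently given the stimulus.
p_cond = [np.outer([0.8, 0.2], [0.8, 0.2]),
          np.outer([0.1, 0.9], [0.1, 0.9])]
print(activity_information(p_cond, [0.5, 0.5]))  # > 0: correlated activity
```

The positive value here comes entirely from shared stimulation, anticipating the signal/noise distinction drawn below.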
One could ask, perhaps more abstractly, for a measure of similarity between the distributions p(r1, r2) and p(r1) p(r2) and then interpret this measure as a degree of (non)independence. There are even other, common information theoretic measures, such as the Kullback-Leibler (KL) divergence (Cover and Thomas, 1991) or the Jensen-Shannon divergence (Lin, 1991). It is important to note that all such similarity measures are answers to specific questions and, as such, cannot necessarily be used interchangeably. For instance, the Jensen-Shannon divergence measures how reliably one can decide if a given response comes from the joint distribution, p(r1, r2), or the product distribution, p(r1) p(r2), given that these are the only alternatives. It has a maximal value of 1 bit, when the two distributions are perfectly distinguishable. In contrast, the mutual information has a maximal value equal to the spike train entropy, when the two responses are identical. In this case, the KL divergence between p(r1, r2) and p(r1) p(r2) is, in fact, identical to the mutual information between R1 and R2. This holds because the mutual information is a special type of KL divergence, one that is taken between two particular probability distributions. However, the converse is not true: the KL divergence between two arbitrary distributions is not necessarily a mutual information. Therefore, the specific questions answered by the KL divergence are, in general, different from those answered by the mutual information (see below for a discussion of the interpretation of the KL divergence).

The mutual information I(R1; R2) measures directly how much (in bits) the response of one cell predicts about the response of the other. We will see that this mutual predictability contributes to redundancy in what the cells can tell us about their stimulus. In addition to being an appealing and general measure of correlation, we will see below that this choice of information measure results in an interrelated framework for the three different kinds of independence.
Conditional independence. Correlated activity between two neurons can arise either from shared stimulation, such as from correlations in their stimuli or overlap in their receptive fields, or from shared sources of noise, such as a presynaptic neuron that projects to both neurons or a common source of neuromodulation. In the former case, the correlations between neurons can be explained from knowledge of how each neuron alone responds to the stimulus, whereas in the latter case they cannot. Therefore, an important distinction is whether the correlations are solely attributable to the stimulus ("signal" correlations) or not ("noise" correlations). Although this nomenclature is widely used, one should keep in mind that noise correlations are not always detrimental to the neural code.

The strength of noise correlations can be assessed by looking at the joint distribution of neural activity conditioned on the stimulus, p(r1, r2|s). If two neurons respond independently to the stimulus, they are called conditionally independent, and the distribution of responses factors for all s:

$p(r_1,r_2|s) = p(r_1|s)\,p(r_2|s).$ (7)

As in the case of activity independence, a natural measure of conditional independence is the mutual information between cells given the stimulus:

$I(R_1;R_2|s) = \sum_{r_1,r_2} p(r_1,r_2|s)\,\log_2 \frac{p(r_1,r_2|s)}{p(r_1|s)\,p(r_2|s)}.$ (8)

By measuring the dependence between neurons for each stimulus, this quantity ignores all correlations that arise from shared stimulation and, thus, equals zero only if there are no noise-induced correlations. A normalized measure is I(R1; R2|s)/min[H(R1|s), H(R2|s)], which ranges between 0 and 1. For many purposes, it is useful to compute the average over stimuli, ⟨I(R1; R2|s)⟩_s. The distinction between signal and noise correlations relates directly to an important distinction in experimental technique: noise correlations can only be measured by recording simultaneously from a pair of neurons.
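The stimulus average of Equation 8 is easy to compute once the conditional response distributions are tabulated. A minimal sketch (helper names and toy distributions are ours): the measure is zero for a conditionally independent pair but detects a pair whose cells always agree for one of the stimuli.

```python
import numpy as np

def mi_2d(p):
    """Mutual information (bits) between the two axes of a joint distribution."""
    p = np.asarray(p, dtype=float)
    p1, p2 = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (p1 @ p2)[m])).sum())

def avg_conditional_mi(p_cond, p_s):
    """Stimulus average of Eq. 8, <I(R1; R2 | s)>_s, from p_cond[s][r1, r2]."""
    return sum(ps * mi_2d(pc) for ps, pc in zip(p_s, p_cond))

# Conditionally independent pair: Eq. 7 holds for each stimulus.
indep = [np.outer([0.5, 0.5], [0.5, 0.5]),
         np.outer([0.9, 0.1], [0.9, 0.1])]
# Noise-correlated pair: given stimulus 0, the two cells always agree.
corr = [np.array([[0.5, 0.0], [0.0, 0.5]]),
        np.outer([0.9, 0.1], [0.9, 0.1])]

print(avg_conditional_mi(indep, [0.5, 0.5]))  # 0.0
print(avg_conditional_mi(corr, [0.5, 0.5]))   # 0.5 bits: noise correlation detected
```

Because the comparison is made separately for each stimulus, shared stimulation contributes nothing, exactly as the text states.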
A simple technique for demonstrating the existence of noise correlations is the shuffle test or shift predictor (Perkel et al., 1967; Palm et al., 1988), where the cross-correlations between simultaneously recorded pairs of neurons are compared on the same stimulus trial versus different stimulus trials. Of course, as a practical matter, it is preferable to measure even signal correlations simultaneously and from the same preparation, because of nonstationarities in neural responses.

Although the shuffle-corrected cross-correlation function may seem intuitive and straightforward, it actually suffers from ambiguities in how to normalize and interpret its values. The apparent strength of cross-correlation between two neurons depends on the auto-correlation function of each neuron, so that observed changes in cross-correlation contain this potential confound (Brody, 1999). Also, the cross-correlation function can be expressed in different units: firing rate of one cell relative to the other, fraction of total spikes within a time window, etc. There are subtle differences between these choices of units (such as whether the measure is symmetric) that make their interpretation tricky. In contrast, the quantity ⟨I(R1; R2|s)⟩_s provides a characterization of noise correlations that resolves these ambiguities, has a clear-cut interpretation, and is sensitive to forms of correlation not captured by the shuffle-corrected correlogram (e.g., if the response of one neuron is more precise when the other neuron is active).

Pairs of neurons that are conditionally independent are not necessarily activity independent, because shared stimulation may still induce correlations in their responses when averaged over the entire stimulus ensemble. For a simple example, consider two binary neurons that produce either a spike or no spike in response to two, equally likely stimuli. They each respond to the first stimulus with a 50% probability of spiking, but neither fires in response to the second stimulus. These neurons possess conditional independence, because their joint response distribution factors for each stimulus, but not activity independence, because if one cell stays silent, the other is more likely to stay silent. Conversely, pairs of neurons that are activity independent are not necessarily conditionally independent, because noise correlations may increase the probability that neurons fire together for some stimuli and decrease it for others, such that those contributions roughly cancel when averaged over the stimulus ensemble.
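The first of these examples can be checked numerically. A sketch (helper name ours; the distributions are exactly those described in the text, with index 0 meaning "silent"):

```python
import numpy as np

def mi_2d(p):
    """Mutual information (bits) between the two axes of a joint distribution."""
    p = np.asarray(p, dtype=float)
    p1, p2 = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (p1 @ p2)[m])).sum())

p_s = [0.5, 0.5]
# p(r1, r2 | s): stimulus 0 -> each cell spikes independently with probability 0.5;
# stimulus 1 -> both cells stay silent.
p_cond = [np.outer([0.5, 0.5], [0.5, 0.5]),
          np.outer([1.0, 0.0], [1.0, 0.0])]

noise = sum(ps * mi_2d(pc) for ps, pc in zip(p_s, p_cond))  # <I(R1; R2 | s)>_s
p_marg = sum(ps * pc for ps, pc in zip(p_s, p_cond))        # Eq. 4
activity = mi_2d(p_marg)                                    # I(R1; R2), Eq. 6

print(noise)     # 0.0: the joint response factors for every stimulus (Eq. 7)
print(activity)  # ~0.074 bits > 0: silence in one cell predicts silence in the other
```

The pair is conditionally independent yet not activity independent, exactly as claimed.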
For an example of this case, consider an extreme instance of stimulus-dependent correlations: binary neurons such that for the first stimulus either both fire or both remain silent with equal probability, but for the second stimulus, either one fires a spike and the other remains silent, or vice versa, with equal probability. Here, the neurons are positively correlated for the first stimulus and negatively correlated for the second. They are clearly not conditionally independent, but because the positive and negative correlations occur with equal strength, they are activity independent. Notice that if the two stimuli occur with unequal probability, then the cell pair is no longer activity independent. As these examples demonstrate, activity independence and conditional independence are distinct measures of correlation between neurons.

Information independence. A final notion of correlation relates to the information encoded by a cell pair. Intuitively, if the cells are sensitive to completely different features of the stimulus, then the information they convey together should just be the sum of what they convey separately:

$I(S;R_1,R_2) = I(S;R_1) + I(S;R_2).$ (9)

Cell pairs that do not encode information independently can be either synergistic, meaning that they convey more information in their joint responses than the sum of their individual information, or redundant, meaning that they jointly convey less. Thus, the obvious measure of information independence is the synergy (Gawne and Richmond, 1993; Gat and Tishby, 1999; Panzeri et al., 1999; Brenner et al., 2000):

$\mathrm{Syn}(R_1,R_2) = I(S;R_1,R_2) - I(S;R_1) - I(S;R_2).$ (10)

Negative values of this quantity indicate redundancy. A normalized version of the synergy is given by Syn(R1, R2)/I(S; R1, R2), which ranges between -1, when the responses of the two neurons are related by a one-to-one mapping, and +1, when the cell pair only conveys information by its joint responses and there is zero information contained in the responses of each individual cell. It is important to note that synergy, as defined here, is a property that is averaged over the stimulus ensemble. Cell pairs can be synergistic for some subsets of the stimuli, redundant during others, and independent for yet other stimuli. Hence, when cells are found to be information independent, this may result from averaging over synergistic and redundant periods rather than from independence at all times.

An alternative way to write the synergy is as the difference between the mutual information between the cells given the stimulus and the information that they share that is not explicitly related to the stimulus (Brenner et al., 2000):

$\mathrm{Syn}(R_1,R_2) = \langle I(R_1;R_2|s)\rangle_s - I(R_1;R_2),$ (11)

which is a combination of the measures of conditional and activity independence (see Eqs. 6 and 8). If a pair of neurons possesses both activity and conditional independence, then there is no synergy or redundancy. However, information independence may hold without activity independence and conditional independence, when these two terms cancel.

Figure 3. Graphical presentation of synergy as a combination of other measures of independence. A, Following Equation 11, we can represent the synergy or redundancy of a pair of cells as a point in a plane with the axes ⟨I(R1; R2|s)⟩_s and I(R1; R2). Because both of these measures are non-negative, only the top right quadrant is allowed. Neurons that possess activity independence lie at points along the abscissa. Neurons that possess conditional independence lie at points along the ordinate. Information independence corresponds to the diagonal that separates the synergistic values from the redundant ones. B, Similarly, following Equation 16, we can also express the synergy as a point in a plane with the axes ΔI_noise and ΔI_signal. Because ΔI_signal is non-negative, only the top half plane is allowed.
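Both forms of the synergy (Eqs. 10 and 11) are easy to compute for a tabulated toy pair. A sketch (helper names ours) using the stimulus-dependent-correlation example above, which turns out to be purely synergistic:

```python
import numpy as np

def mi_2d(p):
    """Mutual information (bits) between the two axes of a joint distribution."""
    p = np.asarray(p, dtype=float)
    p1, p2 = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (p1 @ p2)[m])).sum())

p_s = np.array([0.5, 0.5])
# p(r1, r2 | s), indices [s, r1, r2]: stimulus 0 -> both fire or both stay silent;
# stimulus 1 -> exactly one fires, each pattern with probability 0.5.
p = np.array([[[0.5, 0.0], [0.0, 0.5]],
              [[0.0, 0.5], [0.5, 0.0]]])
p1, p2 = p.sum(axis=2), p.sum(axis=1)                      # p(r1|s), p(r2|s)

# Eq. 10: joint information minus the single-cell informations.
syn_eq10 = (mi_2d(p_s[:, None] * p.reshape(2, 4))          # I(S; R1, R2)
            - mi_2d(p_s[:, None] * p1)                     # I(S; R1)
            - mi_2d(p_s[:, None] * p2))                    # I(S; R2)

# Eq. 11: stimulus-averaged noise correlation minus activity correlation.
noise = sum(ps * mi_2d(pc) for ps, pc in zip(p_s, p))      # <I(R1; R2 | s)>_s
activity = mi_2d(np.tensordot(p_s, p, axes=1))             # I(R1; R2)
syn_eq11 = noise - activity

print(syn_eq10, syn_eq11)  # both 1.0 bit: each cell alone carries zero information
```

Here each single cell is uninformative, yet the joint response identifies the stimulus perfectly; the two expressions for the synergy agree, as Equation 11 requires.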
Thus, the three measures of independence and correlation are interconnected, giving a structured framework for the quantification of correlation and independence. Figure 3A shows a graphical presentation of synergy as a combination of the two other independence measures, reflecting that two dimensions are needed to describe the nature of neural (in)dependence. Because each term in Equation 11 is non-negative, the first

term contributes only synergy and the second only redundancy. By writing the synergy in this form, one can readily see that ⟨I(R1; R2|s)⟩_s is an upper bound on the synergy. Because this term is non-negative for all stimuli, there can be no cancellation in its value when the cell pair is synergistic for some stimuli and redundant for others. Similarly, I(R1; R2) is a bound on the redundancy of a pair of neurons.

Assuming conditional independence

Sampling the distribution of joint responses of pairs or groups of cells requires, in general, exponentially more data than the single-cell case. Hence, the characterization of neural population activity is often severely constrained by experimental limitations. Because it is easier to sample the responses of individual cells, even when neurons can be recorded simultaneously, one may try to approximate the joint distribution by assuming that the cell pair is conditionally independent. Furthermore, when using recordings from different trials (Georgopoulos et al., 1986), or even different animals (Chechik et al., 2002), one must make this assumption. When ignoring the fact that the pair of cells was recorded simultaneously or when combining the nonsimultaneous recordings of cells presented with the exact same stimulus, a customary guess for the joint response distribution is given by:

$p_{\mathrm{shuffle}}(r_1,r_2|s) = p(r_1|s)\,p(r_2|s).$ (12)

We use the notation "shuffle," because this is the joint response distribution that would result from compiling the responses of simultaneously recorded cells from different, or shuffled, stimulus trials (similar to the "shift predictor") (Perkel et al., 1967; Palm et al., 1988). Notice also that this assumption implies that the strength of noise correlations measured by Equation 8 is zero. The information that the shuffled cell responses convey about the stimulus is given by:

$I_{\mathrm{shuffle}}(S;R_1,R_2) = \sum_{s,r_1,r_2} p(s)\,p(r_1|s)\,p(r_2|s)\,\log_2 \frac{p(r_1|s)\,p(r_2|s)}{\sum_{s'} p(r_1|s')\,p(r_2|s')\,p(s')}.$ (13)

The difference between the information conveyed by a cell pair in the real case and I_shuffle,

$\Delta I_{\mathrm{noise}} = I(S;R_1,R_2) - I_{\mathrm{shuffle}}(S;R_1,R_2),$ (14)

measures the contribution of noise-induced correlations to the encoded information. This value may be either positive or negative, depending on whether those correlations lead to synergy or redundancy (for specific examples, see Fig. 5). Furthermore, the difference between the sum of the information that each of the cells individually conveys about the stimulus and I_shuffle,

$\Delta I_{\mathrm{signal}} = I(S;R_1) + I(S;R_2) - I_{\mathrm{shuffle}}(S;R_1,R_2),$ (15)

measures the effect of signal-induced correlations on the encoded information. This value is non-negative (see Appendix A), because signal correlations indicate that the two cells are, in part, encoding identical information and, thus, imply redundancy. The difference between these two terms gives the synergy of the two cells:

$\mathrm{Syn}(R_1,R_2) = \Delta I_{\mathrm{noise}} - \Delta I_{\mathrm{signal}}.$ (16)

When neurons are not recorded simultaneously, one typically assumes that ΔI_noise = 0. With this assumption and the fact that ΔI_signal is non-negative, the only possible result is apparent net redundancy. This is reflected in Figure 3B, which gives a graphical presentation of the signal and noise components as the two dimensions that span the synergy. We emphasize that although ΔI_signal and ΔI_noise quantify the influence of signal and noise correlations, unlike the quantities defined previously, these are not mutual information measures.

Population encoding for three or more neurons

In the preceding sections, we focused on the case of two neurons. The basic distinctions we made between activity and conditional independence, as well as their connection to the distinction between signal and noise correlations, will hold for the case of three or more neurons. One should note, however, that correlations among n neurons can be assessed in more than one way.
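Before turning to larger populations, the two-cell shuffle decomposition of Equations 12-16 can be sketched numerically (helper names and the stimulus-dependent-correlation toy pair are our illustrative assumptions):

```python
import numpy as np

def mi_2d(p):
    """Mutual information (bits) between the two axes of a joint distribution."""
    p = np.asarray(p, dtype=float)
    p1, p2 = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (p1 @ p2)[m])).sum())

p_s = np.array([0.5, 0.5])
# Illustrative p(r1, r2 | s), indices [s, r1, r2]:
# stimulus 0 -> the cells agree; stimulus 1 -> they disagree.
p = np.array([[[0.5, 0.0], [0.0, 0.5]],
              [[0.0, 0.5], [0.5, 0.0]]])
p1, p2 = p.sum(axis=2), p.sum(axis=1)            # p(r1|s), p(r2|s)
p_shuffle = p1[:, :, None] * p2[:, None, :]      # Eq. 12

I_joint   = mi_2d(p_s[:, None] * p.reshape(2, 4))          # treat (r1, r2) as one symbol
I_shuffle = mi_2d(p_s[:, None] * p_shuffle.reshape(2, 4))  # Eq. 13
I_1, I_2  = mi_2d(p_s[:, None] * p1), mi_2d(p_s[:, None] * p2)

dI_noise  = I_joint - I_shuffle        # Eq. 14: can be positive or negative
dI_signal = I_1 + I_2 - I_shuffle      # Eq. 15: always non-negative
synergy   = dI_noise - dI_signal       # Eq. 16
print(dI_noise, dI_signal, synergy)    # here all the information is noise-correlation borne
```

For this pair, shuffling destroys all of the encoded information (I_shuffle = 0), so ΔI_noise carries the full bit of synergy and ΔI_signal vanishes; assuming ΔI_noise = 0, as nonsimultaneous recordings force one to do, would badly mischaracterize this code.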
For instance, one can compare the correlations among n neurons to the correlations observable among only n − 1 neurons (Martignon et al., 2000), or one can compare n-neuron correlations to n independent single cells (Chechik et al., 2002). For the case of two cells, these two comparisons are the same, but for three or more cells they differ (Schneidman et al., 2003).

Comparison to other measures

Approximate conditional stimulus distributions

In a recent study, Nirenberg et al. (2001) examined the importance of noise correlations for how information is encoded by pairs of ganglion cells in the retina. Noise correlations can be ignored explicitly by assuming that the joint response distribution for two neurons is given by Equation 7. Bayes' rule can be used to find the stimulus distribution conditioned on the neural responses for that case:

p_shuffle(s|r1, r2) = p(r1|s) p(r2|s) p(s) / Σ_{s′} p(r1|s′) p(r2|s′) p(s′).  (17)

Nirenberg et al. (2001) denoted this quantity by p_ind(s|r1, r2), but we use p_shuffle to avoid confusion between different kinds of independence. They suggested using the KL divergence between the true decoding dictionary, p(s|r1, r2), and the approximate dictionary, p_shuffle(s|r1, r2), to quantify the amount of information that is lost by using a decoder that assumes conditional independence. Averaged over the real, correlated responses, r1 and r2, one obtains:

D̂ = Σ_{r1, r2} p(r1, r2) D_KL[ p(s|r1, r2) || p_shuffle(s|r1, r2) ].  (18)

This measure does not refer to any specific algorithm for estimating the stimulus or to errors made by such an algorithm but, instead, is meant to be a general characterization of the ability of any decoder to make discriminations about the stimulus if knowledge of the noise correlations is ignored. Nirenberg et al. (2001) argued that it is appropriate to consider an approximate decoding dictionary combined with the real spike trains, because the brain always automatically has access to the real, correlated spike trains but may make simplifying assumptions about how to decode the information that those spike trains contain. They state that D̂ measures the loss in information that results from ignoring correlations in the process of decoding and, thus, refer to this measure as ΔI. Nirenberg and Latham (2003) make a connection between the KL divergence and the encoded information by using an argument about the number of yes/no questions one must ask to specify the stimulus (see below). Although this argument may initially seem reasonable, closer consideration reveals that it is flawed. This can be demonstrated by the direct contradiction that results from assuming that the KL divergence measures an information loss, as well as by the contradictory implications of this argument. In particular, there are situations in which this putative information loss can be greater than the amount of information present. Furthermore, interpreting the measure D̂ as a general test of the importance of noise correlations for encoding information about a stimulus is problematic because of the highly counterintuitive results that one finds when applying the measure to toy models.

Contradiction. The central claim made by Nirenberg et al. (2001) is that D̂ measures the amount of information about the stimulus that is lost when one ignores noise correlations. If this were true, then the information that such a decoder can capture would be given by:

I_no-noise(S; R1, R2) = I(S; R1, R2) − D̂ = Σ_{s, r1, r2} p(s, r1, r2) log2 [ p_shuffle(s|r1, r2) / p(s) ].  (19)

This expression for I_no-noise is unusual. It does not obviously have the form of a mutual information, as is evident from the fact that the probability distribution inside the logarithm is not the same as the one multiplying the logarithm. That Equation 19 is not a mutual information can be demonstrated by specific example. Figure 4 shows one such case for a pair of model neurons that can generate three different responses (0, 1, or 2 spikes) to each of three, equally likely stimuli.
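Equations 17–19 can be made concrete in a few lines of code. The distribution below is our own made-up example (all entries positive, so Bayes' rule is well defined for every response); the script computes D̂ and verifies the algebraic identity in Equation 19:

```python
import numpy as np

# Made-up example: p_cond[s, r1, r2] = p(r1, r2 | s); two equally likely stimuli.
p_cond = np.array([[[0.4, 0.2],
                    [0.1, 0.3]],
                   [[0.1, 0.3],
                    [0.5, 0.1]]])
p_s = np.array([0.5, 0.5])

joint = p_cond * p_s[:, None, None]            # p(s, r1, r2)
p_r = joint.sum(axis=0)                        # p(r1, r2)
p_s_given_r = joint / p_r                      # true decoding dictionary

# Eq. 17: dictionary that assumes conditional independence
p1 = p_cond.sum(axis=2)                        # p(r1 | s)
p2 = p_cond.sum(axis=1)                        # p(r2 | s)
num = p1[:, :, None] * p2[:, None, :] * p_s[:, None, None]
p_sh_given_r = num / num.sum(axis=0)

# Eq. 18: average KL between the two dictionaries over the real responses
kl_per_r = (p_s_given_r * np.log2(p_s_given_r / p_sh_given_r)).sum(axis=0)
D_hat = float((p_r * kl_per_r).sum())

# Eq. 19: I(S;R1,R2) - D_hat equals an expression that is not a mutual information
I_joint = float((joint * np.log2(joint / (p_s[:, None, None] * p_r))).sum())
rhs_19 = float((joint * np.log2(p_sh_given_r / p_s[:, None, None])).sum())
print(f"D_hat = {D_hat:.4f} bits, I - D_hat = {I_joint - D_hat:.4f} bits")
```

For this particular distribution D̂ stays below the encoded information; the point of Figure 4 in the text is that other distributions violate that inequality.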
The joint response distribution, p(r1, r2|s), is shown in Figure 4A. For this toy model, D̂ exceeds the total information encoded by both neurons, and, consequently, Equation 19 is negative. This example demonstrates that if one assumes that D̂ is an information loss, then one would sometimes lose more information than was present by ignoring noise correlations. Because the mutual information between the output of a decoder and the input stimulus cannot be negative, this is a clear contradiction. Therefore, D̂ is not an information loss.

Counterintuitive properties of D̂. Because D̂ is always positive, one might wonder whether it sets a useful upper bound on the importance of noise correlations. Again rewriting:

D̂ = Σ_{s, r1, r2} p(s) p(r1, r2|s) log2 [ p(r1, r2|s) / (p(r1|s) p(r2|s)) ] − Σ_{r1, r2} p(r1, r2) log2 [ p(r1, r2) / p_shuffle(r1, r2) ],  (20)

where:

p_shuffle(r1, r2) = Σ_s p(s) p(r1|s) p(r2|s).  (21)

Figure 4. D̂ can be larger than the information that the cells encode about the stimulus. A, The conditional joint response distribution, p(r1, r2|s), of two neurons responding to three stimuli. Each of the neurons responds with either zero, one, or two spikes. p(r1, r2) is the average of p(r1, r2|s) over the stimuli. The a priori probability of each of the stimuli equals 1/3. B, The conditional stimulus distribution for the cell pair, p(s|r1, r2), obtained using Bayes' rule. C, The conditional stimulus distribution that assumes no noise correlations, p_shuffle(s|r1, r2), obtained by inverting p(r1|s) p(r2|s) using Bayes' rule; see text for details. For this case, D̂ exceeds the information, I(R1, R2; S), that both cells carry about the stimulus.

We see that both terms are non-negative, because they both have the form of a KL divergence. The first term, in fact, is I(R1; R2|S), which is a measure of the strength of noise correlations and an upper bound on the synergy. Because the second term is non-negative, D̂ ≤ I(R1; R2|S). Therefore, D̂ does not constitute an upper bound on the importance of noise correlations, as is also demonstrated by specific examples in Figure 5.
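The decomposition in Equations 20 and 21 can be checked numerically against the direct definition of D̂ in Equation 18; the toy distribution below is ours, not from the paper:

```python
import numpy as np

def kl_bits(p, q):
    """D_KL(p || q) in bits over matching arrays."""
    m = p > 0
    return float((p[m] * np.log2(p[m] / q[m])).sum())

# Made-up conditional distribution p_cond[s, r1, r2]; equally likely stimuli.
p_cond = np.array([[[0.3, 0.2],
                    [0.1, 0.4]],
                   [[0.5, 0.2],
                    [0.2, 0.1]]])
p_s = np.array([0.5, 0.5])

p1 = p_cond.sum(axis=2)
p2 = p_cond.sum(axis=1)
p_sh_cond = p1[:, :, None] * p2[:, None, :]                  # Eq. 12
joint = p_cond * p_s[:, None, None]
p_r = joint.sum(axis=0)
p_sh_r = (p_sh_cond * p_s[:, None, None]).sum(axis=0)        # Eq. 21

# First term of Eq. 20: I(R1; R2 | S), the upper bound on D-hat
I_r1_r2_given_s = float(sum(p_s[s] * kl_bits(p_cond[s], p_sh_cond[s])
                            for s in range(len(p_s))))
# Second term of Eq. 20 is itself a KL divergence, hence non-negative
second_term = kl_bits(p_r, p_sh_r)
D_hat_eq20 = I_r1_r2_given_s - second_term

# Direct computation via Eqs. 17-18 for comparison
num = p_sh_cond * p_s[:, None, None]
p_sh_given_r = num / num.sum(axis=0)
p_s_given_r = joint / p_r
D_hat_direct = float((p_r * (p_s_given_r *
                     np.log2(p_s_given_r / p_sh_given_r)).sum(axis=0)).sum())
print(f"D_hat = {D_hat_eq20:.4f} bits <= I(R1;R2|S) = {I_r1_r2_given_s:.4f} bits")
```

The two computations agree, and the bound D̂ ≤ I(R1; R2|S) holds because the second term of Equation 20 is non-negative.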
Even so, perhaps D̂ constitutes a tighter upper bound on the synergy than I(R1; R2|S)? This turns out not to be the case, as shown below. In Figure 5, we imagine a simple situation in which a pair of neurons can generate only two responses, spike or no spike, and they are exposed to only two different, equally likely stimuli. In both of these examples, the neurons fire a spike with p = 0.5 for the first stimulus, but neither fires a spike for the second stimulus. As such, they are sparse in a manner similar to many real neurons. In example A, the responses to the first stimulus are perfectly anticorrelated, meaning that if one cell fires a spike, the other stays silent, and vice versa. Knowledge of this noise correlation resolves any ambiguity about the stimulus, such that the joint mutual information is one bit. Because each cell mostly remains silent in this stimulus ensemble, the individual mutual information of each cell is considerably lower (0.311 bits), and the synergy of the cell pair is large (0.377 bits).

Figure 6. Cells may be synergistic, but D̂ = 0. The conditional joint response distribution, p(r1, r2|s), of two neurons responding to three stimuli, each with probability 1/3. In this case, the cells are synergistic, but D̂ is zero: Syn(R1, R2) equals 26.2% of I(R1, R2; S); D̂ = 0 bits.

Figure 5. Examples of counterintuitive values of D̂. For both examples, there are two stimuli and two neural responses. The probability of each stimulus is 1/2. A, A conditional joint response distribution, p(r1, r2|s), that results in the synergy of the cells being larger than D̂: I(R1, R2; S) = 1 bit; Syn(R1, R2) = 0.377 bits; D̂ = 0.161 bits. B, Another conditional joint response distribution, p(r1, r2|s), that results in D̂ being larger than zero when the noise correlations contribute net redundancy: I(R1, R2; S) = 0.311 bits; Syn(R1, R2) = −0.311 bits; D̂ = 0.053 bits.

Using Bayes' rule to find the real conditional stimulus distribution and the one that ignores noise correlations, one finds that D̂ = 0.161 bits, or about 2.3 times smaller than the synergy. This is a strange result, because synergy can arise only from noise correlations. Thus, one naively expects that all of the synergy is lost when one ignores the noise correlations. Consistent with this expectation, the upper bound on the synergy, I(R1; R2|S), is 0.5 bits, and the information lost by using shuffled spike trains is ΔI_noise = 0.451 bits. In example B, the two cells have a complete positive correlation for the first stimulus, meaning that they either both fire a spike or both remain silent, each with p = 0.5. As before, they both remain silent for the second stimulus. In this case, the two neurons always have exactly the same response. As a result, the synergy equals −0.311 bits, which is a redundancy of 100%. However, they still have quite strong noise correlations, and D̂ = 0.053 bits, or 16.9% of the joint mutual information, which is virtually the same fraction as in example A.
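Because both response distributions in Figure 5 are fully specified in the text, the quoted values can be reproduced directly. The script below is ours; it implements Equations 17 and 18 together with the mutual information:

```python
import numpy as np

def info_bits(joint2d):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px = joint2d.sum(axis=1, keepdims=True)
    py = joint2d.sum(axis=0, keepdims=True)
    m = joint2d > 0
    return float((joint2d[m] * np.log2(joint2d[m] / (px @ py)[m])).sum())

def analyze(p_cond, p_s):
    """Return I(S;R1,R2), Syn(R1,R2), and D-hat (Eqs. 17-18) for p_cond[s,r1,r2]."""
    n_s = len(p_s)
    joint = p_cond * p_s[:, None, None]
    I_joint = info_bits(joint.reshape(n_s, -1))
    syn = I_joint - info_bits(joint.sum(axis=2)) - info_bits(joint.sum(axis=1))
    p1 = p_cond.sum(axis=2)
    p2 = p_cond.sum(axis=1)
    num = p1[:, :, None] * p2[:, None, :] * p_s[:, None, None]
    p_r = joint.sum(axis=0)
    D_hat = 0.0
    for r1 in range(p_cond.shape[1]):
        for r2 in range(p_cond.shape[2]):
            if p_r[r1, r2] == 0:
                continue  # D-hat averages only over responses that actually occur
            p_true = joint[:, r1, r2] / p_r[r1, r2]
            p_sh = num[:, r1, r2] / num[:, r1, r2].sum()
            m = p_true > 0
            D_hat += p_r[r1, r2] * float((p_true[m] *
                                          np.log2(p_true[m] / p_sh[m])).sum())
    return I_joint, syn, D_hat

p_s = np.array([0.5, 0.5])
# Example A: perfectly anticorrelated spiking for stimulus 1, silence for stimulus 2.
A = np.array([[[0.0, 0.5], [0.5, 0.0]],
              [[1.0, 0.0], [0.0, 0.0]]])
# Example B: fully correlated spiking for stimulus 1, silence for stimulus 2.
B = np.array([[[0.5, 0.0], [0.0, 0.5]],
              [[1.0, 0.0], [0.0, 0.0]]])

res = {name: analyze(P, p_s) for name, P in (("A", A), ("B", B))}
for name, (I, syn, D) in res.items():
    print(f"{name}: I = {I:.3f} bits, Syn = {syn:+.3f} bits, D_hat = {D:.3f} bits")
```

This reproduces the values quoted for Figure 5: example A gives I = 1 bit, Syn ≈ 0.377 bits, and D̂ ≈ 0.161 bits; example B gives I ≈ 0.311 bits, Syn ≈ −0.311 bits, and D̂ ≈ 0.053 bits.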
This comparison indicates that D̂ cannot distinguish between noise correlations that lead to redundancy and those that lead to synergy. In this example, shuffling the spike trains breaks the complete redundancy of the two cells and actually increases the encoded information. Correspondingly, ΔI_noise = −0.238 bits, or 76% of the joint mutual information in magnitude (the negative value implies that a shuffled set of responses would convey more information than the original spike trains). Figure 6 shows an example with three stimuli and neurons capable of three responses. Here, the neurons have anticorrelations that allow all three stimuli to be perfectly resolved, and the synergy equals 26.2% of the joint mutual information. However, the correlations between these cells are such that D̂ = 0. Interestingly, p_shuffle(s|r1, r2) is not equal to p(s|r1, r2) for all joint responses, but in all cases in which they are unequal, the joint response probability p(r1, r2) = 0 (for related examples and discussion, see Meister and Hosoya, 2001). This example is an extreme illustration in which the measure D̂ implies that there is no cost to ignoring noise correlations when, in fact, observing the responses of the two cells together provides substantially more information about the stimulus than expected from observations of the individual neurons in isolation. Clearly, the measure D̂ cannot be relied on to detect the impact of interesting and important noise correlations on the neural code.

Problematic implications of D̂. Although Nirenberg and Latham (2003) do not attempt to explore all of the consequences of interpreting their KL divergence as a general measure of information loss, we show here that this argument leads to further contradictions. One corollary of their claim comes from its extension to cases other than assessing the impact of ignoring noise correlations (Nirenberg and Latham, 2003).
Hence, one can also ask how much information is lost by a decoder built from any approximate version, p̃(s|r1, r2), of the conditional stimulus distribution, and the answer, if we follow the argument of Nirenberg and Latham (2003), must be D_KL[ p(s|r1, r2) || p̃(s|r1, r2) ]. However, in general, the KL divergence between p(s|r1, r2) and p̃(s|r1, r2) can be infinite if, for some values of s, r1, and r2, p > 0 and p̃ = 0. This result is clearly impossible to interpret. Another corollary is that the information loss resulting from ignoring noise correlations is defined for every joint response (r1, r2). This means that we can also use the formalism to determine how much information the decoder loses when acting on the shuffled spike trains. This expression is:

D̂_shuffle = Σ_{r1, r2} p_shuffle(r1, r2) D_KL[ p(s|r1, r2) || p_shuffle(s|r1, r2) ].  (22)

However, we have shown above that the mutual information that a pair of neurons conveys about the stimulus under the assumption of conditional independence is I_shuffle, and the consequent difference in mutual information is ΔI_noise. Equation 22 is not identical to ΔI_noise. In particular, ΔI_noise can be either positive or negative, because the assumption of conditional independence sometimes implies a gain of information rather than a loss (Abbott and Dayan, 1999). This typically occurs when the neurons have positive correlations (Fig. 5, example B) (Petersen et al., 2001). In this case, shuffling the spike trains actually reduces their joint noise and, therefore, can increase the information conveyed about the stimulus. In contrast, Equation 22 is never negative, implying that there is always a loss of information. Thus, another contradiction results.

What does D̂ measure? Nirenberg et al. (2001) argue that their average KL divergence measures the number of additional yes/no questions that must be asked to determine the stimulus when a decoder uses the dictionary p_shuffle(s|r1, r2) instead of p(s|r1, r2). They identify this number of yes/no questions with a loss in mutual information about the stimulus. However, this identification is mistaken. The KL divergence is not equivalent to an entropy or an entropy difference (Cover and Thomas, 1991). Any information-theoretic quantity that has units of bits can, intuitively, be thought of as representing the number of yes/no questions needed to specify its random variable. However, this does not imply that all such quantities are equivalent. For instance, both the entropy and the mutual information have units of bits. Yet they are conceptually different: a neuron firing randomly at a high rate has lots of entropy but no information, whereas a neuron firing at a low rate, but locked precisely to a stimulus, has less entropy but more information. The precise information-theoretic interpretation of the KL divergence comes in the context of coding theory (Cover and Thomas, 1991). If signals x are chosen from a probability distribution p(x), then there exists a way of representing these signals in binary form such that the average code word has a length equal to the entropy of the distribution. Each binary digit of this code corresponds to a yes/no question that must be asked about the value of x, and, hence, the code length can be thought of as representing the total number of yes/no questions that must be answered, on average, to determine the value of x. Achieving this optimal code requires a strategy matched to the distribution p(x) itself; in particular, the code length for each value x should be chosen to be −log2 p(x). The KL divergence between two distributions, p(x) and q(x), D_KL[ p(x) || q(x) ], measures the average extra length of code words for signals x drawn from p(x) using a code that was optimized for q(x). It is not an information loss in any sense. Instead, one might think of D_KL as measuring a form of coding inefficiency.
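This code-length interpretation is easy to demonstrate numerically: assign each symbol the length −log2 q(x) of a code optimized for the wrong distribution q, and the average excess over the entropy of the true distribution p is exactly D_KL(p||q). The distributions below are arbitrary choices of ours:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # true signal distribution (made up)
q = np.array([0.25, 0.25, 0.25, 0.25])    # distribution the code was built for

avg_len_matched = float(-(p * np.log2(p)).sum())     # entropy H(p) = 1.75 bits
avg_len_mismatched = float(-(p * np.log2(q)).sum())  # cross-entropy H(p, q) = 2.0 bits
extra = avg_len_mismatched - avg_len_matched         # excess code length: 0.25 bits
kl = float((p * np.log2(p / q)).sum())               # D_KL(p || q): also 0.25 bits
print(f"extra code length = {extra:.3f} bits, D_KL = {kl:.3f} bits")
```

The excess length and the KL divergence coincide exactly, which is the sense in which D_KL measures coding inefficiency rather than lost information.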
In the present context, however, this loss of coding efficiency does not refer to the code of the neurons but, rather, to a nonoptimal code that would be constructed by a hypothetical observer for the conditional stimulus distribution p(s|r1, r2). The KL divergence is commonly used in the literature simply as a measure that quantifies the difference between two probability distributions, without reference to its precise information-theoretic interpretation. In this sense, D̂ is a sensible measure of the (dis)similarity between p_shuffle(s|r1, r2) and p(s|r1, r2), but it does not assess how much information about the stimulus can be obtained by using one distribution or the other. Moreover, as a general measure of the dissimilarity of probability distributions, the KL divergence is one of several common choices. Other sensible measures include the L2 norm and the Jensen–Shannon divergence. Each of these measures is the answer to a specific question about the dissimilarity of two distributions. Because D̂ is a KL divergence between approximate and real decoding dictionaries, and because it cannot be interpreted as a loss of encoded information, this quantity should be thought of as a measure related to the problem of decoding and not to the problem of encoding. One important consequence of this distinction is that one cannot reach very general conclusions using any decoding-related measure. As noted above, there are many possible decoding algorithms, and the success of any algorithm depends on the choice of an error measure. Thus, the conclusions one reaches about the problem of decoding must always be specific to a given decoding algorithm and a particular error measure. In the case of D̂, one is implicitly assuming that the decoding dictionary is represented by a code book that is optimized for p_shuffle(s|r1, r2). This is not the only possible code book that ignores noise correlations. Another possibility is to use one optimized for p(s), which ignores the neural responses altogether.
This counterintuitive choice does explicitly ignore the noise correlations and, in some circumstances (e.g., the example in Fig. 4), it actually is more efficient than the one optimized for p_shuffle(s|r1, r2). Another source of confusion is that D̂ is expressed in units of bits, rather than in units of reconstruction error. This is highly misleading, because the encoded information is also expressed in bits. Although the encoded information provides a completely general bound on the performance of any possible decoder, it is important to keep in mind that D̂ does not have this level of generality, despite its suggestive units.

What does it mean to ignore noise correlations? The most obvious sense in which one can ignore noise correlations is to combine spike trains from two different stimulus trials. As described above, shuffling the spike trains changes the joint response distribution p(r1, r2|s) into p_shuffle(r1, r2|s) (Eq. 12) and consequently changes the probability of finding any joint response to p_shuffle(r1, r2) (see Eq. 21). Finally, the information that the shuffled spike trains encode about the stimulus is I_shuffle(S; R1, R2) (Eq. 13). However, the measure D̂ refers to a different circumstance: it assumes a decoding dictionary that ignores noise correlations but combines this with the real, correlated spike trains. For some purposes, this may be an interesting scenario. If D̂ does not assess the impact of this assumption on the information encoded about the stimulus, then what is the answer to this question? In general, this question is ill-defined. The obvious approach is to construct the new joint probability distribution q(s, r1, r2) = p_shuffle(s|r1, r2) p(r1, r2), which combines a decoding dictionary that ignores noise correlations with the real, correlated spike trains. Then, the mutual information between stimulus and responses under the joint distribution q is given by:

I_q(S; R1, R2) = Σ_{s, r1, r2} q(s, r1, r2) log2 [ q(s, r1, r2) / (q(s) q(r1, r2)) ],  (23)

where q(r1, r2) = Σ_s q(s, r1, r2) and q(s) = Σ_{r1, r2} q(s, r1, r2).
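A quick computation exposes a hidden inconsistency in this construction: the stimulus marginal q(s) implied by Equation 23 differs from the original p(s). Using example B of Figure 5 (both cells spike together for stimulus 1, both are silent for stimulus 2), our script gives:

```python
import numpy as np

# Example B of Figure 5: p_cond[s, r1, r2] = p(r1, r2 | s), stimuli equally likely.
p_cond = np.array([[[0.5, 0.0], [0.0, 0.5]],
                   [[1.0, 0.0], [0.0, 0.0]]])
p_s = np.array([0.5, 0.5])

joint = p_cond * p_s[:, None, None]
p_r = joint.sum(axis=0)                        # real response distribution p(r1, r2)

# Eq. 17: decoding dictionary that ignores noise correlations
p1 = p_cond.sum(axis=2)
p2 = p_cond.sum(axis=1)
num = p1[:, :, None] * p2[:, None, :] * p_s[:, None, None]
p_sh_given_r = num / num.sum(axis=0)

# q(s, r1, r2) = p_shuffle(s | r1, r2) * p(r1, r2); responses with p(r) = 0 drop out
q = p_sh_given_r * p_r
q_s = q.sum(axis=(1, 2))                       # implied stimulus marginal q(s)
print("q(s) =", q_s, " vs  p(s) =", p_s)
```

Here q(s) = (0.4, 0.6), whereas the true prior is p(s) = (0.5, 0.5): simultaneously assuming p_shuffle(s|r1, r2) and the real p(r1, r2) forces an inconsistent stimulus distribution.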
However, this scenario is strange, because simultaneously assuming p_shuffle(s|r1, r2) and p(r1, r2) implies (through Bayes' rule) that the distribution over the stimuli, q(s), is different from the original p(s). It is also worth noting that this formalism can be extended to the case of assuming any approximate decoding dictionary, p̃(s|r1, r2), by again forming the joint distribution q̃(s, r1, r2) = p̃(s|r1, r2) p(r1, r2). Similarly, a different distribution over the joint responses, p̃(r1, r2), can be inserted. However, the distribution over stimuli, q̃(s), will, in general, not be equal to the actual distribution, p(s). This can lead to contradictory results; for instance, the apparent mutual information can exceed the original stimulus entropy, I_q̃ > H(S), because the new distribution over stimuli, q̃(s), might have larger entropy than p(s). Nirenberg and Latham (2003) discuss a special case of comparing two neural codes, in which one code is a reduced code, or subset, of the first (Nirenberg and Latham, 2003). One example of a reduced neural code would be a code that counts spikes in a large time window versus one that keeps many details of spike timing by constructing words from spike counts in smaller time bins (Strong et al., 1998). In this case, the joint response of the reduced code, r′, can always be found from the joint response of the full code, r, by a deterministic function, r′ = F[r]. Because R′ is a reduced code, it always conveys less information about the stimulus than the full code: I(S; R′) ≤ I(S; R). This difference in


More information

Gain and Phase Margins Based Delay Dependent Stability Analysis of Two- Area LFC System with Communication Delays

Gain and Phase Margins Based Delay Dependent Stability Analysis of Two- Area LFC System with Communication Delays Gain and Phae Margin Baed Delay Dependent Stability Analyi of Two- Area LFC Sytem with Communication Delay Şahin Sönmez and Saffet Ayaun Department of Electrical Engineering, Niğde Ömer Halidemir Univerity,

More information

Jul 4, 2005 turbo_code_primer Revision 0.0. Turbo Code Primer

Jul 4, 2005 turbo_code_primer Revision 0.0. Turbo Code Primer Jul 4, 5 turbo_code_primer Reviion. Turbo Code Primer. Introduction Thi document give a quick tutorial on MAP baed turbo coder. Section develop the background theory. Section work through a imple numerical

More information

The Use of MDL to Select among Computational Models of Cognition

The Use of MDL to Select among Computational Models of Cognition The Ue of DL to Select among Computational odel of Cognition In J. yung, ark A. Pitt & Shaobo Zhang Vijay Balaubramanian Department of Pychology David Rittenhoue Laboratorie Ohio State Univerity Univerity

More information

1. The F-test for Equality of Two Variances

1. The F-test for Equality of Two Variances . The F-tet for Equality of Two Variance Previouly we've learned how to tet whether two population mean are equal, uing data from two independent ample. We can alo tet whether two population variance are

More information

A Constraint Propagation Algorithm for Determining the Stability Margin. The paper addresses the stability margin assessment for linear systems

A Constraint Propagation Algorithm for Determining the Stability Margin. The paper addresses the stability margin assessment for linear systems A Contraint Propagation Algorithm for Determining the Stability Margin of Linear Parameter Circuit and Sytem Lubomir Kolev and Simona Filipova-Petrakieva Abtract The paper addree the tability margin aement

More information

Finite Element Analysis of a Fiber Bragg Grating Accelerometer for Performance Optimization

Finite Element Analysis of a Fiber Bragg Grating Accelerometer for Performance Optimization Finite Element Analyi of a Fiber Bragg Grating Accelerometer for Performance Optimization N. Baumallick*, P. Biwa, K. Dagupta and S. Bandyopadhyay Fiber Optic Laboratory, Central Gla and Ceramic Reearch

More information

arxiv: v1 [math.mg] 25 Aug 2011

arxiv: v1 [math.mg] 25 Aug 2011 ABSORBING ANGLES, STEINER MINIMAL TREES, AND ANTIPODALITY HORST MARTINI, KONRAD J. SWANEPOEL, AND P. OLOFF DE WET arxiv:08.5046v [math.mg] 25 Aug 20 Abtract. We give a new proof that a tar {op i : i =,...,

More information

GNSS Solutions: What is the carrier phase measurement? How is it generated in GNSS receivers? Simply put, the carrier phase

GNSS Solutions: What is the carrier phase measurement? How is it generated in GNSS receivers? Simply put, the carrier phase GNSS Solution: Carrier phae and it meaurement for GNSS GNSS Solution i a regular column featuring quetion and anwer about technical apect of GNSS. Reader are invited to end their quetion to the columnit,

More information

Design By Emulation (Indirect Method)

Design By Emulation (Indirect Method) Deign By Emulation (Indirect Method he baic trategy here i, that Given a continuou tranfer function, it i required to find the bet dicrete equivalent uch that the ignal produced by paing an input ignal

More information

Avoiding Forbidden Submatrices by Row Deletions

Avoiding Forbidden Submatrices by Row Deletions Avoiding Forbidden Submatrice by Row Deletion Sebatian Wernicke, Jochen Alber, Jen Gramm, Jiong Guo, and Rolf Niedermeier Wilhelm-Schickard-Intitut für Informatik, niverität Tübingen, Sand 13, D-72076

More information

By Xiaoquan Wen and Matthew Stephens University of Michigan and University of Chicago

By Xiaoquan Wen and Matthew Stephens University of Michigan and University of Chicago Submitted to the Annal of Applied Statitic SUPPLEMENTARY APPENDIX TO BAYESIAN METHODS FOR GENETIC ASSOCIATION ANALYSIS WITH HETEROGENEOUS SUBGROUPS: FROM META-ANALYSES TO GENE-ENVIRONMENT INTERACTIONS

More information

III.9. THE HYSTERESIS CYCLE OF FERROELECTRIC SUBSTANCES

III.9. THE HYSTERESIS CYCLE OF FERROELECTRIC SUBSTANCES III.9. THE HYSTERESIS CYCLE OF FERROELECTRIC SBSTANCES. Work purpoe The analyi of the behaviour of a ferroelectric ubtance placed in an eternal electric field; the dependence of the electrical polariation

More information

Math Skills. Scientific Notation. Uncertainty in Measurements. Appendix A5 SKILLS HANDBOOK

Math Skills. Scientific Notation. Uncertainty in Measurements. Appendix A5 SKILLS HANDBOOK ppendix 5 Scientific Notation It i difficult to work with very large or very mall number when they are written in common decimal notation. Uually it i poible to accommodate uch number by changing the SI

More information

Z a>2 s 1n = X L - m. X L = m + Z a>2 s 1n X L = The decision rule for this one-tail test is

Z a>2 s 1n = X L - m. X L = m + Z a>2 s 1n X L = The decision rule for this one-tail test is M09_BERE8380_12_OM_C09.QD 2/21/11 3:44 PM Page 1 9.6 The Power of a Tet 9.6 The Power of a Tet 1 Section 9.1 defined Type I and Type II error and their aociated rik. Recall that a repreent the probability

More information

DIFFERENTIAL EQUATIONS

DIFFERENTIAL EQUATIONS DIFFERENTIAL EQUATIONS Laplace Tranform Paul Dawkin Table of Content Preface... Laplace Tranform... Introduction... The Definition... 5 Laplace Tranform... 9 Invere Laplace Tranform... Step Function...4

More information

Chapter 4. The Laplace Transform Method

Chapter 4. The Laplace Transform Method Chapter 4. The Laplace Tranform Method The Laplace Tranform i a tranformation, meaning that it change a function into a new function. Actually, it i a linear tranformation, becaue it convert a linear combination

More information

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. VIII Decoupling Control - M. Fikar

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. VIII Decoupling Control - M. Fikar DECOUPLING CONTROL M. Fikar Department of Proce Control, Faculty of Chemical and Food Technology, Slovak Univerity of Technology in Bratilava, Radlinkého 9, SK-812 37 Bratilava, Slovakia Keyword: Decoupling:

More information

Advanced D-Partitioning Analysis and its Comparison with the Kharitonov s Theorem Assessment

Advanced D-Partitioning Analysis and its Comparison with the Kharitonov s Theorem Assessment Journal of Multidiciplinary Engineering Science and Technology (JMEST) ISSN: 59- Vol. Iue, January - 5 Advanced D-Partitioning Analyi and it Comparion with the haritonov Theorem Aement amen M. Yanev Profeor,

More information

Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization

Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization 1976 MONTHLY WEATHER REVIEW VOLUME 15 Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization PETER LYNCH Met Éireann, Dublin, Ireland DOMINIQUE GIARD CNRM/GMAP, Météo-France,

More information

The variance theory of the mirror effect in recognition memory

The variance theory of the mirror effect in recognition memory Pychonomic Bulletin & Review 001, 8 (3), 408-438 The variance theory of the mirror effect in recognition memory SVERKER SIKSTRÖM Stockholm Univerity, Stockholm, Sweden The mirror effect refer to a rather

More information

Recent progress in fire-structure analysis

Recent progress in fire-structure analysis EJSE Special Iue: Selected Key Note paper from MDCMS 1 1t International Conference on Modern Deign, Contruction and Maintenance of Structure - Hanoi, Vietnam, December 2007 Recent progre in fire-tructure

More information

ASSESSING EXPECTED ACCURACY OF PROBE VEHICLE TRAVEL TIME REPORTS

ASSESSING EXPECTED ACCURACY OF PROBE VEHICLE TRAVEL TIME REPORTS ASSESSING EXPECTED ACCURACY OF PROBE VEHICLE TRAVEL TIME REPORTS By Bruce Hellinga, 1 P.E., and Liping Fu 2 (Reviewed by the Urban Tranportation Diviion) ABSTRACT: The ue of probe vehicle to provide etimate

More information

Annex-A: RTTOV9 Cloud validation

Annex-A: RTTOV9 Cloud validation RTTOV-91 Science and Validation Plan Annex-A: RTTOV9 Cloud validation Author O Embury C J Merchant The Univerity of Edinburgh Intitute for Atmo. & Environ. Science Crew Building King Building Edinburgh

More information

Chapter 12 Simple Linear Regression

Chapter 12 Simple Linear Regression Chapter 1 Simple Linear Regreion Introduction Exam Score v. Hour Studied Scenario Regreion Analyi ued to quantify the relation between (or more) variable o you can predict the value of one variable baed

More information

CHAPTER 6. Estimation

CHAPTER 6. Estimation CHAPTER 6 Etimation Definition. Statitical inference i the procedure by which we reach a concluion about a population on the bai of information contained in a ample drawn from that population. Definition.

More information

Learning Multiplicative Interactions

Learning Multiplicative Interactions CSC2535 2011 Lecture 6a Learning Multiplicative Interaction Geoffrey Hinton Two different meaning of multiplicative If we take two denity model and multiply together their probability ditribution at each

More information

5. Fuzzy Optimization

5. Fuzzy Optimization 5. Fuzzy Optimization 1. Fuzzine: An Introduction 135 1.1. Fuzzy Memberhip Function 135 1.2. Memberhip Function Operation 136 2. Optimization in Fuzzy Environment 136 3. Fuzzy Set for Water Allocation

More information

If Y is normally Distributed, then and 2 Y Y 10. σ σ

If Y is normally Distributed, then and 2 Y Y 10. σ σ ull Hypothei Significance Teting V. APS 50 Lecture ote. B. Dudek. ot for General Ditribution. Cla Member Uage Only. Chi-Square and F-Ditribution, and Diperion Tet Recall from Chapter 4 material on: ( )

More information

into a discrete time function. Recall that the table of Laplace/z-transforms is constructed by (i) selecting to get

into a discrete time function. Recall that the table of Laplace/z-transforms is constructed by (i) selecting to get Lecture 25 Introduction to Some Matlab c2d Code in Relation to Sampled Sytem here are many way to convert a continuou time function, { h( t) ; t [0, )} into a dicrete time function { h ( k) ; k {0,,, }}

More information

Stochastic Neoclassical Growth Model

Stochastic Neoclassical Growth Model Stochatic Neoclaical Growth Model Michael Bar May 22, 28 Content Introduction 2 2 Stochatic NGM 2 3 Productivity Proce 4 3. Mean........................................ 5 3.2 Variance......................................

More information

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions

Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions Stochatic Optimization with Inequality Contraint Uing Simultaneou Perturbation and Penalty Function I-Jeng Wang* and Jame C. Spall** The John Hopkin Univerity Applied Phyic Laboratory 11100 John Hopkin

More information

Physics 741 Graduate Quantum Mechanics 1 Solutions to Final Exam, Fall 2014

Physics 741 Graduate Quantum Mechanics 1 Solutions to Final Exam, Fall 2014 Phyic 7 Graduate Quantum Mechanic Solution to inal Eam all 0 Each quetion i worth 5 point with point for each part marked eparately Some poibly ueful formula appear at the end of the tet In four dimenion

More information

Observing Condensations in Atomic Fermi Gases

Observing Condensations in Atomic Fermi Gases Oberving Condenation in Atomic Fermi Gae (Term Eay for 498ESM, Spring 2004) Ruqing Xu Department of Phyic, UIUC (May 6, 2004) Abtract Oberving condenation in a ga of fermion ha been another intereting

More information

Extending MFM Function Ontology for Representing Separation and Conversion in Process Plant

Extending MFM Function Ontology for Representing Separation and Conversion in Process Plant Downloaded from orbit.dtu.dk on: Oct 05, 2018 Extending MFM Function Ontology for Repreenting Separation and Converion in Proce Plant Zhang, Xinxin; Lind, Morten; Jørgenen, Sten Bay; Wu, Jing; Karnati,

More information

Chip-firing game and a partial Tutte polynomial for Eulerian digraphs

Chip-firing game and a partial Tutte polynomial for Eulerian digraphs Chip-firing game and a partial Tutte polynomial for Eulerian digraph Kévin Perrot Aix Mareille Univerité, CNRS, LIF UMR 7279 3288 Mareille cedex 9, France. kevin.perrot@lif.univ-mr.fr Trung Van Pham Intitut

More information

Reliability Analysis of Embedded System with Different Modes of Failure Emphasizing Reboot Delay

Reliability Analysis of Embedded System with Different Modes of Failure Emphasizing Reboot Delay International Journal of Applied Science and Engineering 3., 4: 449-47 Reliability Analyi of Embedded Sytem with Different Mode of Failure Emphaizing Reboot Delay Deepak Kumar* and S. B. Singh Department

More information

Automatic Control Systems. Part III: Root Locus Technique

Automatic Control Systems. Part III: Root Locus Technique www.pdhcenter.com PDH Coure E40 www.pdhonline.org Automatic Control Sytem Part III: Root Locu Technique By Shih-Min Hu, Ph.D., P.E. Page of 30 www.pdhcenter.com PDH Coure E40 www.pdhonline.org VI. Root

More information

In presenting the dissertation as a partial fulfillment of the requirements for an advanced degree from the Georgia Institute of Technology, I agree

In presenting the dissertation as a partial fulfillment of the requirements for an advanced degree from the Georgia Institute of Technology, I agree In preenting the diertation a a partial fulfillment of the requirement for an advanced degree from the Georgia Intitute of Technology, I agree that the Library of the Intitute hall make it available for

More information

Nonlinear Single-Particle Dynamics in High Energy Accelerators

Nonlinear Single-Particle Dynamics in High Energy Accelerators Nonlinear Single-Particle Dynamic in High Energy Accelerator Part 6: Canonical Perturbation Theory Nonlinear Single-Particle Dynamic in High Energy Accelerator Thi coure conit of eight lecture: 1. Introduction

More information

Asymptotic Values and Expansions for the Correlation Between Different Measures of Spread. Anirban DasGupta. Purdue University, West Lafayette, IN

Asymptotic Values and Expansions for the Correlation Between Different Measures of Spread. Anirban DasGupta. Purdue University, West Lafayette, IN Aymptotic Value and Expanion for the Correlation Between Different Meaure of Spread Anirban DaGupta Purdue Univerity, Wet Lafayette, IN L.R. Haff UCSD, La Jolla, CA May 31, 2003 ABSTRACT For iid ample

More information

Unified Design Method for Flexure and Debonding in FRP Retrofitted RC Beams

Unified Design Method for Flexure and Debonding in FRP Retrofitted RC Beams Unified Deign Method for Flexure and Debonding in FRP Retrofitted RC Beam G.X. Guan, Ph.D. 1 ; and C.J. Burgoyne 2 Abtract Flexural retrofitting of reinforced concrete (RC) beam uing fibre reinforced polymer

More information

EC381/MN308 Probability and Some Statistics. Lecture 7 - Outline. Chapter Cumulative Distribution Function (CDF) Continuous Random Variables

EC381/MN308 Probability and Some Statistics. Lecture 7 - Outline. Chapter Cumulative Distribution Function (CDF) Continuous Random Variables EC38/MN38 Probability and Some Statitic Yanni Pachalidi yannip@bu.edu, http://ionia.bu.edu/ Lecture 7 - Outline. Continuou Random Variable Dept. of Manufacturing Engineering Dept. of Electrical and Computer

More information

Overflow from last lecture: Ewald construction and Brillouin zones Structure factor

Overflow from last lecture: Ewald construction and Brillouin zones Structure factor Lecture 5: Overflow from lat lecture: Ewald contruction and Brillouin zone Structure factor Review Conider direct lattice defined by vector R = u 1 a 1 + u 2 a 2 + u 3 a 3 where u 1, u 2, u 3 are integer

More information

An estimation approach for autotuning of event-based PI control systems

An estimation approach for autotuning of event-based PI control systems Acta de la XXXIX Jornada de Automática, Badajoz, 5-7 de Septiembre de 08 An etimation approach for autotuning of event-baed PI control ytem Joé Sánchez Moreno, María Guinaldo Loada, Sebatián Dormido Departamento

More information

CHAPTER 8 OBSERVER BASED REDUCED ORDER CONTROLLER DESIGN FOR LARGE SCALE LINEAR DISCRETE-TIME CONTROL SYSTEMS

CHAPTER 8 OBSERVER BASED REDUCED ORDER CONTROLLER DESIGN FOR LARGE SCALE LINEAR DISCRETE-TIME CONTROL SYSTEMS CHAPTER 8 OBSERVER BASED REDUCED ORDER CONTROLLER DESIGN FOR LARGE SCALE LINEAR DISCRETE-TIME CONTROL SYSTEMS 8.1 INTRODUCTION 8.2 REDUCED ORDER MODEL DESIGN FOR LINEAR DISCRETE-TIME CONTROL SYSTEMS 8.3

More information

Multicolor Sunflowers

Multicolor Sunflowers Multicolor Sunflower Dhruv Mubayi Lujia Wang October 19, 2017 Abtract A unflower i a collection of ditinct et uch that the interection of any two of them i the ame a the common interection C of all of

More information

Publication V by authors

Publication V by authors Publication Kontantin S. Kotov and Jorma J. Kyyrä. 008. nertion lo and network parameter in the analyi of power filter. n: Proceeding of the 008 Nordic Workhop on Power and ndutrial Electronic (NORPE 008).

More information

Efficient Neural Codes that Minimize L p Reconstruction Error

Efficient Neural Codes that Minimize L p Reconstruction Error 1 Efficient Neural Code that Minimize L p Recontruction Error Zhuo Wang 1, Alan A. Stocker 2, 3, Daniel D. Lee3, 4, 5 1 Department of Mathematic, Univerity of Pennylvania 2 Department of Pychology, Univerity

More information

Reputation and Multiproduct-firm Behavior: Product line and Price Rivalry Among Retailers

Reputation and Multiproduct-firm Behavior: Product line and Price Rivalry Among Retailers Reputation and Multiproduct-firm Behavior: Product line and Price Rivalry Among Retailer Shaoyan Sun and Henry An Department of Reource Economic and Environmental Sociology, Univerity of Alberta, Canada

More information

Stratified Analysis of Probabilities of Causation

Stratified Analysis of Probabilities of Causation Stratified Analyi of Probabilitie of Cauation Manabu Kuroki Sytem Innovation Dept. Oaka Univerity Toyonaka, Oaka, Japan mkuroki@igmath.e.oaka-u.ac.jp Zhihong Cai Biotatitic Dept. Kyoto Univerity Sakyo-ku,

More information

RaneNote BESSEL FILTER CROSSOVER

RaneNote BESSEL FILTER CROSSOVER RaneNote BESSEL FILTER CROSSOVER A Beel Filter Croover, and It Relation to Other Croover Beel Function Phae Shift Group Delay Beel, 3dB Down Introduction One of the way that a croover may be contructed

More information

Random vs. Deterministic Deployment of Sensors in the Presence of Failures and Placement Errors

Random vs. Deterministic Deployment of Sensors in the Presence of Failures and Placement Errors Random v. Determinitic Deployment of Senor in the Preence of Failure and Placement Error Paul Baliter Univerity of Memphi pbalitr@memphi.edu Santoh Kumar Univerity of Memphi antoh.kumar@memphi.edu Abtract

More information

On the Isomorphism of Fractional Factorial Designs 1

On the Isomorphism of Fractional Factorial Designs 1 journal of complexity 17, 8697 (2001) doi:10.1006jcom.2000.0569, available online at http:www.idealibrary.com on On the Iomorphim of Fractional Factorial Deign 1 Chang-Xing Ma Department of Statitic, Nankai

More information

Estimation of Peaked Densities Over the Interval [0,1] Using Two-Sided Power Distribution: Application to Lottery Experiments

Estimation of Peaked Densities Over the Interval [0,1] Using Two-Sided Power Distribution: Application to Lottery Experiments MPRA Munich Peronal RePEc Archive Etimation of Peaed Denitie Over the Interval [0] Uing Two-Sided Power Ditribution: Application to Lottery Experiment Krzyztof Konte Artal Invetment 8. April 00 Online

More information

arxiv: v3 [hep-ph] 15 Sep 2009

arxiv: v3 [hep-ph] 15 Sep 2009 Determination of β in B J/ψK+ K Decay in the Preence of a K + K S-Wave Contribution Yuehong Xie, a Peter Clarke, b Greig Cowan c and Franz Muheim d arxiv:98.367v3 [hep-ph 15 Sep 9 School of Phyic and Atronomy,

More information

DYNAMIC MODELS FOR CONTROLLER DESIGN

DYNAMIC MODELS FOR CONTROLLER DESIGN DYNAMIC MODELS FOR CONTROLLER DESIGN M.T. Tham (996,999) Dept. of Chemical and Proce Engineering Newcatle upon Tyne, NE 7RU, UK.. INTRODUCTION The problem of deigning a good control ytem i baically that

More information

ON A CERTAIN FAMILY OF QUARTIC THUE EQUATIONS WITH THREE PARAMETERS. Volker Ziegler Technische Universität Graz, Austria

ON A CERTAIN FAMILY OF QUARTIC THUE EQUATIONS WITH THREE PARAMETERS. Volker Ziegler Technische Universität Graz, Austria GLASNIK MATEMATIČKI Vol. 1(61)(006), 9 30 ON A CERTAIN FAMILY OF QUARTIC THUE EQUATIONS WITH THREE PARAMETERS Volker Ziegler Techniche Univerität Graz, Autria Abtract. We conider the parameterized Thue

More information

Advanced Digital Signal Processing. Stationary/nonstationary signals. Time-Frequency Analysis... Some nonstationary signals. Time-Frequency Analysis

Advanced Digital Signal Processing. Stationary/nonstationary signals. Time-Frequency Analysis... Some nonstationary signals. Time-Frequency Analysis Advanced Digital ignal Proceing Prof. Nizamettin AYDIN naydin@yildiz.edu.tr Time-Frequency Analyi http://www.yildiz.edu.tr/~naydin 2 tationary/nontationary ignal Time-Frequency Analyi Fourier Tranform

More information

Factor Sensitivity Analysis with Neural Network Simulation based on Perturbation System

Factor Sensitivity Analysis with Neural Network Simulation based on Perturbation System 1402 JOURNAL OF COMPUTERS, VOL. 6, NO. 7, JULY 2011 Factor Senitivity Analyi with Neural Network Simulation baed on Perturbation Sytem Runbo Bai College of Water-Conervancy and Civil Engineering, Shandong

More information

PhysicsAndMathsTutor.com

PhysicsAndMathsTutor.com 1. A teacher wihe to tet whether playing background muic enable tudent to complete a tak more quickly. The ame tak wa completed by 15 tudent, divided at random into two group. The firt group had background

More information