Query by Committee

H. S. Seung*
Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel

M. Opper†
Institut für Theoretische Physik, Justus-Liebig-Universität Giessen, D-6300 Giessen, Germany
manfred.opper@physik.uni-giessen.dbp.de

H. Sompolinsky
Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel
haim@galaxy.huji.ac.il

* Present address: AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, seung@physics.att.com
† Present address: Institut für Theoretische Physik, Julius-Maximilians-Universität Würzburg, Am Hubland, D-8700 Würzburg, Germany, opper@vax.rz.uni-wuerzburg.dbp.de

Abstract

We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.

1 Introduction

Although query algorithms have been proposed for a variety of learning problems [Bau91], little work has gone into understanding the general principles by which these algorithms should be constructed. In this work, we argue that the Shannon information of a query can be a suitable guide [Fed72]. We further show that the degree of disagreement among a committee of learners can serve as an estimate of this information value. The algorithms considered in this work focus on minimizing the number of queries required, and hence are most relevant to situations in which queries carry the heaviest computational cost.

We consider the paradigm of incremental query learning, in which the training set is built up one example at a time. We restrict our scope to parametric learning models with continuously varying weights, learning perfectly realizable, boolean-valued target rules. The prior distribution on the weight space is assumed to be flat.

An incremental learning procedure consists of two components: a training algorithm and a query algorithm. Given a set of P examples, the training algorithm produces a set of weights satisfying the training set. The query algorithm is then used to select example P+1. Then the training algorithm is run again on the newly incremented training set, and so on.

In this paper, the only training algorithm that we consider is the (zero temperature) Gibbs algorithm, which selects a weight vector at random from the version space, the set of all weight vectors that are consistent with the training set. This will enable us to use techniques from statistical mechanics [SST92].

After training 2k students on the same training set, the query by committee algorithm selects an input that is classified as positive by half of the committee, and negative by the other half. By maximizing disagreement among the committee, the information gain of the query can be made high. In the k → ∞ limit, each query bisects the version space, so that the information gain saturates the bound of 1 bit per query.
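The incremental loop just described is easy to state in code. The sketch below is ours, not the paper's, and the helper names are hypothetical: `teacher`, `sample_input`, and `gibbs_train` stand in for the target rule, the input prior, and any Gibbs-style trainer that returns a random hypothesis consistent with the training set.

```python
def query_by_committee(teacher, sample_input, gibbs_train, n_queries, k=1):
    """Incremental query learning with a committee of 2k Gibbs students.

    teacher(x)      -> the target label, +1 or -1
    sample_input()  -> a random input drawn from the prior P(X)
    gibbs_train(S)  -> a random hypothesis consistent with training set S
    """
    training_set = []
    for _ in range(n_queries):
        # Train 2k students independently on the current training set.
        committee = [gibbs_train(training_set) for _ in range(2 * k)]
        # Screen random inputs until one splits the committee k versus k:
        # the principle of maximal disagreement.
        while True:
            x = sample_input()
            if sum(h(x) > 0 for h in committee) == k:
                break
        # Query the teacher and add the labeled example to the training set.
        training_set.append((x, teacher(x)))
    return training_set
```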
In the following, the query by committee algorithm is first illustrated using a very simple model, the high-low game. We then move on to a more complicated model, perceptron learning of another perceptron [GD89]. For both models, the information gain approaches a finite value as the number of queries goes to infinity. This asymptotically finite information gain leads to generalization error that decreases exponentially with the number of queries. This is in marked contrast to the case of learning with random inputs, in which the information gain approaches zero as the number of examples increases.

In the random input case, the generalization error decreases relatively slowly, as an inverse power law in the number of examples.

2 The information content of a query

Denote the target function, or "teacher," by σ_0(X), and the parametric learning model, or "student," by σ(W, X). Both teacher and student are boolean-valued functions, i.e. maps into the set {±1}. We further assume that the target function is perfectly realizable by the student, which means that there exists some weight vector W^0 such that σ_0(X) = σ(W^0, X) for all X. The input vector X_t is called a positive or negative example depending on the sign of the teacher's output σ_t^0 = σ_0(X_t). The training set of input-output pairs ξ_t = (X_t, σ_t^0) determines the version space

𝒲_P = {W : σ(W, X_t) = σ_t^0, t = 1, ..., P},   (1)

which is the set of all W consistent with the training set. If the prior distribution P(W) is assumed to be flat, then the posterior distribution is uniform on the version space, and vanishes outside [TLS89]. This is written as

P(W | ξ_1, ..., ξ_P) = V_P^{−1} if W ∈ 𝒲_P, and 0 otherwise,   (2)

where V_P is the volume of 𝒲_P. We consider the Gibbs training algorithm [HKS91], in which the weight vector W is drawn at random from this posterior distribution.

A query algorithm specifies a way of choosing another input X_{P+1} with which to query the teacher. In general, it can be written as a conditional probability

P(X_{P+1} | ξ_1, ..., ξ_P),   (3)

which we leave unspecified for now. An input X_{P+1} is chosen from this distribution and receives a label σ_{P+1}^0 from the teacher. The results of this query are then added to the training set.

The entropy of the posterior distribution (2) on the weight space is

S = log V_P.   (4)

Since the entropy quantifies our uncertainty about W, the information gained from query P+1 can be defined as the reduction in the entropy, or

I_{P+1} = −ΔS = −log ρ_{P+1}.   (5)

Here we have defined the volume ratio

ρ_{P+1} ≡ V_{P+1} / V_P.   (6)

The information gain I_{P+1} depends on the query sequence {ξ_1, ..., ξ_{P+1}}, although the dependence is not explicit in the notation of (5). This dependence can be eliminated by considering averaged quantities. For example, one can average the information gain (and other quantities of interest) over query sequences, and then over the prior distribution of W^0. For our purposes, only a partial average will suffice. Holding the query sequence ξ_1, ..., ξ_P constant, we will average over the input X_{P+1} with respect to (3) and the teacher weight vector W^0 with respect to the posterior distribution (2). In other words, the average is over all teacher vectors W^0 that are consistent with the first P examples, and over all inputs X_{P+1} given by the query algorithm. So the average information gain is given by

⟨I_{P+1}⟩ = −⟨log ρ_{P+1}⟩_{W^0, X_{P+1}}.   (7)

Similarly, we can calculate the complete probability distribution for the volume ratio, which is

P(ρ_{P+1} | ξ_1, ..., ξ_P) = ⟨δ(ρ_{P+1} − V_{P+1}/V_P)⟩_{W^0, X_{P+1}}.   (8)

Note that these quantities still contain a dependence on the query sequence ξ_1, ..., ξ_P.

Performing the W^0 average in (7) leads to a Bayesian interpretation of the formula. Any input X_{P+1} divides the version space 𝒲_P into two parts,

𝒲_+ = {W ∈ 𝒲_P : σ(W, X_{P+1}) = +1},   (9)
𝒲_− = {W ∈ 𝒲_P : σ(W, X_{P+1}) = −1}.   (10)

Averaging over the posterior distribution of W^0, we find that the average information gain (7) is given by

⟨I_{P+1}⟩ = ⟨ −(V_+/V_P) log(V_+/V_P) − (V_−/V_P) log(V_−/V_P) ⟩_{X_{P+1}},   (11)

where V_± are the volumes of 𝒲_± and hence depend upon X_{P+1} implicitly. After the teacher answers the query, σ_{P+1}^0 is known with certainty.
Before the answer arrives, the value of σ_{P+1}^0 is uncertain: according to the Bayesian, it is +1 with probability V_+/V_P, and −1 with probability V_−/V_P. The entropy of this distribution is precisely the information value of the query, and is the expression inside the average (11). The average information gain is maximized by X_{P+1} such that V_+ = V_−, i.e. by queries that divide the version space in half. In this case of exact bisection, I = 1 bit exactly.

Unfortunately, for most nontrivial learning models, the geometry of the version space is complex, and one cannot practically calculate the volumes V_± for any given input, much less find an input for which V_+ = V_−. Training algorithms typically yield single points in the version space, not global information about the version space. However, a committee of students can be used to obtain global information. Train a committee of 2k weight vectors using the Gibbs algorithm. Find an input vector that is classified as a positive example by k members of the committee, and classified as negative by the other k. Query the teacher about this input vector. Train the committee again using the new enlarged training set, and repeat. As k → ∞, this algorithm approaches the bisection algorithm. This algorithm is very much in the spirit of [OH91], in which consensus was used to improve generalization performance. Here we use lack of consensus to choose a query: a principle of maximal disagreement.
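As a concrete illustration of (11): the expected information of a query is the binary entropy of the split it induces on the version space. A minimal sketch, with illustrative volumes not taken from the paper:

```python
import math

def query_information_bits(v_plus, v_minus):
    """Expected information gain (in bits) of a query that splits the
    version space into volumes v_plus and v_minus, per eq. (11)."""
    v = v_plus + v_minus
    bits = 0.0
    for part in (v_plus, v_minus):
        p = part / v
        if p > 0:
            bits -= p * math.log2(p)
    return bits

print(query_information_bits(0.5, 0.5))  # exact bisection: 1.0 bit
print(query_information_bits(0.9, 0.1))  # lopsided split: ~0.469 bits
```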

We define the generalization function by

ε_g(W, W^0) = ⟨Θ(−σ(W, X) σ(W^0, X))⟩_X,   (12)

where the average is taken over the prior distribution P(X) of inputs. This measures the probability of error by a student W on input-output pairs drawn from a teacher W^0. Given a query sequence ξ_1, ..., ξ_P, the generalization error is defined by

ε_g(ξ_1, ..., ξ_P) = ⟨ε_g(W, W^0)⟩_{W, W^0},   (13)

where the averages over W and W^0 are taken over the posterior distribution. The average generalization error ε_g(P) is defined as the average of (13) over the query sequence.

3 High-Low

The game of high-low, perhaps the simplest model of query learning, can be formalized within our parametric learning framework. The teacher and student are

σ_0(X) = sgn(X − W^0),   (14)
σ(X, W) = sgn(X − W).   (15)

The input and weight spaces are the unit interval [0, 1], and the prior distributions on these spaces are flat,

P(W) = 1,   (16)
P(X) = 1.   (17)

Given any query X, the teacher will respond with +1 or −1, depending on whether X is higher or lower than W^0. Suppose that there already exists a training set of P ≥ 2 examples. Let X_L be the largest negative example, and X_R the smallest positive example. Then the version space is

𝒲_P = [X_L, X_R],   (18)

with volume

V_P = X_R − X_L.   (19)

The posterior weight distribution

P_P(W) = V_P^{−1} if W ∈ [X_L, X_R], and 0 otherwise,   (20)

has entropy S = log V_P. The Gibbs training algorithm simply picks a W at random from the version space, i.e. from the interval [X_L, X_R].

We first consider the case of randomly chosen inputs. With probability 1 − V_P the next input X_{P+1} will fall outside the version space, so that the volume ratio will be 1. If X_{P+1} falls inside the version space, the probability distribution of the volume ratio is given by 2ρ, a result obtained by averaging over all X_{P+1}, W^0 ∈ [X_L, X_R]. Combining these two alternatives, we find that the probability distribution of the volume ratio is

P(ρ_{P+1} | V_P) = (1 − V_P) δ(ρ_{P+1} − 1) + V_P · 2ρ_{P+1}.   (21)

The expected information gain is given by the average of −log ρ_{P+1} with respect to this distribution, or

⟨I_{P+1}⟩ = V_P / 2 (nats).   (22)

As the number of examples increases, the volume V_P shrinks, and the information gain tends to zero. The generalization error (13) is linear in the volume of the version space,

ε_g(ξ_1, ..., ξ_P) = (1/3) V_P.   (23)

Averaging this over all possible training sets, one can show that the average generalization error satisfies

ε_g(P) = 2 / (3(P + 2)),   (24)

an inverse power law in the number of examples.

Maximal information gain can be attained by the bisection algorithm, in which X_{P+1} is chosen to be halfway between X_L and X_R. In this case, the probability distribution of the volume ratio is

P(ρ) = δ(ρ − 1/2),   (25)

so that the information gain is 1 bit per query. The volume decreases exponentially,

V_P = 2^{−P}.   (26)

Indeed, our knowledge of the binary representation of W^0 increases by one bit per query.

For a committee with 2k members, the probability distribution of the volume ratio is given by

P(ρ) ∝ 2ρ ∫_0^ρ dy ∫_ρ^1 dz y^{k−1} (1 − z)^{k−1} / (z − y),   (27)

where we have omitted the normalization constant. Here we have written P(ρ) rather than P(ρ_P), since the distribution is independent of the query history, unlike the random input case (21). Figure 1 shows (27) for various values of k. Note that as k approaches infinity, the curves approach the delta function (25) of the bisection algorithm. The average information gain in nats is given by

I(k) = ∫_0^1 dρ P(ρ) log(1/ρ),   (28)

which can be evaluated in closed form in terms of the Euler Ψ-function, Ψ(x) = Γ′(x)/Γ(x). As k → ∞, this saturates the bound of 1 bit per query.
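The contrast between (24) and (26) can be checked directly by simulation. A short sketch of the high-low game under the two query strategies (our code, assuming the setup of eqs. (14)-(19)):

```python
import random

def high_low(n_queries, strategy, seed=0):
    """Return the final version-space volume after n_queries.
    strategy 'random' draws X uniformly from [0, 1];
    strategy 'bisect' queries the midpoint of the current version space."""
    rng = random.Random(seed)
    w0 = rng.random()              # teacher threshold W^0
    x_left, x_right = 0.0, 1.0     # version space [X_L, X_R]
    for _ in range(n_queries):
        x = rng.random() if strategy == 'random' else 0.5 * (x_left + x_right)
        if x < w0:                 # teacher answers sgn(X - W^0)
            x_left = max(x_left, x)
        else:
            x_right = min(x_right, x)
    return x_right - x_left

for p in (10, 20, 40):
    v_rand = sum(high_low(p, 'random', s) for s in range(200)) / 200
    v_bis = sum(high_low(p, 'bisect', s) for s in range(200)) / 200
    print(p, v_rand, v_bis)        # ~2/(P+2) versus exactly 2^-P
```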
Since ε_g = V_P/3, and the volume V_P is a product of the volume ratios,

log ε_g = Σ_{t=1}^P log ρ_t + const.   (29)

[Figure 1: Probability distribution P(ρ) of the volume ratio for a 2k-member committee playing the high-low game; curves shown for k = 1, 2, 4, and larger k.]

The volume ratios ρ_t are independent, identically distributed random variables drawn from (27). Hence by the central limit theorem ε_g approaches a log-normal random variable as P → ∞. The most probable ε_g(P) scales like

ε_g(P) ∼ e^{−P I(k)},   (30)

where I(k) is given by (28). (We could also calculate the average of ε_g rather than log ε_g. The coefficient of P in the resulting exponential is then given by log⟨ρ⟩ rather than ⟨log ρ⟩.) Thus for query learning of high-low there is a very simple exponential relationship between information gain and generalization error. In the next section we will see that a similar relationship can be written for perceptrons.

4 Perceptron Learning

We now consider perceptron learning of another perceptron, where the teacher and student are given by

σ_0(X) = sgn(W^0 · X),   (31)
σ(X, W) = sgn(W · X).   (32)

The vectors W, W^0, and X all have N components. The weight space is taken to be the hypersphere

𝒲 = {W : W · W = N},   (33)

and the input distribution is Gaussian,

P(X) = (2π)^{−N/2} exp(−½ X · X).   (34)

Given P examples, the posterior distribution is uniform on the version space

𝒲_P = {W ∈ 𝒲 : (W · X_t)(W^0 · X_t) > 0, ∀t = 1, ..., P}.   (35)

4.1 Random inputs

When all inputs are chosen at random from the distribution (34), the replica method can be used to calculate the entropy of the posterior distribution [GT90]. The calculation is exact in the thermodynamic limit, where N, P → ∞ with α ≡ P/N constant. The entropy per weight s ≡ S/N is then

s(α) = ½ log(1 − q) + ½ q + 2α ∫ Dx H(γx) log H(γx),   (36)

where

γ ≡ √( q / (1 − q) ),   (37)
Dx ≡ dx e^{−x²/2} / √(2π),   (38)
H(y) ≡ ∫_y^∞ Dx.   (39)

The entropy must be extremized with respect to the order parameter q, from which the average generalization error can be obtained using the relation

ε_g = π^{−1} cos^{−1} q.   (40)

In the large α limit, this leads to the inverse power law behavior

ε_g(α) ≈ 0.625 / α.   (41)

The large α asymptotics can also be obtained by examining the scaling of the entropy with the generalization error, similar to the arguments in [SST92] using the microcanonical high-temperature limit for a general classification of learning curves. As q → 1, the first term of (36)

dominates, so that the entropy has a simple logarithmic dependence on the generalization error,

s ≈ log ε_g,   (42)

where we have used the asymptotic result ε_g ≈ π^{−1} √(2(1 − q)). The information gain is given by

I(α) = −ds/dα = −2 ∫ Dx H(γx) log H(γx) ≈ −√(2(1 − q)/π) ∫ dx H(x) log H(x) ≈ 1.6 ε_g.   (43)

This condition combined with (42) can only be satisfied by the inverse power law ε_g ∼ 1/α, as in (41).

We have performed a Monte Carlo simulation of this algorithm. The deterministic perceptron algorithm [MP88] was used to obtain a weight vector in the version space. Then zero-temperature Monte Carlo was used to random walk inside the version space. To maintain acceptance rates of approximately 50%, the size of the Monte Carlo step was scaled downward with increasing α. The resulting learning curve is shown in Fig. 2, and fits the analytic results obtained through the replica method.

4.2 Query by committee

Because each query depends on the previous history of queries, the committee algorithm is a dynamical process, where the number of examples plays the role of "time." In the replica calculations for this algorithm, we must know the overlaps between weight vectors at different "times." This makes the calculations much more difficult than for the case of random inputs. The replica calculation in the appendix makes the simplifying assumption that the typical overlap between weight vectors at different "times" is equal to the typical overlap of two vectors at the earlier "time." In other words, we assume that the overlaps satisfy

N^{−1} W_t · W_{t′} = q(min{t, t′}),   (44)

and are self-averaging quantities. Taking the thermodynamic limit N → ∞ turns the discrete-time dynamics in t into a continuous-time dynamics in τ = t/N. The entropy of the weight space at time α is calculated by treating q(τ) for previous "times" τ < α as an external field. The resulting saddle point equation for q(α) is an integral equation that can be solved numerically.

The replica results for the two-member committee are shown in Fig. 2, along with Monte Carlo simulations. In the simulations, zero-temperature Gibbs training was implemented by training two perceptrons using the standard perceptron algorithm, and then equilibrating them with zero-temperature Monte Carlo. Input vectors were then selected at random from the prior distribution (34) until an input that produced disagreement was found. The teacher was queried on this input, and the input and the teacher's output were added to the training set.

[Table 1: Information gain I(∞) of the query by committee algorithm for perceptron learning, in nats and in bits; the numeric entries did not survive transcription.]

Note that the generalization error of the two-member committee is very close to the random input results for α of order 1 or less. Even for relatively large α, different committee sizes (not shown) perform almost identically. However, the algorithms are quite different in their large α asymptotic properties. By taking the derivative of the committee entropy (66) with respect to α, we find that the information gain approaches the limit

I(α) → −2 [∫ dx H^k(x) H^k(−x) H(x) log H(x)] / [∫ dx H^k(x) H^k(−x)]   (45)

as α → ∞. These limiting values for some small k can be found in Table 1. As k → ∞, the information gain saturates the bound of one bit per query.

How does the information gain affect the generalization performance? For the committee algorithm, we have found that the information gain approaches a finite constant value,

ds/dα → −I(∞).   (46)

We assume that the entropy is still given by s(α) ≈ log ε_g, as in (42). This assumption, which is shown to be consistent in the appendix, leads to the asymptotic result

ε_g ∼ e^{−I(∞) α}.   (47)

The generalization error is asymptotically exponential in α, with a decay constant determined by the information gain.
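The limit (45) is straightforward to evaluate by numerical quadrature. A sketch using SciPy (our code; the resulting values are not checked against Table 1, whose numeric entries did not survive transcription):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def info_gain_limit(k):
    """Asymptotic information gain I(alpha -> inf) of eq. (45), in nats,
    for a committee of 2k perceptrons."""
    H = norm.sf  # H(x): Gaussian measure of the interval (x, infinity)
    def weight(x):
        return H(x)**k * H(-x)**k
    def integrand(x):
        hx = H(x)
        return weight(x) * hx * np.log(hx) if hx > 0 else 0.0
    num, _ = quad(integrand, -np.inf, np.inf)
    den, _ = quad(weight, -np.inf, np.inf)
    return -2.0 * num / den

for k in (1, 2, 3, 10):
    nats = info_gain_limit(k)
    # As k grows this approaches ln 2 nats, i.e. the bound of 1 bit per query.
    print(k, nats, nats / np.log(2))
```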
The exponential decay (47) is markedly faster than the inverse power law (41) for random inputs.

In our implementation of the committee algorithms, a source of random inputs is screened by a committee until one is found that provokes disagreement. A practical drawback of this scheme is that the time to find a query diverges like the inverse of the generalization error. In this respect, algorithms which construct queries directly may be superior. For example, the query algorithm proposed by Kinzel and Rujan [KR90, WR92] for perceptron learning constructs input vectors that are perpendicular to the student weight vector. In conjunction with the Gibbs training algorithm, this method yields generalization performance only slightly worse than that of the committee algorithms for moderate α, but much faster query times. However, the committee algorithms appear to possess a generality that other query algorithms do not.
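For contrast with committee screening, here is a sketch of a direct construction in the spirit of the Kinzel-Rujan proposal: project a random input onto the subspace orthogonal to the student weight vector, so the query lies on the student's decision boundary. This is our illustrative code, assuming the Gaussian prior (34), not the authors' implementation:

```python
import numpy as np

def perpendicular_query(w_student, rng):
    """Construct an input orthogonal to the student weight vector, so the
    student is maximally uncertain about its label."""
    x = rng.standard_normal(w_student.shape[0])   # draw from the prior (34)
    w_hat = w_student / np.linalg.norm(w_student)
    x -= (x @ w_hat) * w_hat                      # remove the component along W
    return x

rng = np.random.default_rng(0)
w = rng.standard_normal(50)
x = perpendicular_query(w, rng)
print(abs(x @ w))  # ~0: the query lies on the student's decision boundary
```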

[Figure 2: Generalization curves for perceptron learning using the random-input, committee, and minimal training set algorithms. The Monte Carlo simulations were done with N = 25, averaged over 64 samples. After each query, the perceptron was equilibrated for 1024 steps. The standard error of measurement is smaller than the size of the symbol. The solid lines are analytic results from replica theory.]

5 Conclusion

We have compared query by committee and random-input training for the case of the perceptron. For random inputs, the information gain approaches zero with the generalization error,

I(α) ≡ −ds/dα ∝ ε_g.   (48)

For the committee algorithm, the information gain is asymptotically finite,

I(α) → I(∞),   (49)

where I(∞) > 0 is given by (45). For perceptron learning, the entropy behaves asymptotically as

s ≡ S/N ≈ log ε_g   (50)

for both random-input and query algorithms. Given ds/dα and s in terms of ε_g, one can derive the generalization curve as a function of α. For random inputs, (48) and (50) imply

ε_g ∼ 1/α.   (51)

For the committee algorithms, (49) and (50) imply

log ε_g ∼ −I(∞) α.   (52)

The high-low game and the perceptron have provided simple illustrations of the query by committee algorithm, and of the relationship between information gain and generalization error. However, we have not addressed the issue of whether the behaviors exhibited by these toy models are general. For what architectures does query by committee lead to asymptotically finite information gain? Under what conditions does asymptotically finite information gain lead to exponential generalization curves? These questions are currently under investigation.

Acknowledgements

We acknowledge helpful discussions with Y. Freund, M. Griniasty, D. Hansel, N. Tishby, and T. Watkin.

References

[Bau91] E. Baum. Neural net algorithms that learn in polynomial time from examples and queries. IEEE Trans. on Neural Networks, 2:5-19, 1991.

[Fed72] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, New York, 1972.

[GD89] E. Gardner and B. Derrida. Three unfinished works on the optimal storage capacity of networks. J. Phys., A22:1983-1994, 1989.

[GT90] G. Gyorgyi and N. Tishby. Statistical theory of learning a rule. In W. K. Theumann and R. Koberle, editors, Neural Networks and Spin Glasses, pages 3-36, Singapore, 1990. World Scientific.

[HKS91] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. In M. K. Warmuth and L. G. Valiant, editors, Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 61-74, San Mateo, CA, 1991. Morgan Kaufmann.

[KR90] W. Kinzel and P. Rujan. Improving a network generalization ability by selecting examples. Europhys. Lett., 13:473-477, 1990.

[MP88] M. L. Minsky and S. Papert. Perceptrons. MIT Press, Cambridge, expanded edition, 1988.

[OH91] M. Opper and D. Haussler. Generalization performance of Bayes optimal classification algorithm for learning a perceptron. Phys. Rev. Lett., 66:2677-2680, 1991.

[SST92] H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Phys. Rev., A45:6056-6091, 1992.

[TLS89] N. Tishby, E. Levin, and S. Solla. Consistent inference of probabilities in layered networks: Predictions and generalization. In Proc. Int. Joint Conf. on Neural Networks, volume 2, pages 403-409, Washington, DC, 1989. IEEE.

[WR92] T. L. H. Watkin and A. Rau. Selecting examples for perceptrons. J. Phys., A25:113-121, 1992.

Appendix: Replica calculations for query by committee

The volume V_P of the version space determines the entropy S = ln V_P for a fixed teacher W^0. The average of the entropy over all possible query sequences ξ_1, ..., ξ_P consistent with the teacher is denoted by S_av. From this quantity we shall find the overlap q between two random weight vectors inside the version space 𝒲_P. If we take one of them to be the target vector, this in turn will give us the generalization error on a new random input.

We begin with the definition

S_av = ⟨ln V_P⟩_{X_t}   (53)
     = lim_{n→0} (∂/∂n) ln ⟨⟨V_P^n⟩_{X_t}⟩_{W^0}.   (54)

In the second line we have introduced the replica trick, and an average of W^0 over the prior distribution of teachers has been added, as in [OH91]. This makes the symmetry of the teacher and the students explicit, so that the teacher vector W^0 can be treated as another replica:

S_av = lim_{n→0} (∂/∂n) ln Tr_{{W^a}} ⟨ ∏_{t=1}^P ∏_{a=0}^n Θ(σ_t W^a · X_t) ⟩_{X_t}.   (55)

Note that for all integer n we now have n + 1 replicas.

Recall that for the 2k = 2 committee, the (t+1)-st input X_{t+1} is selected so as to cause disagreement between two committee members W_+^t and W_−^t drawn at random from the version space 𝒲_t. Thus the probability density for X_{t+1} is

P(X_{t+1} | W_+^t, W_−^t) ∝ P(X_{t+1}) Σ_{σ=±1} Θ(σ W_+^t · X_{t+1}) Θ(−σ W_−^t · X_{t+1}),   (56)

where a prefactor must be added to achieve the correct normalization ∫ dX_{t+1} P(X_{t+1} | W_+^t, W_−^t) = 1. The prior distribution of patterns P(X) is the Gaussian

P(X) = (2π)^{−N/2} e^{−½ X·X}.   (57)

Let us consider the average over X_{t+1}, with the committee weights W_+^t and W_−^t fixed for the time being:

⟨ ∏_{a=0}^n Θ(σ_{t+1} W^a · X_{t+1}) ⟩_{X_{t+1}}
  = ∫ dX_{t+1} ∏_{a=0}^n Θ(σ_{t+1} W^a · X_{t+1}) P(X_{t+1} | W_+^t, W_−^t)
  = [∫ dX P(X) Σ_{σ=±1} ∏_{a=0}^n Θ(σ_{t+1} W^a · X) Θ(σ W_+^t · X) Θ(−σ W_−^t · X)] / [∫ dX P(X) Σ_{σ=±1} Θ(σ W_+^t · X) Θ(−σ W_−^t · X)].   (58)

The average with respect to P(X_{t+1}) involves n + 3 quantities W^a · X_{t+1}/√N, where a can be any of the symbols 0, 1, ..., n, +, −. These quantities are Gaussian random variables with zero mean, and covariance given by

⟨(N^{−1/2} W^a · X)(N^{−1/2} W^b · X)⟩_{P(X)} = N^{−1} W^a · W^b ≡ q_{ab}.   (59)

The average (58) depends on the vectors W^a only through these overlaps q_{ab}. In accord with the ansatz (44) and the ansatz of replica symmetry, we assume

q_{ab} = δ_{ab} + (1 − δ_{ab}) q,   (60)
q_{a±} = q(t),   (61)
q_{±±} = 1,   (62)
q_{+−} = q(t).   (63)

We now make the replacements

N^{−1/2} W_±^t · X_{t+1} → x √q(t) + z_± √(1 − q(t)),   (64)
N^{−1/2} W^a · X_{t+1} → x √q(t) + y √(q − q(t)) + z_a √(1 − q),   (65)

where x, y, z_±, z_a are uncorrelated Gaussian random variables of unit variance. This change of variables yields the proper covariances. This average must be done for every X_t, and yields an entropy that depends on the order parameters q(t), t = 1 to P − 1, and on the overlap q at time P of the weight vectors W^a. We treat the q(t) as external fields and determine q variationally. This is valid provided that the q(t) are self-averaging quantities, so that the fluctuations in the committee members at times t < P do not affect the calculation of the entropy at time P. To take the continuum limit N → ∞, we define α = P/N and τ = t/N, and replace the product ∏_{t=1}^P (...) by exp(N ∫_0^α dτ ln(...)). Finally, the W integrals are done, as usual, by introducing entropic terms containing q.

Generalizing to a committee of 2k members, we find for the entropy per weight

s = ½ q + ½ ln(1 − q) + 2 ∫_0^α dτ [∫ Dx Dy H^k(λx) H^k(−λx) H(u) ln H(u)] / [∫ Dx H^k(λx) H^k(−λx)],   (66)

where

λ ≡ √( q(τ) / (1 − q(τ)) ),   (67)
u ≡ [x √q(τ) + y √(q − q(τ))] / √(1 − q).   (68)

The value of q = q(α) is determined variationally from (66). The information gain can be calculated by differentiating (66) with respect to α,

I(q) = −2 [∫ Dx H^k(γx) H^k(−γx) H(γx) log H(γx)] / [∫ Dx H^k(γx) H^k(−γx)],   (69)

where γ is defined as in (37). As q → 1, this yields the asymptotic result (45). Assuming that the generalization error is asymptotically exponential in α, one can show that the disorder term in (66), the integral over τ, approaches a constant. Hence the ln(1 − q) term dominates, s ≈ log ε_g, and the derivation in the text of asymptotically exponential ε_g is self-consistent.
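The substitutions (64)-(65) can be verified by direct sampling. A small sketch (our code, with illustrative values q(t) < q) confirming that the constructed fields reproduce the ansatz covariances (60)-(63):

```python
import numpy as np

rng = np.random.default_rng(1)
q_t, q = 0.3, 0.7          # illustrative overlaps, q(t) < q < 1
m = 200_000                # Monte Carlo samples

x = rng.standard_normal(m)
y = rng.standard_normal(m)
z_plus, z_minus = rng.standard_normal((2, m))
z_a, z_b = rng.standard_normal((2, m))

# Committee fields, eq. (64), and two replica fields, eq. (65).
f_plus = np.sqrt(q_t) * x + np.sqrt(1 - q_t) * z_plus
f_minus = np.sqrt(q_t) * x + np.sqrt(1 - q_t) * z_minus
f_a = np.sqrt(q_t) * x + np.sqrt(q - q_t) * y + np.sqrt(1 - q) * z_a
f_b = np.sqrt(q_t) * x + np.sqrt(q - q_t) * y + np.sqrt(1 - q) * z_b

print(np.mean(f_plus * f_minus))  # ~ q(t): committee-committee overlap
print(np.mean(f_a * f_b))         # ~ q   : replica-replica overlap
print(np.mean(f_a * f_plus))      # ~ q(t): replica-committee overlap
print(np.mean(f_a * f_a))         # ~ 1   : unit diagonal
```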
