A General Theory of Information Cost Incurred by Successful Search


William A. Dembski¹*, Winston Ewert² and Robert J. Marks II²

¹Discovery Institute, 208 Columbia Street, Seattle, WA. ²Electrical & Computer Engineering, One Bear Place #97356, Baylor University, Waco, TX. (*Corresponding author: dembski@discovery.org)

Abstract

This paper provides a general framework for understanding targeted search. It begins by defining the search matrix, which makes explicit the sources of information that can affect search progress. The search matrix enables a search to be represented as a probability measure on the original search space. This representation facilitates tracking the information cost incurred by successful search (success being defined as finding the target). To categorize such costs, various information and efficiency measures are defined, notably, active information. Conservation of information characterizes these costs and is precisely formulated via two theorems, one restricted (proved in previous work of ours), the other general (proved for the first time here). The restricted version assumes a uniform probability search baseline, the general, an arbitrary probability search baseline. When a search with probability q of success displaces a baseline search with probability p of success where q > p, conservation of information states that raising the probability of successful search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). Conservation of information shows that information, like money, obeys strict accounting principles.

Key words: Search matrix, targeted search, active information, probabilistic hierarchy, uniform probability, conservation of information.

1. The Search Matrix

All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab.

In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases.

In first generalizing targeted search before generalizing conservation of information, we introduce the search matrix. The elements that constitute the search matrix may be illustrated as follows. Imagine a gigantic table that is miles in both length and width. Covering the table are upside-down dixie cups that are tightly packed, such as the following hexagonal packing:

[Figure: hexagonally packed dixie cups covering the table]

Under each dixie cup resides a single pea. The cups are opaque, so the peas are not visible unless the cup is lifted. The peas come in two varieties, high-yield and low-yield (the difference being that high-yield peas, if planted, produce lots of peas whereas low-yield peas produce only few). The low-yield peas far outnumber the high-yield peas. Our task is to locate a high-yield pea. The high-yield peas therefore form the target. Because the table is so large and the cups are tightly packed, for a human to try to walk around the table and turn over cups is infeasible.

We therefore imagine a remote-controlled toy helicopter flying over the table, hovering over individual cups, and lifting a given cup to examine the pea under it. Each act of lifting a cup to examine the pea under it constitutes a single query. Because the table is so large and the high-yield peas are so few, this search constitutes a needle-in-a-haystack problem. As with all such problems, the number of queries (i.e., attempts to locate the needle, or, in this case, to locate a high-yield pea) is strictly limited. Moreover, because the needle is so small in relation to the haystack, unless the queries are chosen judiciously, the needle will in all likelihood elude us. Our search therefore is limited to at most m queries, which we call the sample size. This, in the case at hand, is the maximal number of dixie cups the search can turn over. We therefore imagine that the remote-controlled toy helicopter flies over the gigantic table of dixie cups, hovers over a given cup, turns it over to examine the pea under it, replaces the cup, and then moves on, repeating this action at most m times.

Within the constraints of this scenario, how do we find the target? The helicopter has m queries in which to locate the target (i.e., to find a high-yield pea). At each query, the helicopter does three things: (1) It identifies a given pea by removing and replacing the dixie cup over it. (2) It extracts information that bears on the pea's probability of belonging to the target. (3) It receives information for deciding where to fly next to examine the next pea. The helicopter's search for the target may therefore be characterized as the following 3 × m matrix, which we call the search matrix:

\[
\begin{bmatrix}
x_1 & x_2 & x_3 & \cdots & x_m \\
\alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\
\beta_1 & \beta_2 & \beta_3 & \cdots & \beta_m
\end{bmatrix}
\]

Here the first row lists the actual peas sampled, the second row lists the information extracted about the peas that bears on their probability of belonging to the target, and the third row lists the information for deciding where the helicopter is to fly next. Moreover, each column represents a single query, with the columns listed in the order of search.
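Although the paper develops the search matrix purely mathematically, it may help to see the bookkeeping it performs. The following Python sketch is our illustration, not the authors' construction; all names are hypothetical. It records the three rows as parallel lists, one column per query:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class SearchMatrix:
    """Three parallel rows, grown one column per query."""
    xs: List[Any] = field(default_factory=list)      # row 1: elements queried
    alphas: List[Any] = field(default_factory=list)  # row 2: inspector outputs
    betas: List[Any] = field(default_factory=list)   # row 3: navigator outputs

    def add_column(self, x, alpha, beta):
        """Record one query: the element sampled, the evidence that it
        belongs to the target, and the guidance for where to sample next."""
        self.xs.append(x)
        self.alphas.append(alpha)
        self.betas.append(beta)

    def __len__(self):
        return len(self.xs)  # number of queries made so far
```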

A successful search is then one that on the basis of the search matrix explicitly identifies some x_i in the first row that belongs to the target. Note that in this general search scenario, given columns may query the same element more than once. Thus, for separate columns that record x_i, x_j, x_k, etc., these first-row elements of the search matrix might all be identical. Such repetitions could occur if each time the helicopter lifts a dixie cup and queries a pea, it can extract only partial information about it, and so the search may need to query the pea again to obtain still more information about it. We could have distinguished between queries that lift a dixie cup to access a pea and queries that subsequently extract further information about a pea once the dixie cup is lifted. But since one may not want to query a pea immediately after having already queried it but rather wait until other peas have been queried (information about other peas might help to elucidate information about the given pea), in the interest of generality it is best to allow for only one type of query, namely, a combined query that lifts a dixie cup and then extracts information about the underlying pea.

The search matrix is not identical with the search. Rather, the search matrix records key information about the progress of the search. Specifically, it records the elements sampled from the search space, including any repetitions (this information appears in the first row); it records information bearing on the probability that an element sampled belongs to the target (this information appears in the second row); and it records information for deciding where to sample next (this information appears in the third row). All this information contained in the search matrix comes to light through the activity of a search algorithm. Success of the search therefore depends on how effectively the algorithm uses as well as fills in the information contained in the search matrix. Does the search algorithm, in sampling x_i, have complete memory of the prior information sampled? Or is it a Markov process with access only to the latest information sampled? Does the algorithm contain additional information about the target so that regardless of the information in rows two and three, the algorithm will, with high probability, output an x_i that is in the target? Or is the target-information available to the algorithm restricted entirely to the search matrix in the sense that its probability of successfully locating the target depends entirely on the information contained in those two rows [2]? The options here are wide and varied. We consider next several examples of how the search matrix might work in practice.

Example 1.1: Uniform random sampling with perfect knowledge

In this case, each x_i is selected according to a uniform distribution across the dixie cups, each α_i records whether x_i belongs to the target (1 for yes, 0 for no), and each β_i directs the helicopter to take a uniform random sample in locating the next point in the search space (that being x_{i+1}). The reference to perfect knowledge here signifies that for each query we know exactly whether the pea sampled (each x_i) belongs to the target (in which case α_i = 1) or not (in which case α_i = 0). If any α_i equals 1, we can stop the search right there and produce x_i as an instance of a successful search. Alternatively, we can fill out the search matrix rather than leave it incomplete, and then produce the first x_i for which α_i equals 1 (producing simply x_1 if none of the α_i's equals 1). Given that the proportion of high-yield peas (i.e., the target) has probability p, the probability that this search is successful is 1 − (1 − p)^m.

Example 1.2: Uniform random sampling with zero knowledge

In this case, as before, each x_i is selected according to a uniform distribution across the dixie cups; moreover, each β_i directs the helicopter to take a uniform random sample in locating the next point in the search space (that being x_{i+1}). This time, however, examining the peas reveals nothing about whether they belong to the target. This might happen, for instance, if high-yield and low-yield peas are visually indistinguishable and we have no way of otherwise discriminating them (as we might through genetic analysis or actually planting them). The reference to zero knowledge therefore signifies that for each query we know nothing about whether the pea sampled (x_i) belongs to the target. In this case, each of the α_i's may be treated as equal to 0. Given that the proportion of high-yield peas (i.e., the target) has probability p, the probability that this search successfully identifies a particular x_i in the target is simply p. Accordingly, a sample size m greater than 1 does nothing here to improve on the probability of locating the target if we have no means of obtaining knowledge about the peas we are sampling.

Note that the probability that some element in the first row of the search matrix belongs to the target is 1 − (1 − p)^m. This is the probability of successful search as calculated in the previous example, which presupposed perfect knowledge. Nevertheless, for a search to be successful in the present example, it is not enough for the search matrix merely to have a target element appear in the first row. In addition, it must be possible to explicitly identify one element in the first row taken to be the best candidate for belonging to the target. Moreover, because this is a needle-in-the-haystack problem, successful search requires that the candidate selected belong to the target with probability considerably larger than p. With zero knowledge about whether elements in the first row of the search matrix belong to the target, however, no candidate selected from that row stands a better chance of belonging to the target than any other. In that case, each such candidate has the very small probability p of belonging to the target.
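The contrast between these two examples is easy to check numerically. Below is a minimal Monte Carlo sketch (our illustration, with hypothetical function names) that estimates both success probabilities and compares the first against the closed form 1 − (1 − p)^m:

```python
import random

def success_prob_perfect(p, m, trials=100_000):
    """Example 1.1: with perfect knowledge the search succeeds iff any of
    the m uniform queries lands in the target, i.e., with prob 1-(1-p)**m."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < p for _ in range(m)):
            hits += 1
    return hits / trials

def success_prob_zero(p, m, trials=100_000):
    """Example 1.2: with zero knowledge the discriminator must pick one
    first-row entry blindly, so m queries do not help; success stays at p."""
    hits = 0
    for _ in range(trials):
        row = [random.random() < p for _ in range(m)]
        pick = random.randrange(m)   # no information: any column is as good
        hits += row[pick]
    return hits / trials

p, m = 0.01, 50
print(success_prob_perfect(p, m), 1 - (1 - p) ** m)  # both ≈ 0.395
print(success_prob_zero(p, m))                        # ≈ 0.01 = p
```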

Example 1.3: Uniform random sampling with partial knowledge

In this example, as in the previous two, each x_i, when first selected, follows a uniform distribution across the dixie cups. Yet, to determine whether a given x_i actually does belong to the target, two agricultural tests may need to be performed on it. The tests work as follows: if both yield a positive result (denoted by a 1), then the candidate x_i belongs to the target; if one or both yield a negative result (denoted by 0), then it does not belong to the target. Moreover, the performance of each of these tests requires a single query. Thus, to determine whether an x_i that is in the target actually does belong to it, the dixie cup over it will have to be removed and replaced twice, meaning that x_i itself will appear twice in the top row of the search matrix, implying that under those appearances the corresponding α_i's will both be 1. On the other hand, if, on either of the tests, the first query performed yields a 0, then there is no point in performing the other test, and x_i need appear only once in the top row. Given a query that for the first appearance of x_i yields an α_i equal to 1, x_i will need to be queried again to determine whether it indeed belongs to the target. Once a given x_i's inclusion in or exclusion from the target is determined, the next query is uniformly random across the dixie cups. In this case, the probability p_m of hitting the target over m queries will be strictly between the probabilities determined in the last two examples, i.e., p < p_m < 1 − (1 − p)^m, where p is the zero-knowledge lower bound and 1 − (1 − p)^m is the perfect-knowledge upper bound. The exact value of p_m will depend on Bayesian considerations relating to how negative results on the two agricultural tests are distributed (in terms of prior probabilities) among the non-target elements.

Example 1.4: Smooth gradient fitness with single peak

In this case, we begin by turning over a randomly chosen dixie cup, examining the pea under it (x_1), and recording its fitness (α_1). We assume that the fitness function over the peas has a smooth gradient (in other words, fitness does not zigzag up and down as we move in a given direction over the large table, which is our search space) and that it has a single peak (in other words, any local maximum is also the [unique] global maximum). In this case, a hill-climbing strategy is appropriate, so each β_i directs the helicopter to search in the neighborhood of the x_i that, so far in the search, has attained the highest value of fitness α_i, looking to increase the fitness still further. There is no reason in this case to repeatedly query a given pea since we assume that fitness can be precisely determined in a given query and that fitness does not vary from one query to another (in other words, the fitness function here is time-independent). Once m queries in this search have been carried out, we consult the search matrix and choose, as our best candidate for landing in the target, the x_i that attains the highest value of fitness α_i. The probability that such a search is successful will depend on the sample size m, the initialization (i.e., the procedure for deciding where the search begins), the precise characteristics of the fitness function, and how efficiently the search samples points in the neighborhood of an already sampled point to improve fitness (i.e., to "climb the hill").

2. General Targeted Search

A precise mathematical characterization of the general search scenario described in the last section now looks as follows. Let Ω = {ω_1, ω_2, ..., ω_K, ω_{K+1}, ..., ω_N} be the search space, which we assume to be finite (this assumption can be relaxed and we have done so in other work, but doing so entails no substantive gain in generality). Let T = {ω_1, ω_2, ..., ω_K} be the target and define the probability p = K/N. A search of Ω for T then consists of the following 6-tuple: (ι, τ, O_α, O_β, A, δ). The items in this 6-tuple denote respectively the initiator, the terminator, the inspector, the navigator, the nominator, and the discriminator. Here is what these six items mean:

Initiator. The initiator ι, denoted by the Greek iota, starts the ball rolling. It is the procedure by which the search determines where to begin. The initiator ι is responsible for x_1, and possibly additional members of the search space x_2 through x_k, that appear as the first entries in the first row of the search matrix. In many searches the initiator does nothing more than choose a single search space element (i.e., x_1) at random according to some probability distribution.

Terminator. The terminator τ, denoted by the Greek tau, provides a stop criterion for ending the search. Because all searches are limited to a maximum number of queries (i.e., the sample size m), the terminator can always simply be identified with the policy to cut off the search after m queries. In practice, however, terminators often end a search before the maximal number of queries have been made because the search is deemed to have achieved success before this maximal number. In that case, the search matrix may be incomplete, with missing entries in the columns to the right of the last column for which a point in the search space was queried. Without loss of generality, we can then fill up the columns that are missing entries by repeating the last complete column. Alternatively, we can just leave the columns empty.

Inspector. The inspector O_α is an oracle that, in querying a search-space entry, extracts information bearing on its probability of belonging to the target T. We assume that O_α is a function mapping into some range of values capable of providing information about the degree to which members of the search space Ω give evidence of belonging to the target T. Quite often, the domain of O_α is merely Ω, and O_α maps into {0, 1}, returning a 1 if an element of Ω is in the target, 0 otherwise. Alternatively, O_α may map into the singleton {0}, returning the same element regardless of the element of Ω in question, thus providing zero information about target elements. O_α may even assume misleading values, suggesting that search-space entries are in the target when they are not and vice versa. Besides taking on discrete values, O_α may also take on more continuous values, signaling the degree to which a search-space entry is likely to be included in a target, as with a fitness function. The possible forms that O_α can take are wide and varied. Without loss of generality, however, we assume that the range of values that the inspector can take is finite.

As the inspector, O_α's task is to fill the second row of the search matrix and thus provide evidence about the degree to which corresponding elements in the first row may belong to the target. Accordingly, all the α_i's in the second row take values in O_α's range. Nevertheless, given that a single query may not provide all the information that the inspector is capable of providing about a given element from the search space, the inspector may perform multiple queries on a given search-space element and may even use information gained from different previously queried elements in answering the present query. Thus, given an element x_i in the search space that has just been selected, its value as assigned by O_α can depend on the entire partial matrix

\[
\begin{bmatrix}
x_1 & \cdots & x_{i-1} & x_i & ** \\
\alpha_1 & \cdots & \alpha_{i-1} & * & ** \\
\beta_1 & \cdots & \beta_{i-1} & * & **
\end{bmatrix}.
\]

Here ellipses denote elements of the search matrix that have been filled in, single asterisks denote individual missing entries, and double asterisks denote possibly multiple missing entries. In the case at hand, O_α uses the partial search matrix given here to determine α_i. If it ignores all entries of the partial search matrix prior to column i, then we say that O_α is Markov. If it determines α_i solely on the basis of x_i, we say that O_α operates without memory (otherwise, with memory).

Navigator. Like the inspector O_α, the navigator O_β is an oracle. Given that we are at

\[
\begin{bmatrix}
x_1 & \cdots & x_{i-1} & x_i & ** \\
\alpha_1 & \cdots & \alpha_{i-1} & \alpha_i & ** \\
\beta_1 & \cdots & \beta_{i-1} & * & **
\end{bmatrix}
\]

in the search process, the navigator takes this partial search matrix and returns the value β_i, which directs (navigates) the search as it attempts to locate the next entry in the search space (i.e., x_{i+1}). O_β maps into a fixed range of values, which in the search matrix we denote by β's. As with the inspector, if O_β ignores all entries of the partial search matrix prior to column i, then we say that O_β is Markov. If it determines β_i solely on the basis of x_i and α_i, we say that O_β operates without memory (otherwise, with memory).

The type of information that O_β delivers can be quite varied. It can provide distance of search-space elements from the target. It can provide information about the smoothness of fitness. It can provide information about how likely neighbors of a given search-space element are to be in the target. Moreover, it can combine these types of information. Whereas the inspector O_α confines itself to extracting information that bears on the probability of search-space elements residing in the target, the navigator O_β focuses on information that helps guide the search to the target. As with the inspector, we assume that the range of values the navigator may take is finite. For (mathematically) smooth fitness functions, this will entail discretizing the values that the fitness function may assume. Yet, because the degree to which searches can discriminate such information is always strictly limited (in practice, distinct measurements when sufficiently close become empirically indistinguishable), assuming a finite range of values for the navigator entails no loss of generality.

Nominator. The nominator A is the update rule that, given a search matrix filled through to the i-th column and thus incorporating the most current information from the inspector and navigator, explicitly identifies (and thereby "nominates") the next element to be queried, namely x_{i+1}. Thus A takes us from the search matrix

\[
\begin{bmatrix}
x_1 & \cdots & x_i & * & ** \\
\alpha_1 & \cdots & \alpha_i & * & ** \\
\beta_1 & \cdots & \beta_i & * & **
\end{bmatrix}
\]

to the updated search matrix

\[
\begin{bmatrix}
x_1 & \cdots & x_i & x_{i+1} & ** \\
\alpha_1 & \cdots & \alpha_i & * & ** \\
\beta_1 & \cdots & \beta_i & * & **
\end{bmatrix}.
\]

We denote the nominator by A (for "algorithm") because, in consulting the inspector and navigator to determine the next search-space element to be queried, it acts as the basic underlying algorithm of the search, running through all the target candidates that the search will consider. We say that the nominator is Markov if its selection of x_{i+1} depends solely on the i-th column of the search matrix. We say that it operates without memory if its selection of x_{i+1} is independent of prior columns of the search matrix.

To say that the nominator nominates an element x_{i+1} based on the partial search matrix

\[
\begin{bmatrix}
x_1 & \cdots & x_i & * & ** \\
\alpha_1 & \cdots & \alpha_i & * & ** \\
\beta_1 & \cdots & \beta_i & * & **
\end{bmatrix}
\]

may seem to entail a loss of generality since in many searches (e.g., genetic algorithms and particle swarms), multiple candidates from the search space tend to be generated in batches. Thus with genetic algorithms, for instance, all candidates of a given reproduction cycle appear at the same time. Accordingly, if, say, 100 offspring are generated at each reproduction cycle, the new partial search matrix is not

\[
\begin{bmatrix}
x_1 & \cdots & x_i & x_{i+1} & ** \\
\alpha_1 & \cdots & \alpha_i & * & ** \\
\beta_1 & \cdots & \beta_i & * & **
\end{bmatrix}
\]

but rather

\[
\begin{bmatrix}
x_1 & \cdots & x_i & x_{i+1} & \cdots & x_{i+100} & ** \\
\alpha_1 & \cdots & \alpha_i & * & \cdots & * & ** \\
\beta_1 & \cdots & \beta_i & * & \cdots & * & **
\end{bmatrix}.
\]

Given this last matrix, we can then let the inspector and navigator fill in the columns below x_{i+1} to x_{i+100} one column at a time proceeding left to right. Alternatively, we can simply require the nominator to proceed one column at a time (thus taking a given batch of candidates one by one in sequence), letting the inspector and navigator fill in that column before proceeding to the next. Both cases are mathematically equivalent. For some searches, it makes better intuitive sense for the nominator to nominate a whole batch of search-space elements at a time. But this can always be made equivalent to nominating one element of the batch at a time until the entire batch is exhausted. For simplicity, we tend to adopt this latter approach. Another possibility is to change the search space so that each element consists of multiple elements from the original search space (see example 3.5).

Discriminator. Once a search matrix

\[
\begin{bmatrix}
x_1 & x_2 & x_3 & \cdots & x_m \\
\alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\
\beta_1 & \beta_2 & \beta_3 & \cdots & \beta_m
\end{bmatrix}
\]

that is compatible with A has been formed, it is time to decide which x_i that appears in the first row is most likely to belong to the target T. With a complete search matrix in hand, it is not enough to suspect that some entry somewhere in the first row belongs to T. For the search to be successful, we need to know which of these entries in fact belongs to T or, if definite knowledge of inclusion in T is not possible, then which of these entries is more likely than the rest to belong to T. Choosing from the first row of the search matrix the most likely candidate that belongs to T is the job of the discriminator δ. As such, the discriminator is a function into the search space Ω from possible search matrices (i.e., from 3 × m matrices whose first row consists of elements from Ω, whose second row consists of elements from the range of the inspector, and whose third row consists of elements from the range of the navigator). For each such search matrix, the discriminator outputs the element x_i in the first row that it regards as most likely to belong to T.

Discriminators can vary in quality. Self-defeating discriminators that, whenever possible, select first-row entries belonging outside the target are an option. For a given search matrix, such discriminators minimize the probability of successfully locating the target. Also an option are independent-knowledge discriminators that can identify whether a first-row entry belongs to the target with greater certainty than is possible simply on the basis of the information delivered by the inspector and navigator (information found in the second and third rows of the search matrix). Thus, the discriminator might have access to a source of information about target inclusion that is less ambiguous than what is available to the inspector and navigator. Such discriminators would thereby introduce information external to the search matrix to locate those elements in the first row most likely to belong to the target. By contrast, no-independent-knowledge discriminators would select x_i from the first row based solely on information contained in the second and third rows of the search matrix. Such variations among discriminators are easily multiplied and formalized. We leave doing so as an exercise to the reader.

Although the discriminator as here described is a function from complete search matrices to the search space Ω, in fact we allow δ also to be a function from partial search matrices to Ω, in keeping with the terminator's ability to stop a search when success in fewer than m queries has likely been achieved. Recall that partial search matrices can always be filled up with redundant columns and thus turned into complete search matrices. Hence, allowing partial search matrices entails no gain in generality, nor does restricting ourselves to complete search matrices entail a loss of generality.

Each of the six components of a search S = (ι, τ, O_α, O_β, A, δ) can be stochastic. Thus, the initiator might choose x_1 according to some probability distribution. Likewise, the terminator may end the search depending on chance factors relevant to success being achieved. The inspector and navigator, at any given stage in forming the search matrix, may draw on a stochastic source to randomize their outputs. So too, the nominator and discriminator may choose their candidates in part randomly.
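To summarize the six components, here is a hypothetical typed sketch in Python (our gloss on the formalism, not the authors' notation; the signatures are assumptions, since in general each component may read the whole partial search matrix and may be stochastic):

```python
from typing import Any, Callable, List, Tuple

Matrix = Tuple[List[Any], List[Any], List[Any]]   # rows x, alpha, beta

Initiator     = Callable[[], Any]                 # iota: choose x_1
Terminator    = Callable[[Matrix], bool]          # tau: stop the search?
Inspector     = Callable[[Matrix, Any], Any]      # O_alpha: produce alpha_i
Navigator     = Callable[[Matrix], Any]           # O_beta: produce beta_i
Nominator     = Callable[[Matrix], Any]           # A: nominate x_{i+1}
Discriminator = Callable[[Matrix], Any]           # delta: pick final candidate

Search = Tuple[Initiator, Terminator, Inspector,
               Navigator, Nominator, Discriminator]
```

Memory and Markov restrictions then correspond to how much of the partial matrix a component actually reads.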

It follows that a search S can be represented as a random search matrix consisting of three discrete stochastic processes X, Y, and Z:

\[
S =
\begin{pmatrix}
X_1 & X_2 & \cdots & X_m \\
Y_1 & Y_2 & \cdots & Y_m \\
Z_1 & Z_2 & \cdots & Z_m
\end{pmatrix}.
\]

Here X represents the search-space elements delivered by the nominator (or the initiator for X_1), Y the corresponding outputs of the inspector, and Z the corresponding outputs of the navigator. X therefore takes values in Ω, Y in the range of O_α, and Z in the range of O_β. Alternatively, S can be conceived as a vector-valued stochastic process W where each

\[
W_i =
\begin{pmatrix}
X_i \\
Y_i \\
Z_i
\end{pmatrix},
\]

in which case

\[
S = ( W_1 \; W_2 \; \cdots \; W_m ).
\]

Applying the discriminator δ to this random search matrix thus yields an Ω-valued random variable δ(S), which we denote by X_S. As an Ω-valued random variable, X_S therefore induces a probability distribution µ_S on Ω that entirely characterizes the probability of S successfully locating the target T. In this way, an arbitrary search S can be represented as a single probability distribution or measure µ_S on the original search space Ω. This representation will be essential throughout the sequel.

As noted at the start of this paper, this representation of searches as probability measures is central to our formalization of conservation of information. If it were obvious that searches could in general be represented this way, we might just as well have omitted these first three sections. But given that a general characterization of targeted search is itself a point at issue in determining the scope and validity of conservation of information, these preliminary sections were in fact necessary. Logically speaking, however, these sections come up only tangentially in the sequel by guaranteeing that searches can indeed be represented as probability measures.
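Continuing the typed sketch above, the induced measure µ_S can be estimated empirically: run the search many times and tabulate how often the discriminator outputs each element of Ω. The helpers below are our illustration, not the authors' construction:

```python
import random
from collections import Counter

def run_search(search, m):
    """Run one stochastic realization of S and return the discriminator's
    choice, i.e., a single draw from the induced distribution mu_S."""
    init, term, inspect, navigate, nominate, discriminate = search
    xs, alphas, betas = [], [], []
    x = init()
    for _ in range(m):
        xs.append(x)                                     # row 1
        alphas.append(inspect((xs, alphas, betas), x))   # row 2
        betas.append(navigate((xs, alphas, betas)))      # row 3
        if term((xs, alphas, betas)):
            break
        x = nominate((xs, alphas, betas))                # next element
    return discriminate((xs, alphas, betas))

def estimate_mu(search, m, trials=10_000):
    """Empirical estimate of mu_S: relative frequency of each output."""
    counts = Counter(run_search(search, m) for _ in range(trials))
    return {w: c / trials for w, c in counts.items()}

# mu_S(T), the probability of successful search, is then
# sum(mu.get(w, 0) for w in T) for the estimated dictionary mu.
```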

3. Search Examples

In this section, we consider several further examples of targeted search, expanding on the examples given at the end of section 1.

Example 3.1: Uniform random sampling with perfect knowledge and without replacement

In the last section, we considered a search of m queries in which, at each query, the entire search space was sampled uniformly. This led to independent and identically distributed uniform random variates in the first row of the search matrix, 0s and 1s in the second row depending on whether the corresponding entry in the first row was respectively outside or inside the target, and in the third row a directive simply to continue uniform random sampling. The discriminator in this case simply looked for a first-row entry with a 1 directly below it in the second row. Accordingly, with uniform probability p = K/N of the target T in the search space Ω, we calculated the probability of successful search at 1 − (1 − p)^m.

This probability, however, assumes that the first row of the search matrix was sampled with replacement and thus may repeat elements of the search space. We can, on the other hand, have the navigator direct the search to avoid elements in the search space previously queried (this implies a memory of previously queried elements). If all other aspects of the search are kept the same, the search space is then sampled without replacement so that each query is uniform with respect to elements of the search space yet to be queried. This sampling procedure yields a hypergeometric distribution. Thus for Ω = {ω_1, ω_2, ..., ω_K, ω_{K+1}, ..., ω_N}, T = {ω_1, ω_2, ..., ω_K}, p = K/N, and m not exceeding N − K, the probability that this search locates the target is then

\[
1 - \frac{\binom{N-K}{m}}{\binom{N}{m}}.
\]

Moreover, if N is much larger than m, this probability approximately equals the with-replacement probability 1 − (1 − p)^m, underscoring the well-known fact that sampling without replacement only negligibly improves the efficiency of search compared to sampling with replacement unless the sample size m is large [3].
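Numerically, the hypergeometric and with-replacement probabilities are nearly indistinguishable when m is small relative to N, as the following sketch (ours) confirms:

```python
from math import comb

def p_without_replacement(N, K, m):
    """1 - C(N-K, m)/C(N, m): probability that m distinct uniform queries
    include at least one of the K target elements."""
    return 1 - comb(N - K, m) / comb(N, m)

def p_with_replacement(N, K, m):
    """1 - (1-p)**m with p = K/N: the iid with-replacement counterpart."""
    p = K / N
    return 1 - (1 - p) ** m

N, K, m = 10_000, 10, 100
print(p_without_replacement(N, K, m))  # ≈ 0.0956
print(p_with_replacement(N, K, m))     # ≈ 0.0952, nearly identical for m << N
```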

Example 3.2: Easter egg hunt

Imagine a very large two-dimensional grid with Easter eggs hidden under various squares of the grid. You are able to move around the grid by going from one square to an adjacent square. Thus you can move one square vertically, horizontally, or diagonally, like a king in chess.

[Figure: the eight king moves available from a given square]

You start out on a randomly chosen square, which is determined by the initiator. The terminator gives you at most m squares to examine. When you are on a given square, the inspector tells you whether you are over an Easter egg (by saying "yes") or not (by saying "no"). If yes, uncover the square on which you are standing, locate the egg underneath, and end the search. Given that you have moved from one square to another with neither being over an Easter egg, the navigator tells you whether the square you are currently on is closer to, the same distance from, or further from the nearest Easter egg (by saying "warmer", "same", or "colder"; distance between squares A and B is calculated as the minimum number of steps needed to reach B from A). Notice that the navigator cannot provide such information until the initiator has designated the first square and the nominator has designated the second. Thus, for the very first square chosen by the initiator, the navigator simply puts down "same".

If for your current square the navigator says "warmer", the nominator says to choose that square from your immediate neighborhood that takes you in the same direction as your last move. If for your current square the navigator says "same", the nominator says to choose at random a square that you have not yet visited in the immediate neighborhood of the current square. If for your current square the navigator says "colder", the nominator says to return to the previous square and randomly choose a square in its immediate neighborhood that you have not yet visited. Proviso: the nominator ignores any column with "colder", in subsequent search treating it as though it were not part of the search matrix except for not revisiting its square when sampling nearest neighbors. This proviso prevents the search from getting stuck. Finally, the discriminator returns the first square under which an Easter egg was found if an egg is indeed found; otherwise, it returns the square chosen by the initiator. The Easter egg hunt so described falls within our general framework for targeted search.
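One way to render the nominator's warmer/same/colder rule as code is the following sketch; it is our reading of the rules above (boundary squares and exhausted neighborhoods are handled arbitrarily), not a specification from the paper:

```python
import random

def next_square(current, previous, verdict, visited, n):
    """Pick the next square on an n-by-n grid given the navigator's verdict.
    (The proviso about striking 'colder' columns from the search matrix
    is not modeled here.)"""
    def neighbors(sq):
        x, y = sq
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < n and 0 <= y + dy < n]

    if verdict == "warmer":
        # keep moving in the direction of the last move, if the board allows
        dx, dy = current[0] - previous[0], current[1] - previous[1]
        candidate = (current[0] + dx, current[1] + dy)
        if candidate in neighbors(current):
            return candidate
    # "same": explore around the current square; "colder": back up first
    base = previous if verdict == "colder" else current
    fresh = [sq for sq in neighbors(base) if sq not in visited]
    return random.choice(fresh or neighbors(base))
```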

Example 3.3: Competitive search

In competitive search, elements of the search space Ω are conceived as players whose skill can be evaluated and ranked according to certain performance criteria. Evolutionary computing typically employs a single performance criterion given by a fitness function. Fitness thus provides a single-objective measure of optimality: one and only one thing needs to be optimized, and when it is optimized we have the undisputed best player. In many circumstances, however, optimality is multi-objective, that is, there are several competing things we are trying to optimize simultaneously, where a rise in one leads to a drop in another. Optimization with multiple performance criteria thus requires a balancing or compromise among rival objectives. How these criteria are balanced determines what we regard as the best players, that is to say, the target. Just what constitutes the right balance of performance criteria is not written in stone but constitutes a judgment call [4].

Consider a search space consisting of all college men's basketball players in a given year. Professional NBA teams are seeking the best basketball players in this search space, the very best presumably being the player picked first in the first round of the NBA draft. But what determines the best players? Many performance criteria come to mind: speed, height, field-goal percentage, three-point percentage, average number of rebounds per game, average number of blocked shots per game, average number of assists per game, etc. All these performance criteria need to be suitably combined to determine who are the best players and thus what constitutes the target of the search. Some years, this balancing of performance criteria is straightforward, so that one player stands out head and shoulders above the rest. At other times, different teams may have different needs, leading them to emphasize certain performance criteria over others, so that no player is completely dominant and no target is universally agreed upon.

How we combine and balance performance criteria depends on our needs and interests. Suppose, to change examples, you are a college admissions officer. Your search space is all graduating high school students and you are trying to find those who will thrive at your institution. This is your search, that is, to find the right students. Prospective students need to take the Scholastic Aptitude Test (SAT). The test provides two main scores, a verbal and a math score (each varying between 200, which is worst, and 800, which is best) [5]. Each of these scores corresponds to a performance criterion and requires a search query. With these two queries performed on each high school student, how do you now select the best students for your school (leaving aside other performance criteria such as GPA and recommendations)? Do you add the scores together, as is commonly done? Or do you weight one more than the other and, if so, how? If your school focuses mainly on liberal arts, you will want to weight the verbal portion more strongly than the math portion. Thus, even though you may want to see a combined score of 1200 or better, you will favor students who get a 750 verbal/450 math over students who get a 450 verbal/750 math. If, on the other hand, yours is an engineering school, then you will prefer the latter over the former. Some schools don't discriminate between the two scores but simply add them to give a combined performance measure for the test. Besides adding scores or weighting them, one can introduce arbitrary cut-offs. Thus, one might require that no student be admitted who performs less than 500 on either test, thereby ensuring that both verbal and math scores exceed a certain threshold. This suggests a maxi-min approach to combining performance measures: take the minimum of the two SAT scores and try to recruit those students whose minima are maximal. The precise formulation of such combined performance measures is straightforward. The trick is to find the right combination that suits one's purposes.

In such examples of competitive search, evaluating how a search space element fares with respect to the various performance criteria is the job of the inspector. To evaluate a search space element's competitiveness, the inspector may need to query it several times. Sometimes, however, a single query is enough.
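The rival ways of combining the two SAT scores are a one-liner each. This sketch (our illustration, with hypothetical score data) shows how a weighting, a maxi-min rule, and a hard cut-off rank the same pair of students differently:

```python
def weighted_sum(verbal, math, w_verbal=0.5):
    """Linear combination; w_verbal > 0.5 favors a liberal-arts emphasis."""
    return w_verbal * verbal + (1 - w_verbal) * math

def maximin(verbal, math):
    """Maxi-min combination: rank students by their weaker score."""
    return min(verbal, math)

def admit(verbal, math, floor=500):
    """Hard cut-off: both scores must clear a threshold."""
    return verbal >= floor and math >= floor

# A liberal-arts weighting prefers 750/450 to 450/750; maximin ties them.
print(weighted_sum(750, 450, w_verbal=0.7), weighted_sum(450, 750, w_verbal=0.7))
print(maximin(750, 450), maximin(450, 750))
print(admit(750, 450), admit(600, 600))
```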

In basketball, for instance, a player whose free throw percentage is less than 10 percent can be eliminated from consideration for the NBA draft without needing to consult any other performance criteria. Alternatively, a player who scores 100 points in a game (a performance achieved just once in the entire history of the NBA, as it happens, by Wilt Chamberlain) will rise to the very top of the player pool even if we don't know any of his other stats. Knowing which queries to make conditional on which queries have already been made is essential to constructing an effective competitive search.

Example 3.4: Tournament play

Tournament play is a special case of competitive search in which the players display their competitive abilities by playing against each other. In tournament play, there are as many performance criteria as there are players, each player's competitiveness being gauged by how well one performs in relation to the others. Basketball is an example of tournament play, though in this case the unit of search is not the individual player (as it was in the last example) but the team. In chess, on the other hand, the unit of search tends to be the individual player (though team play is also known, as when local chess clubs play each other). Tournament play is typically represented by a square anti-symmetric matrix with blanks down the diagonal (players don't play themselves) and opposite outcomes mirrored on either side of the diagonal. For instance, in the St. Petersburg Chess Congress of 1909, the tournament matrix was as follows [6]:

[Table: crosstable of the 1909 St. Petersburg Chess Congress, listing the nineteen players (Rubinstein, Lasker, Spielmann, Duras, Bernstein, Teichmann, Perlis, Cohn, Schlechter, Salwe, Tartakower, Mieses, Dus-Chotimirsky, Forgács, Burn, Vidmar, Speyer, Von Freymann, Znosko-Borovsky) with each player's results against every other (1 = win, ½ = draw, 0 = loss), total points, and prize money; Rubinstein and Lasker tied for first with 14½ points and 875 rubles each.]

Emanuel Lasker, who tied with Akiba Rubinstein for first place, was at the time the world champion. Even so, Rubinstein, though he never played Lasker in a title match (back then challengers had to raise sufficient funds before they could arrange such a match with the world champion), was in the five years preceding World War I regarded as the strongest player in the world (in 1912 he won five international tournaments in a row, a feat unparalleled). Yet both Rubinstein and Lasker were defeated by Dus-Chotimirsky, a chess player who would be lost to history except for this feat.

In chess tournaments, winners are decided by summing performance across all games (assigning 1 to a win, ½ to a draw, and 0 to a loss) and then selecting the player(s) with the highest total. This is standard practice, but one can imagine variations of it. We might want simply to focus on victories and count them. In that case, Lasker would have won the tournament outright (with 13 victories), Rubinstein would have come in second (with 12 victories), and Duras would have come in third (with 10 victories). On the other hand, we might want to choose the victor based on fewest losses. In that case Rubinstein would have been the outright victor (with a single loss), Lasker would have taken second (with 2 losses), and Spielmann would have taken third (with 3 losses). Other options for balancing performance criteria in tournament play are possible as well. For instance, Dus-Chotimirsky, for having defeated the two top-performing players as determined by conventional tournament standards, might have been rewarded extra points for doing so.

In tournament play, exhaustive search means each player playing all the other players and recording the outcomes. Most chess tournaments, however, have so many players that an exhaustive search is not possible. The St. Petersburg tournament was a select invitational meet. Most tournaments are open to the chess community. In such tournaments, players are initially matched in line with their official chess ratings (1600 for amateur, 2000 for expert, 2200 for master, 2500 for grandmaster), with weaker players initially playing stronger players so that the best players don't cancel each other out early. Then, as the rounds proceed (typically between six and eight rounds total), players with the same tournament record, or as close a record as available, are matched. Note that weaker players, as gauged by their rating coming into the tournament, tend at each round to be matched with stronger players. At the end of the tournament, one's wins and draws are summed (1 for a win, ½ for a draw). The tournament winner(s) are then decided in terms of this raw total as well as an algorithm that, in the case of ties, takes into account the strength, as determined by chess rating, of one's opponents.

In terms of the search matrix, especially when the pool of tournament players is quite large, each player can play only a few other players (each game played constituting a single query, the outcomes of these games being noted by the inspector).
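The alternative rankings just discussed, total points, most wins, fewest losses, can be computed mechanically from a crosstable. The sketch below uses a tiny hypothetical three-player table, not the St. Petersburg data:

```python
def rankings(crosstable):
    """Given results[player][opponent] in {1, 0.5, 0}, return players
    sorted by total points, by most wins, and by fewest losses."""
    def stats(p):
        games = crosstable[p].values()
        return {"points": sum(games),
                "wins":   sum(1 for g in games if g == 1),
                "losses": sum(1 for g in games if g == 0)}
    table = {p: stats(p) for p in crosstable}
    by_points = sorted(table, key=lambda p: -table[p]["points"])
    by_wins   = sorted(table, key=lambda p: -table[p]["wins"])
    by_losses = sorted(table, key=lambda p: table[p]["losses"])
    return by_points, by_wins, by_losses

toy = {
    "A": {"B": 1,   "C": 0.5},
    "B": {"A": 0,   "C": 1},
    "C": {"A": 0.5, "B": 0},
}
print(rankings(toy))  # three orderings that need not agree
```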

Given that one wants to discover the strongest players (for some specified method of balancing performance criteria, these players, taken jointly, constitute the target), the search needs to be judicious in the choice of players it uses to query individual players. Is it more effective, given at most m queries, to query as many players as possible by playing them against only one or a few other players? Or is it better to home in on a few players and put them through their paces by having them play a wide cross-section of other players? It all depends. It depends on whether player strength tends to function transitively (if A is able to defeat B and B is able to defeat C, is A able to defeat C?). It depends on whether, in the case of a single player testing other players, this player is strong or weak.

The discriminating power of strong players is important in tournament play. A strong player, by definition, loses only to a few players (i.e., to other strong players), and thus will clearly discriminate strong from weak players. In contrast, a weak player, by losing to most players, will fail to discriminate all but the weakest players. To change the game from chess to baseball, if the test of a team is whether it performs well against the New York Yankees, any team that does reasonably well against them could rightly be considered excellent. But if the test team is drawn from the local little league, then it would not provide a useful way of determining teams of national excellence. But notice, using the New York Yankees as a test team may not be very useful either: a team that beats or keeps it close with the Yankees is surely top notch, but if all the teams tested fail miserably against the Yankees, we may learn nothing about their relative strength. Players that are overly strong or overly weak are poor discriminators of play excellence.

In sum, tournament play is a special case of competitive search that fits within our general search framework but in which performance is assessed by the players (i.e., the search space elements) playing each other directly and then by noting the winner and, if applicable, the margin of victory [7].

Example 3.5: Population search

For some searches, the concern is not simply with individuals exhibiting certain characteristics but with whole populations exhibiting those characteristics. Thus, in population genetics, the emergence of a trait in a lone individual is not enough. Traits can come and go within a population. The point of interest is when a trait gets fixed among enough members of a population to take hold and perpetuate. To represent such a scenario, we might imagine all the members of a population as drawn from a set of individuals Ω′. Moreover, Ω′ may contain a target T′ consisting of all members exhibiting the trait(s) in question. Yet the actual search is not for T′ in Ω′ but for sets, whether ordered or unordered, of members from Ω′; what's more, the target will consist of such sets that have a sufficient proportion of members from T′. Thus, in a standard evolutionary computing scenario, the actual search space Ω might consist of all 100-tuples of elements drawn from Ω′ (each element of the first row of the search matrix would belong to this Ω). Moreover, the actual target T might, in this case, consist of 100-tuples drawn from Ω′ for which 75 or more of their elements belong to T′. In this case, successful search would require 75 percent of the population to have acquired the given trait(s). Many ways of transitioning from Ω′ to Ω and from T′ to T are possible here depending on the population size (is it fixed or variable?), on whether the order of elements in the population is important, and on the threshold that determines whether individually determined characteristics are widespread enough for the population to have properly acquired them. Even though the natural search space may seem to be the one we have called Ω′, representing the search within the general framework outlined in this paper may require identifying another space, which we called Ω. The actual target we are trying to locate would thus belong not to Ω′ but to Ω. Note that such an Ω will invariably have more structure than Ω′, even supplying a metric of comparison in terms of how many members of Ω′ the members of Ω share.
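As a toy instance of this lifting from Ω′ to Ω (our illustration, shrinking the 100-tuples to 4-tuples so the whole space can be enumerated):

```python
from itertools import product

def population_target(pop, in_target, threshold=0.75):
    """A population (a tuple over the individual space) belongs to the
    lifted target T when at least `threshold` of its members lie in T'."""
    share = sum(map(in_target, pop)) / len(pop)
    return share >= threshold

# Toy individual space {0, 1} with T' = {1}; Omega = all 4-tuples.
# T then consists of tuples with at least 3 ones.
omega = list(product([0, 1], repeat=4))
T = [pop for pop in omega if population_target(pop, lambda w: w == 1)]
print(len(T), "of", len(omega))   # 5 of 16
```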

4. Information and Efficiency Measures

In a general theory of search that avoids arbitrary assumptions about underlying probability distributions, uniform probabilities nonetheless play a salient role. Consider our general set-up, a search space Ω = {ω_1, ω_2, ..., ω_K, ω_{K+1}, ..., ω_N} with target T = {ω_1, ω_2, ..., ω_K}, where the target has uniform probability U(T) = p = K/N = |T|/|Ω|, where |·| denotes cardinality. In any such scenario, we can always do at least as well as take a single uniform random sample and thereby attain a target element with probability p. We might conduct our search to improve this probability or we might conduct it to diminish this probability. The natural probability distribution attaching to Ω, given the idiosyncrasies of this search space, may be very different from uniform. But it is always, in principle, possible to enumerate the elements of a finite space Ω and choose one of them randomly so that no element is privileged over any other. Uniformity, even if destined to miss the target in any nontrivial search, is always an option.

To take a single uniform random variate from the search space Ω will be called the null search. This search becomes the baseline against which we compare all other searches. Any search different from the null search will be called an alternative search [8]. The null search induces the uniform probability distribution U on Ω (see section 2). This is the probability measure we get by setting our sample size at m = 1 and letting the discriminator act on the corresponding 3 × 1 search matrix whose first-row element is simply a uniformly chosen element of the search space. In practice, p is so small that a null search over Ω for T is extremely unlikely to succeed. Success therefore demands that in place of the null search, an alternative search S be implemented that succeeds with a probability q that is considerably larger than p. The search S thus induces, in the notation of section 2, a probability distribution µ_S on Ω that entirely characterizes the probability of S successfully locating the target T. For simplicity, we denote µ_S simply by µ. In this way, an alternative search S reduces to a single probability distribution µ on the original search space Ω where the probability of the target is µ(T) = q.

In comparing null and alternative searches, it is convenient to convert probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information I_Ω as −log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space Ω to locate the target T. We then define the exogenous information I_S as −log(q), which measures the difficulty of the alternative search S in locating the target T. And finally, we define the active information I_+ as the difference between the endogenous and exogenous information: I_+ = I_Ω − I_S = log(q/p). Active information therefore measures the information that must be added (hence the plus sign in I_+) on top of a null search to raise an alternative search's probability of success by a factor of q/p.

In the null search, the sample size is fixed at 1 (a single uniform random variate is taken) whereas in the alternative search the sample size is m (m queries are made). If we make m explicit, then we can define q_m as the probability that the alternative search locates the target in m queries, and write I_{S_m} = −log(q_m) and I_+(m) = I_Ω − I_{S_m}. The behavior of I_+(m) as a function of m now provides a measure of the efficiency of search. Suppose, for instance, that S conducts its search by taking m independent and identically distributed random variates. In that case, assuming m to be much less than 1/p, q_m = 1 − (1 − p)^m is approximately equal to mp, and I_+(m) is approximately log(m). If, instead, S conducts its search by, at each query, cutting in half the search space ("interval halving"), then the probability of finding the target increases by a factor of 2 for every query, and I_+(m) is approximately m (i.e., the active information is linear rather than logarithmic in m). Interval halving is therefore a much more efficient search strategy (if it can be implemented) than uniform random sampling. I_+(m), as a function of m, therefore measures the efficiency of the search.
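These measures are immediate to compute. The sketch below (ours) evaluates the active information I_+(m) for the two strategies just described, showing the logarithmic growth of iid uniform sampling against the linear growth of interval halving:

```python
from math import log2

def active_information(p, q):
    """I_+ = I_Omega - I_S = log2(q/p): the information an alternative
    search adds over the null search to raise success from p to q."""
    return log2(q / p)

p = 2 ** -20   # endogenous information I_Omega = 20 bits
for m in (1, 16, 256):
    q_iid = 1 - (1 - p) ** m         # iid uniform sampling: q_m ≈ m*p
    q_half = min(1.0, p * 2 ** m)    # interval halving: q doubles per query,
                                     # capped at 1 (I_+ cannot exceed I_Omega)
    print(m, active_information(p, q_iid), active_information(p, q_half))
# I_+(m) grows like log2(m) for iid sampling but linearly in m for halving.
```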


More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Soft Coputing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Beverly Rivera 1,2, Irbis Gallegos 1, and Vladik Kreinovich 2 1 Regional Cyber and Energy Security Center RCES

More information

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science Proceedings of the 6th WSEAS International Conference on Applied Coputer Science, Tenerife, Canary Islands, Spain, Deceber 16-18, 2006 183 Qualitative Modelling of Tie Series Using Self-Organizing Maps:

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

1 Generalization bounds based on Rademacher complexity

1 Generalization bounds based on Rademacher complexity COS 5: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #0 Scribe: Suqi Liu March 07, 08 Last tie we started proving this very general result about how quickly the epirical average converges

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Bootstrapping Dependent Data

Bootstrapping Dependent Data Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly

More information

Physically Based Modeling CS Notes Spring 1997 Particle Collision and Contact

Physically Based Modeling CS Notes Spring 1997 Particle Collision and Contact Physically Based Modeling CS 15-863 Notes Spring 1997 Particle Collision and Contact 1 Collisions with Springs Suppose we wanted to ipleent a particle siulator with a floor : a solid horizontal plane which

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Midterm 1 Sample Solution

Midterm 1 Sample Solution Midter 1 Saple Solution NOTE: Throughout the exa a siple graph is an undirected, unweighted graph with no ultiple edges (i.e., no exact repeats of the sae edge) and no self-loops (i.e., no edges fro a

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

A Theoretical Analysis of a Warm Start Technique

A Theoretical Analysis of a Warm Start Technique A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Deflation of the I-O Series Some Technical Aspects. Giorgio Rampa University of Genoa April 2007

Deflation of the I-O Series Some Technical Aspects. Giorgio Rampa University of Genoa April 2007 Deflation of the I-O Series 1959-2. Soe Technical Aspects Giorgio Rapa University of Genoa g.rapa@unige.it April 27 1. Introduction The nuber of sectors is 42 for the period 1965-2 and 38 for the initial

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Ocean 420 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers

Ocean 420 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers Ocean 40 Physical Processes in the Ocean Project 1: Hydrostatic Balance, Advection and Diffusion Answers 1. Hydrostatic Balance a) Set all of the levels on one of the coluns to the lowest possible density.

More information

Chapter 6: Economic Inequality

Chapter 6: Economic Inequality Chapter 6: Econoic Inequality We are interested in inequality ainly for two reasons: First, there are philosophical and ethical grounds for aversion to inequality per se. Second, even if we are not interested

More information

Combining Classifiers

Combining Classifiers Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

What is Probability? (again)

What is Probability? (again) INRODUCTION TO ROBBILITY Basic Concepts and Definitions n experient is any process that generates well-defined outcoes. Experient: Record an age Experient: Toss a die Experient: Record an opinion yes,

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

ma x = -bv x + F rod.

ma x = -bv x + F rod. Notes on Dynaical Systes Dynaics is the study of change. The priary ingredients of a dynaical syste are its state and its rule of change (also soeties called the dynaic). Dynaical systes can be continuous

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

Handout 7. and Pr [M(x) = χ L (x) M(x) =? ] = 1.

Handout 7. and Pr [M(x) = χ L (x) M(x) =? ] = 1. Notes on Coplexity Theory Last updated: October, 2005 Jonathan Katz Handout 7 1 More on Randoized Coplexity Classes Reinder: so far we have seen RP,coRP, and BPP. We introduce two ore tie-bounded randoized

More information

Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space

Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space Journal of Machine Learning Research 3 (2003) 1333-1356 Subitted 5/02; Published 3/03 Grafting: Fast, Increental Feature Selection by Gradient Descent in Function Space Sion Perkins Space and Reote Sensing

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

Principal Components Analysis

Principal Components Analysis Principal Coponents Analysis Cheng Li, Bingyu Wang Noveber 3, 204 What s PCA Principal coponent analysis (PCA) is a statistical procedure that uses an orthogonal transforation to convert a set of observations

More information

A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version

A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version Made available by Hasselt University Library in Docuent Server@UHasselt Reference (Published version): EGGHE, Leo; Bornann,

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

Analyzing Simulation Results

Analyzing Simulation Results Analyzing Siulation Results Dr. John Mellor-Cruey Departent of Coputer Science Rice University johnc@cs.rice.edu COMP 528 Lecture 20 31 March 2005 Topics for Today Model verification Model validation Transient

More information

Ph 20.3 Numerical Solution of Ordinary Differential Equations

Ph 20.3 Numerical Solution of Ordinary Differential Equations Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Curious Bounds for Floor Function Sums

Curious Bounds for Floor Function Sums 1 47 6 11 Journal of Integer Sequences, Vol. 1 (018), Article 18.1.8 Curious Bounds for Floor Function Sus Thotsaporn Thanatipanonda and Elaine Wong 1 Science Division Mahidol University International

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography Tight Bounds for axial Identifiability of Failure Nodes in Boolean Network Toography Nicola Galesi Sapienza Università di Roa nicola.galesi@uniroa1.it Fariba Ranjbar Sapienza Università di Roa fariba.ranjbar@uniroa1.it

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are,

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are, Page of 8 Suppleentary Materials: A ultiple testing procedure for ulti-diensional pairwise coparisons with application to gene expression studies Anjana Grandhi, Wenge Guo, Shyaal D. Peddada S Notations

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Measures of average are called measures of central tendency and include the mean, median, mode, and midrange.

Measures of average are called measures of central tendency and include the mean, median, mode, and midrange. CHAPTER 3 Data Description Objectives Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance,

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011 Page Lab Eleentary Matri and Linear Algebra Spring 0 Nae Due /03/0 Score /5 Probles through 4 are each worth 4 points.. Go to the Linear Algebra oolkit site ransforing a atri to reduced row echelon for

More information

Note-A-Rific: Mechanical

Note-A-Rific: Mechanical Note-A-Rific: Mechanical Kinetic You ve probably heard of inetic energy in previous courses using the following definition and forula Any object that is oving has inetic energy. E ½ v 2 E inetic energy

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Solutions of some selected problems of Homework 4

Solutions of some selected problems of Homework 4 Solutions of soe selected probles of Hoework 4 Sangchul Lee May 7, 2018 Proble 1 Let there be light A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next

More information

arxiv:math/ v1 [math.co] 22 Jul 2005

arxiv:math/ v1 [math.co] 22 Jul 2005 Distances between the winning nubers in Lottery Konstantinos Drakakis arxiv:ath/0507469v1 [ath.co] 22 Jul 2005 16 March 2005 Abstract We prove an interesting fact about Lottery: the winning 6 nubers (out

More information

Machine Learning Basics: Estimators, Bias and Variance

Machine Learning Basics: Estimators, Bias and Variance Machine Learning Basics: Estiators, Bias and Variance Sargur N. srihari@cedar.buffalo.edu This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/cse676 1 Topics in Basics

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information

Module #1: Units and Vectors Revisited. Introduction. Units Revisited EXAMPLE 1.1. A sample of iron has a mass of mg. How many kg is that?

Module #1: Units and Vectors Revisited. Introduction. Units Revisited EXAMPLE 1.1. A sample of iron has a mass of mg. How many kg is that? Module #1: Units and Vectors Revisited Introduction There are probably no concepts ore iportant in physics than the two listed in the title of this odule. In your first-year physics course, I a sure that

More information

4 = (0.02) 3 13, = 0.25 because = 25. Simi-

4 = (0.02) 3 13, = 0.25 because = 25. Simi- Theore. Let b and be integers greater than. If = (. a a 2 a i ) b,then for any t N, in base (b + t), the fraction has the digital representation = (. a a 2 a i ) b+t, where a i = a i + tk i with k i =

More information

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators Suppleentary Inforation for Design of Bending Multi-Layer Electroactive Polyer Actuators Bavani Balakrisnan, Alek Nacev, and Elisabeth Sela University of Maryland, College Park, Maryland 074 1 Analytical

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

A Markov Framework for the Simple Genetic Algorithm

A Markov Framework for the Simple Genetic Algorithm A arkov Fraework for the Siple Genetic Algorith Thoas E. Davis*, Jose C. Principe Electrical Engineering Departent University of Florida, Gainesville, FL 326 *WL/NGS Eglin AFB, FL32542 Abstract This paper

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

On the Existence of Pure Nash Equilibria in Weighted Congestion Games

On the Existence of Pure Nash Equilibria in Weighted Congestion Games MATHEMATICS OF OPERATIONS RESEARCH Vol. 37, No. 3, August 2012, pp. 419 436 ISSN 0364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/10.1287/oor.1120.0543 2012 INFORMS On the Existence of Pure

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information