Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations


Zeyuan Allen Zhu   Sasa Misailovic   Jonathan A. Kelner   Martin Rinard
MIT CSAIL

Abstract

Despite the fact that approximate computations have come to dominate many areas of computer science, the field of program transformations has focused almost exclusively on traditional semantics-preserving transformations that do not attempt to exploit the opportunity, available in many computations, to acceptably trade off accuracy for benefits such as increased performance and reduced resource consumption. We present a model of computation for approximate computations and an algorithm for optimizing these computations. The algorithm works with two classes of transformations: substitution transformations (which select one of a number of available implementations for a given function, with each implementation offering a different combination of accuracy and resource consumption) and sampling transformations (which randomly discard some of the inputs to a given reduction). The algorithm produces a (1 + ε) randomized approximation to the optimal randomized computation (which minimizes resource consumption subject to a probabilistic accuracy specification in the form of a maximum expected error or maximum error variance).

Categories and Subject Descriptors D.3.4 [Programming Languages]: Processors: optimization; G.3 [Probability and Statistics]: Probabilistic Algorithms; F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems

General Terms Algorithms, Design, Performance, Theory

Keywords Optimization, Error-Time Tradeoff, Discretization, Probabilistic

1. Introduction

Computer science was founded on exact computations with discrete logical correctness requirements (examples include compilers and traditional relational databases). But over the last decade, approximate computations have come to dominate many fields.
In contrast to exact computations, approximate computations aspire only to produce an acceptably accurate approximation to an exact (but in many cases inherently unrealizable) output. Examples include machine learning, unstructured information analysis and retrieval, and lossy video, audio, and image processing.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. POPL'12, January 25-27, 2012, Philadelphia, PA, USA. Copyright © 2012 ACM ... $10.00

Despite the prominence of approximate computations, the field of program transformations has remained focused on techniques that are guaranteed not to change the output (and therefore do not affect the accuracy of the approximation). This situation leaves the developer solely responsible for managing the approximation. The result is inflexible computations with hard-coded approximation choices directly embedded in the implementation.

1.1 Accuracy-Aware Transformations

We investigate a new class of transformations, accuracy-aware transformations, for approximate computations. Given a computation and a probabilistic accuracy specification, our transformations change the computation so that it operates more efficiently while satisfying the specification. Because accuracy-aware transformations have the freedom to change the output (within the bounds of the accuracy specification), they have a much broader scope and are therefore able to deliver a much broader range of benefits. The field of accuracy-aware transformations is today in its infancy. Only very recently have researchers developed general transformations that are designed to manipulate the accuracy of the computation.
Examples include task skipping [27, 28], loop perforation [4, 23, 24, 3], approximate function memoization [6], and substitution of multiple alternate implementations [2, 3, 2, 33]. When successful, these transformations deliver programs that can operate at multiple points in an underlying accuracy-resource consumption tradeoff space. Users may select points that minimize resource consumption while satisfying the specified accuracy constraints, maximize accuracy while satisfying specified resource consumption constraints, or dynamically change the computation to adapt to changes (such as load or clock rate) in the underlying computational platform [2, 4]. Standard approaches to understanding the structure of the tradeoff spaces that accuracy-aware transformations induce use training executions to derive empirical models [2, 3, 4, 24, 27, 28, 3, 33]. Potential pitfalls include models that may not accurately capture the characteristics of the transformed computation, poor correlations between the behaviors of the computation on training and production inputs, a resulting inability to find optimal points in the tradeoff space for production inputs, and an absence of guaranteed bounds on the magnitude of potential accuracy losses.

1.2 Our Result

We present a novel analysis and optimization algorithm for a class of approximate computations. These computations are expressed as a tree of computation nodes and reduction nodes. Each computation node is a directed acyclic graph of nested function nodes, each of which applies an arbitrary function to its inputs. A reduction node applies an aggregation function (such as min, max, or mean) to its inputs.

We consider two classes of accuracy-aware transformations. Substitution transformations replace one implementation of a function node with another implementation. Each function has a propagation specification that characterizes the sensitivity of the function to perturbations in its inputs. Each implementation has resource consumption and accuracy specifications. Resource consumption specifications characterize the resources (such as time, energy, or cost) each implementation consumes to compute the function. Accuracy specifications characterize the error that the implementation introduces. Sampling transformations cause the transformed reduction node to operate on a randomly selected subset of its inputs, simultaneously eliminating the computations that produce the discarded inputs. Each sampling transformation has a sampling rate, which is the ratio between the size of the selected subset of its inputs and the original number of inputs. Together, these transformations induce a space of program configurations. Each configuration identifies an implementation for every function node and a sampling rate for every reduction node. In this paper we work with randomized transformations that specify a probabilistic choice over configurations. Our approach focuses on understanding the following technical question: What is the optimal accuracy-resource consumption tradeoff curve available via our randomized transformations? Understanding this question makes it possible to realize a variety of optimization goals, for example minimizing resource consumption subject to an accuracy specification or maximizing accuracy subject to a resource consumption specification. The primary technical result in this paper is an optimization algorithm that produces a (1 + ε)-approximation to the optimal randomized computation (which minimizes resource consumption subject to a probabilistic accuracy specification in the form of a maximum expected error or maximum error variance).
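The notions above can be made concrete with a small sketch. The class and field names below are illustrative, not from the paper: a configuration records an implementation index for every function node and a sampling rate for every reduction node, and a randomized configuration draws one configuration per execution according to fixed weights.

```python
import random

class Configuration:
    """One point in the configuration space: an implementation choice
    per function node and a sampling rate per reduction node."""
    def __init__(self, impl_choice, sampling_rate):
        self.impl_choice = impl_choice      # e.g. {"log": 2, "sin": 1}
        self.sampling_rate = sampling_rate  # e.g. {"avg": 0.475}

class RandomizedConfiguration:
    """A probabilistic choice over configurations."""
    def __init__(self, weighted_configs):
        # weighted_configs: list of (weight, Configuration); weights sum to 1
        self.weighted_configs = weighted_configs

    def sample(self, rng=random):
        """Pick one configuration for this execution, proportional to weight."""
        r, acc = rng.random(), 0.0
        for w, cfg in self.weighted_configs:
            acc += w
            if r <= acc:
                return cfg
        return self.weighted_configs[-1][1]  # guard against float round-off
```

Each execution of the transformed program calls sample() once and then runs the program under the chosen configuration.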
We also discuss how to realize a variety of other optimization goals.

1.3 Challenges and Solutions

Finding optimal program configurations presents several algorithmic challenges. In particular:

Exponential Configurations: The number of program configurations is exponential in the size of the computation graph, so a brute-force search for the best configuration is computationally intractable.

Randomized Combinations of Configurations: A transformed program that randomizes over multiple configurations may substantially outperform one that chooses any single fixed configuration. We thus optimize over an even larger space: the space of probability distributions over the configuration space.

Global Error Propagation Effects: Local error allocation decisions propagate globally throughout the program. The optimization algorithm must therefore work with global accuracy effects and interactions between errors introduced at the nodes of the computation graph.

Nonlinear, Nonconvex Optimization Problem: The running time and accuracy of the program depend nonlinearly on the optimization variables. The resulting optimization problem is nonlinear and nonconvex.

We show that, in the absence of reduction nodes, one can formulate the optimization problem as a linear program, which allows us to obtain an exact optimization over the space of probability distributions of configurations in polynomial time. The question becomes much more involved when reduction nodes come into the picture. In this case, we approximate the optimal tradeoff curve, but to a (1 + ε) precision for an arbitrarily small constant ε > 0. Our algorithm has a running time that is polynomially dependent on 1/ε. It is therefore a fully polynomial-time approximation scheme (FPTAS). Our algorithm tackles reduction nodes one by one. For each reduction node, it discretizes the tradeoff curve achieved by the subprogram that generates the inputs to the reduction node.
This discretization uses a special bi-dimensional discretization technique that is specifically designed for such tradeoff problems. We next show how to extend this discretization to obtain a corresponding discretized tradeoff curve that includes the reduction node. The final step is to recursively combine the discretizations to obtain a dynamic programming algorithm that approximates the optimal tradeoff curve for the entire program. We note that the optimization algorithm produces a weighted combination of program configurations. We call such a weighted combination a randomized configuration. Each execution of the final randomized program chooses one of these configurations with probability proportional to its weight. Randomizing the transformed program provides several benefits. In comparison with a deterministic program, the randomized program may be able to deliver substantially reduced resource consumption for the same accuracy specification. Furthermore, randomization also simplifies the optimization problem by replacing the discrete search space with a continuous search space. We can therefore use linear programs (which can be solved efficiently) to model regions of the optimization space instead of integer programs (which are, in general, intractable).

1.4 Potential Applications

A precise understanding of the consequences of accuracy-aware transformations will enable the field to mature beyond its current focus on transformations that do not change the output. This increased scope will enable researchers in the field to attack a much broader range of problems. Some potential examples include:

Sublinear Computations On Big Data: Sampling transformations enable the optimization algorithm to automatically find sublinear computations that process only a subset of the inputs to provide an acceptably accurate output. Over the past decade, researchers have developed many sublinear algorithms [29]. Accuracy-aware transformations hold out the promise of automating the development of many of these algorithms.
Incrementalized and Online Computations: Many algorithms can be viewed as converging towards an optimal exact solution as they process more inputs. Because our model of computation supports such computations, our techniques make it possible to characterize the accuracy of the current result as the computation incrementally processes inputs. This capability opens the door to the automatic development of incrementalized computations (which incrementally sample available inputs until the computation produces an acceptably accurate result) and online computations (which characterize the accuracy of the current result as the computation incrementally processes dynamically arriving inputs).

Sensor Selection: Sensor networks require low power, low cost sensors [32]. Accuracy-aware transformations may allow developers to specify a sensor network computation with idealized lossless sensors as the initial function nodes in the computation. An optimization algorithm can then select sensors that minimize power consumption or cost while still providing acceptable accuracy.

Data Representation Choices: Data representation choices can have dramatic consequences on the amount of resources (time, silicon area, power) required to manipulate that data [10]. Giving an optimization algorithm the freedom to adjust the accuracy (within specified bounds) may enable an informed automatic selection of less accurate but more appropriate data representations. For example, a compiler may automatically replace an expensive floating point representation with a more efficient but less accurate fixed point representation. We anticipate the application of this technology in both standard compilers for microprocessors as well as hardware synthesis systems.

Dynamic Adaptation In Large Data Centers: The amount of computing power that a large data center is able to deliver to individual hosted computations can vary dynamically depending on factors such as load, available power, and the operating temperature within the data center (a rise in temperature may force reductions in power consumption via clock rate drops). By delivering computations that can operate at multiple points in the underlying accuracy-resource consumption tradeoff space, accuracy-aware transformations open up new strategies for adapting to fluctuations. For example, a data center may respond to load or temperature spikes by running applications at less accurate but more efficient operating points [2].

Successful Use of Mostly Correct Components: Many faulty components operate correctly for almost all inputs. By perturbing inputs and computations with small amounts of random noise, it is possible to ensure that, with very high probability, no two executions of the computation operate on the same values. Given a way to check if a fault occurred during the execution, it is possible to rerun the computation until all components happen to operate on values that elicit no faults. Understanding the accuracy consequences of these perturbations can make it possible to employ this approach successfully.

The scope of traditional program transformations has been largely confined to standard compiler optimizations.
As the above examples illustrate, appropriately ambitious accuracy-aware transformations that exploit the opportunity to manipulate accuracy within specified bounds can dramatically increase the impact and relevance of the field of program analysis and transformation.

1.5 Contributions

This paper makes the following contributions:

Model of Computation: We present a model of computation for approximate computations. This model supports arbitrary compositions of individual function nodes into computation nodes, and of computation nodes and reduction nodes into computation trees. This model exposes enough computational structure to enable approximate optimization via our two transformations.

Accuracy-Aware Transformations: We consider two classes of accuracy-aware transformations: function substitutions and reduction sampling. Together, these transformations induce a space of transformed programs that provide different combinations of accuracy and resource consumption.

Tradeoff Curves: We show how to use linear programming, dynamic programming, and a special bi-dimensional discretization technique to obtain a (1 + ε)-approximation to the underlying optimal accuracy-resource consumption tradeoff curve available via the accuracy-aware transformations. If the program contains no reduction nodes, the tradeoff curve is exact.

Optimization Algorithm: We present an optimization algorithm that uses the tradeoff curve to produce randomized programs that satisfy specified probabilistic accuracy and resource consumption constraints. In comparison with approaches that attempt to deliver a deterministic program, randomization enables our optimization algorithm to (1) deliver programs with better combinations of accuracy and resource consumption, and (2) avoid a variety of intractability issues.

(The last author would like to thank Pat Lincoln for an interesting discussion on this topic.)

Figure 1: A numerical integration program.
Accuracy Bounds: We show how to obtain statically guaranteed probabilistic accuracy bounds for a general class of approximate computations. The only previous static accuracy bounds for accuracy-aware transformations exploited the structure present in a set of computational patterns [6, 22, 23].

2. Example

We next present an example computation that numerically integrates a univariate function f(x) over a fixed interval [a, b]. The computation divides [a, b] into n equal-sized subintervals, each of length Δx = (b − a)/n. Let x = (x_1, ..., x_n), where x_i = a + (i − 1/2) Δx. The value of the numerical integral I is equal to

    I = Δx Σ_{i=1}^{n} f(x_i) = ((b − a)/n) Σ_{i=1}^{n} f(x_i).

Say, for instance, f(x) = x sin(log(x)) is the function that we want to integrate and [a, b] = [1, 11].

Our Model of Computation. As illustrated in Figure 1, in our model of computation, we have n input edges that carry the values of the x_i's into the computation and an additional edge that carries the value of b − a. For each x_i, a computation node calculates the value of (b − a) f(x_i). The output edges of these nodes are connected to a reduction node that computes the average of these values (we call such a node an averaging node), as the final integral I.

Program Transformations. The above numerical integration program presents multiple opportunities to trade end-to-end accuracy of the result I in return for increased performance. Specifically, we identify the following two transformations that may improve the performance:

Substitution. It is possible to substitute the original implementations of the sin(·) and log(·) functions that comprise f(x) with alternate implementations that may compute a less accurate output in less time.

Sampling. It is possible to discard some of the n inputs of the averaging node (and the computations that produce these inputs) by taking a random sample of s ≤ n inputs (here we call s the reduction factor). Roughly speaking, this transformation introduces an error proportional to 1/√s, but decreases the running time of the program proportionally to s/n.
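The integration example and its sampling transformation can be sketched as follows. This is illustrative only: the interval endpoints and sample sizes used here are assumptions, not values fixed by the paper.

```python
import math
import random

def f(x):
    # The example integrand: f(x) = x * sin(log(x))
    return x * math.sin(math.log(x))

def integrate(a, b, n):
    """Original computation: midpoint rule over all n subintervals."""
    xs = (a + (i - 0.5) * (b - a) / n for i in range(1, n + 1))
    return (b - a) * sum(map(f, xs)) / n

def integrate_sampled(a, b, n, s, rng=random):
    """Sampling transformation: evaluate f at only s of the n midpoints,
    chosen uniformly at random without replacement."""
    idx = rng.sample(range(1, n + 1), s)
    return (b - a) * sum(f(a + (i - 0.5) * (b - a) / n) for i in idx) / s
```

The sampled variant skips the computation nodes that would feed the discarded inputs, which is exactly where the speedup comes from.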
Tradeoff Space. In this numerical integration problem, a program configuration specifies which implementation to pick for each of the functions sin(·), log(·), and the product (in principle, although we do not

Configuration   Weight   x_log,0   x_log,1   x_log,2   x_sin,0   x_sin,1   x_sin,2   s/n     Error   Speedup
C_1             60.8%    0         0         1         0         0         1         1.000
C_2             39.2%    0         1         0         0         1         0         0.475

Table 1: The (1 + ε)-optimal randomized program configuration for Δ = 0.05 and ε = 0.01.

do so in this example). The configuration also specifies the reduction factor s for the averaging node. If we assume that we have two alternate implementations of sin(·) and log(·), each program configuration provides the following information: (1) x_u,i ∈ {0, 1}, indicating whether we choose the i-th implementation of the function u ∈ {log, sin}, for i ∈ {0, 1, 2}, and (2) s, indicating the reduction factor for the averaging node we choose. A randomized program configuration is a probabilistic choice over program configurations.

Function Specifications. We impose two basic requirements on the implementations of all functions that comprise f(x). The first requirement is that we have an error bound and time complexity specification for each implementation of each function. In this example we will use the following model: the original implementation of log(·) executes in time T_log,0 with error E_log,0 = 0; the original implementation of sin(·) executes in time T_sin,0 with error E_sin,0 = 0. We have two alternate implementations of log(·) and sin(·), where the i-th implementation of a given function u ∈ {log(·), sin(·)} runs in time T_u,i = i T_u,0 / 5, with error E_log,i = i · 0.008, and E_sin,i proportional to i (i ∈ {1, 2}).

The second requirement is that the error propagation of the entire computation is bounded by a linear function. This requirement is satisfied if the functions that comprise the computation are Lipschitz continuous.² In our example, the function sin(·) is 1-Lipschitz continuous, since its derivative is bounded by 1. The function log(x) is also Lipschitz continuous when x ≥ 1. Finally, the product function is Lipschitz continuous when the two inputs are bounded. We remark here that this second requirement ensures that an error introduced by an approximate implementation propagates to cause at most a linear change in the final output.
Finding the (1 + ε)-Optimal Program Configuration. Given performance and accuracy specifications for each function, we can run our optimization algorithm to (1 + ε)-approximately calculate the optimal accuracy-performance tradeoff curve. For each point on the curve our algorithm can also produce a randomized program configuration that achieves this tradeoff. Given a target expected error bound Δ, we use the tradeoff curve to find a randomized program configuration that executes in expected time τ. The (1 + ε)-approximation ensures that this expected running time τ is at most (1 + ε) times the optimal expected running time for the expected error bound Δ. In this example we use ε = 0.01, so that our optimized program will produce a 1.01-approximation. In addition, we define: the number of inputs n = 10,000, the overall expected error tolerance Δ = 0.05, and the running times T_sin,0 = 0.08 µs and T_log,0 = 0.07 µs.

For this example our optimization algorithm identifies the point (Δ, T_0/1.7) on the tradeoff curve, where T_0 is the running time of the original program. This indicates that the optimized program achieves a speedup of 1.7 over the original program while keeping the expected error below the bound Δ. Table 1 presents the randomized program configuration that achieves this tradeoff. This randomized program configuration consists of two program configurations, C_1 and C_2. Each configuration has an associated weight, which is the probability with which the randomized program will execute that configuration. The table also presents the error and speedup that each configuration produces. The configuration C_1 selects the less accurate approximate versions of the functions log(·) and sin(·), and uses all inputs to the averaging reduction node.

² A univariate function is α-Lipschitz continuous if for any δ > 0 it follows that |f(x) − f(x + δ)| < αδ. As a special case, a differentiable function is α-Lipschitz continuous if |f′(x)| ≤ α. This definition extends to multivariate functions.
The configuration C_2, on the other hand, selects more accurate approximate versions of the functions log(·) and sin(·), and at the same time samples 4750 of the 10,000 original inputs. Note that individually neither C_1 nor C_2 can achieve the desired tradeoff. The configuration C_1 produces a more accurate output but also executes significantly slower than the optimal program. The configuration C_2 executes much faster than the optimal program, but with expected error greater than the desired bound. The randomized program selects configuration C_1 with probability 60.8% and C_2 with probability 39.2%. The randomized program has expected error Δ and expected running time T_0/1.7.

We can use the same tradeoff curve to obtain a randomized program that minimizes the expected error subject to an execution time constraint τ. In our example, if the time bound τ = T_0/1.7, the optimization algorithm will produce the program configuration from Table 1 with expected error Δ. More generally, our optimization algorithm will produce an efficient representation of a probability distribution over program configurations along with an efficient procedure to sample this distribution to obtain a program configuration for each execution.

3. Model of Approximate Computation

We next define the graph model of computation, including the error-propagation constraints for function nodes, and present the accuracy-aware substitution and sampling transformations.

3.1 Definitions

Programs. In our model of computation, programs consist of a directed tree of computation nodes and reduction nodes. Each edge in the tree transmits a stream of values. The size of each edge indicates the number of transmitted values. The multiple values transmitted along an edge can often be understood as a stream of numbers with the same purpose: for example, a million pixels from an image or a thousand samples from a sensor. Figure 2 presents an example of a program under our definition.

Reduction Nodes. Each reduction node has a single input edge and a single output edge.
It reduces the size of its input by some multiplicative factor, which we call its reduction factor. A node with reduction factor S has an input edge of size R · S and an output edge of size R. The node divides the R · S inputs into R blocks of size S. It produces R outputs by applying an S-to-1 aggregation function (such as min, max, or mean) to each of the R blocks. For clarity of exposition and to avoid a proliferation of notation, we primarily focus on one specific type of reduction node, which we call an averaging node. An averaging node with reduction factor S will output the average of the first S values as the first output, the average of the next S values as the second output, and so on. The techniques that we present are quite general and apply to any reduction operation that can be approximated well by sampling. Section 8 describes how to extend our algorithm to work with other reduction operations.
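An averaging node and its sampled counterpart can be sketched directly from this description (the function names are illustrative):

```python
import random

def averaging_node(stream, S):
    """Exact averaging node with reduction factor S: maps an input
    stream of size R*S to R block averages."""
    assert len(stream) % S == 0
    return [sum(stream[i:i + S]) / S for i in range(0, len(stream), S)]

def sampled_averaging_node(stream, S, s, rng=random):
    """Sampled averaging node: for each block of S values, average only
    s values drawn uniformly at random without replacement."""
    assert len(stream) % S == 0 and 1 <= s <= S
    out = []
    for i in range(0, len(stream), S):
        block = stream[i:i + S]
        out.append(sum(rng.sample(block, s)) / s)
    return out
```

With s = S the sampled node reproduces the exact node; decreasing s trades accuracy for the cost of producing the discarded inputs.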

Figure 2: An example program in our model of computation.

Figure 3: (a)(b): A closer look at two computation nodes, and (c): numerical integration example, revisited.

Computation Nodes. A computation node has potentially multiple input edges and a single output edge. A computation node of size R has: a single output edge of size R; and a non-negative number of input edges, each of size either 1 (which we call a control-input edge) or some multiple t · R of R (which we call a data-input edge). Each control-input edge carries a single global constant. Data-input edges carry a stream of values, which the computation node partitions into R chunks. The computation node executes R times to produce R outputs, with each execution processing the value from each control-input edge and a block of t values from each data-input edge. The executions are independent. For example, consider a computation node of size 10 with two input edges: one data-input edge of size 1000, denoted by (a_1, a_2, ..., a_1000), and one control-input edge of size 1, denoted by b. Then, the function that outputs the vector

    ( Σ_{i=1}^{100} sin(a_i, b),  Σ_{i=101}^{200} sin(a_i, b),  ...,  Σ_{i=901}^{1000} sin(a_i, b) )     (1)

is a computation node. We remark here that a reduction node is a special kind of computation node. We treat computation and reduction nodes separately because we optimize computation nodes with substitution transformations and reduction nodes with sampling transformations (see Section 3.2).

Inner Structure of Computation Nodes. A computation node can be further decomposed into one or more function nodes, connected via a directed acyclic graph (DAG). Like computation nodes, each function node has potentially multiple input edges and a single output edge. The size of each input edge is either 1 or a multiple of the size of the output edge. The functions can be of arbitrary complexity and can contain language constructs such as conditional statements and loops.
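The size-10 computation node above can be sketched as follows. Purely for illustration, the two-argument sin(a_i, b) in the text is modeled here as sin(a_i · b); the chunking logic is the point of the sketch.

```python
import math

def computation_node(data, b, R):
    """A computation node of size R with one data-input edge (size t*R)
    and one control-input edge carrying the constant b. Each of the R
    independent executions consumes one chunk of t data values."""
    t = len(data) // R  # chunk size per execution
    return [sum(math.sin(a * b) for a in data[k * t:(k + 1) * t])
            for k in range(R)]
```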
For example, the computation node in Eq. (1) can be further decomposed as shown in Figure 3a. Although we require the computation nodes and edges in each program to form a tree, the function nodes and edges in each computation node can form a DAG (see, for example, Figure 3b). In principle, any computation node can be represented as a single function node, but its decomposition into multiple function nodes allows for finer granularity and more transformation choices when optimizing the entire program.

Example. Section 2 presented a numerical integration program example. Figure 3c presents this example in our model of computation (compare with Figure 1). Note that the multiplicity of computation nodes in Figure 1 corresponds to the edge sizes in Figure 3c. The log function node with input and output edges of size n runs n times. Each run consumes a single input and produces a single output. The product function node with input edges of size n and 1 runs n times. Each execution produces as output the product of an x_i with the common value b − a from the control edge.

3.2 Transformations

In a program configuration, we specify the following two kinds of transformations at function and reduction nodes.

Substitution. For each function node f_u of size R, we have a polynomial number of implementations f_u,1, ..., f_u,k. The function runs R times. We require each implementation to have the following properties:

each run of f_u,i is in expected time T_u,i, giving a total expected running time of R · T_u,i, and

each run of f_u,i produces an expected absolute additive error of at most E_u,i, i.e., for all x, E[|f_u(x) − f_u,i(x)|] ≤ E_u,i. (The expectation is over the randomness of f_u,i and f_u.)

We assume that all (T_u,i, E_u,i) pairs are known in advance (they are constants or depend only on control inputs).

Sampling. For each reduction node r with reduction factor S_r, we can decrease this factor S_r to a smaller factor s_r ∈ {1, ..., S_r} at the expense of introducing some additive sampling error E_r(s_r).
For example, for an averaging node, instead of averaging all S_r inputs, we would randomly select s_r inputs (without replacement) and output the average of the chosen samples. For convenience, we denote the sampling rate of node r as η_r = s_r / S_r. If the output edge is of size R, the computation selects s_r · R inputs, instead of all S_r · R inputs. The values for the reduction node inputs which are not selected need not be computed. Discarding the computations that would otherwise produce these discarded inputs produces a speed-up factor of η_r = s_r / S_r for all nodes above r in the computation tree. The following lemma provides a bound on the sampling error

    E_r(s_r) = (B − A) √( (S_r − s_r) / (s_r (S_r − 1)) )

for an averaging node. The proof is available in the full version of the paper.
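The sampling-error bound above is easy to check empirically. The following sketch compares the mean absolute error of without-replacement sample averages against the bound; the data and sample sizes are illustrative.

```python
import random

def sampling_error_bound(A, B, m, s):
    """The averaging-node bound: (B - A) * sqrt((m - s) / (s * (m - 1)))."""
    return (B - A) * ((m - s) / (s * (m - 1))) ** 0.5

def mean_sample_error(xs, s, trials, rng):
    """Monte Carlo estimate of E|sample average - true average| when
    sampling s of the values in xs without replacement."""
    m = len(xs)
    true_mean = sum(xs) / m
    total = 0.0
    for _ in range(trials):
        total += abs(sum(rng.sample(xs, s)) / s - true_mean)
    return total / trials
```

Note that the bound vanishes at s = m, where sampling degenerates to exact averaging.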

Lemma 3.1. Given numbers x_1, x_2, ..., x_m ∈ [A, B], randomly sampling s of the numbers x_1, ..., x_m (without replacement) and computing the sample average gives an approximation to (x_1 + ... + x_m)/m with the following expected error guarantee:

    E_{i_1,...,i_s} [ | (x_{i_1} + ... + x_{i_s})/s − (x_1 + ... + x_m)/m | ] ≤ (B − A) √( (m − s) / (s (m − 1)) ).

3.3 Error Propagation

The errors that the transformations induce in one part of the computation propagate through the rest of the computation and can be amplified or attenuated in the process. We next provide constraints on the form of functions that characterize this error propagation. These constraints hold for all functions in our model of computation (regardless of whether they have alternate implementations or not). We assume that for each function node f_u(x_1, ..., x_m) with m inputs, if each input x_j is replaced by some approximate input x̂_j such that E[|x_j − x̂_j|] ≤ δ_j, the propagation error is bounded by a linear error propagation function E_u:

    E[ |f_u(x_1, ..., x_m) − f_u(x̂_1, ..., x̂_m)| ] ≤ E_u(δ_1, ..., δ_m).     (2)

We assume that all of the error propagation functions E_u for the functions f_u are known a priori:

    E[ |f_u(x_1, ..., x_m) − f_u(x̂_1, ..., x̂_m)| ] ≤ Σ_{j=1}^{m} α_j δ_j.     (3)

This condition is satisfied if all functions f_u are Lipschitz-continuous with parameters α_j. Furthermore, if f_u(x_1, ..., x_m) is differentiable, we can let α_i = max_x |∂f_u(x_1, ..., x_m)/∂x_i|. If f_u is itself probabilistic, we can take the expected value of such α_i's.

Substitute Implementations. For functions with multiple implementations, the overall error when we choose the i-th implementation f_u,i is bounded by an error propagation function E_u and the local error induced by the i-th implementation E_u,i (defined in the previous subsection):

    E[ |f_u(x_1, ..., x_m) − f_u,i(x̂_1, ..., x̂_m)| ] ≤ E_u(δ_1, ..., δ_m) + E_u,i.     (4)

This bound follows immediately from the triangle inequality. We remark here that the randomness for the expectation in Eq. (4) comes from (1) the randomness of its inputs x̂_1, ..., x̂_m (caused by errors from previous parts of the computation) and (2) random choices in the possibly probabilistic implementation f_u,i.
These two sources of randomness are mutually independent.

Averaging Reduction Node. The averaging function is a Lipschitz-continuous function with all α_i = 1/m, so in addition to Lemma 3.1 we have:

Corollary 3.2. Consider an averaging node that selects s random samples from its m inputs, where each input x̂_j has bounded error E[|x̂_j − x_j|] ≤ δ_j. Then:

E_{i_1,...,i_s, x̂_1,...,x̂_m} [ | (x̂_{i_1} + ... + x̂_{i_s})/s − (x_1 + ... + x_m)/m | ] ≤ (1/m) Σ_{j=1}^{m} δ_j + (B − A) √((m − s)/(s(m − 1))).

If all input values have the same error bound E[|x̂_j − x_j|] ≤ δ, then (1/m) Σ_{j=1}^{m} δ_j = δ.

4. Approximation Questions

We focus on the following question:

Question 1. Given a program P in our model of computation and using randomized configurations, what is the optimal error-time tradeoff curve that our approximate computations induce?

Here the time and error refer to the expected running time and error of the program. We say that the expected error of program P′ is ε if, for all inputs x, E[|P′(x) − P(x)|] ≤ ε. The error-time tradeoff curve is a pair of functions (E(·), T(·)), such that E(t) is the optimal expected error of the program if the expected running time is no more than t, and T(e) is the optimal expected running time of the program if the expected error is no more than e.

The substitution and sampling transformations give rise to an exponentially large space of possible program configurations. We optimize over arbitrary probability distributions of such configurations. A naive optimization algorithm would therefore run in time at least exponential in the size of the program. We present an algorithm that approximately solves Question 1 within a factor of (1 + ε) in time:³ 1) polynomial in the size of the computation graph, and 2) polynomial in 1/ε. The algorithm uses linear programming and a novel technique called bi-dimensional discretization, which we present in Section 5.

A successful answer to the above question leads directly to the following additional consequences:

Consequence 1: Optimizing Time Subject to Error

Question 2.
Given a program P in our model, and an overall error tolerance ε, what is the optimal (possibly randomized) program P′ available within our space of transformations, with expected error no more than ε?

We can answer this question approximately using the optimization algorithm for Question 1. This algorithm will produce a randomized program with expected running time no more than (1 + ε) times the optimal running time and expected error no more than ε. The algorithm can also answer the symmetric question to find a (1 + ε)-approximation of the optimal program that minimizes the expected error given a bound on the expected running time.

Consequence 2: From Error to Variance

We say that the overall variance (i.e., expected squared error) of a randomized program P′ is σ², if for all inputs x, E[|P′(x) − P(x)|²] ≤ σ². A variant of our algorithm for Question 1 (1 + ε)-approximately answers the following questions:

Question 3. Given a program P in our model of computation, what is the optimal error-variance tradeoff curve that our approximate computations induce?

Question 4. Given a program P in our model, and an overall variance tolerance σ², what is the optimal (possibly randomized) program P′ available within our space of transformations, with variance no more than σ²?

Section 7 presents the algorithm for these questions.

Consequence 3: Probabilities of Large Errors

A bound on the expected error or variance also provides a bound on the probability of observing large errors. In particular, an execution

³ We say that we approximately obtain the curve within a factor of (1 + ε) if, for any given running time t, the difference between the optimal error E(t) and our Ê(t) is at most ε·E(t), and similarly for the time function T(e). Our algorithm is a fully polynomial-time approximation scheme (FPTAS). Section 5 presents a more precise definition in which the error function Ê(t) is also subject to an additive error of some arbitrarily small constant.

of a program with expected error ε will produce an absolute error greater than t with probability at most ε/t (this bound follows immediately from Markov's inequality). Similarly, an execution of a program with variance σ² will produce an absolute error greater than t with probability at most σ²/t².

5. Optimization Algorithm for Question 1

We next describe a recursive, dynamic programming optimization algorithm which exploits the tree structure of the program. To compute the approximate optimal tradeoff curve for the entire program, the algorithm computes and combines the approximate optimal tradeoff curves for the subprograms. We stage the presentation as follows:

Computation Nodes Only: In Section 5.1, we show how to compute the optimal tradeoff curve exactly when the computation consists only of computation nodes and has no reduction nodes. We reduce the optimization problem to a linear program (which is efficiently solvable).

Bi-dimensional Discretization: In Section 5.2, we introduce our bi-dimensional discretization technique, which constructs a piecewise-linear discretization of any tradeoff curve (E(·), T(·)), such that 1) there are only O(1/ε) segments on the discretized curve, and 2) at the same time the discretization approximates (E(·), T(·)) to within a multiplicative factor of (1 + ε).

A Single Reduction Node: In Section 5.3, we show how to compute the approximate tradeoff curve when the given program consists of computation nodes that produce the input for a single reduction node r (see Figure 6). We first work with the curve when the reduction factor s at the reduction node r is constrained to be a single integer value. Given an expected error tolerance e for the entire computation, each randomized configuration in the optimal randomized program allocates part of the expected error E_r(s) to the sampling transformation on the reduction node and the remaining expected error e_sub = e − E_r(s) to the substitution transformations on the subprogram with only computation nodes.
One inefficient way to find the optimal randomized configuration for a given expected error e is to simply search all possible integer values of s to find the optimal allocation that minimizes the running time. This approach is inefficient because the number of choices of s may be large. We therefore discretize the tradeoff curve for the input to the reduction node into a small set of linear pieces. It is straightforward to compute the optimal integer value of s within each linear piece. In this way we obtain an approximate optimal tradeoff curve for the output of the reduction node when the reduction factor s is constrained to be a single integer.

We next use this curve to derive an approximate optimal tradeoff curve when the reduction factor s can be determined by a probabilistic choice among multiple integer values. Ideally, we would use the convex envelope of the original curve to obtain this new curve. But because the original curve has an infinite number of points, it is infeasible to work with this convex envelope directly. We therefore perform another discretization to obtain a piecewise-linear curve that we can represent with a small number of points. We work with the convex envelope of this new discretized curve to obtain the final approximation to the optimal tradeoff curve for the output of the reduction node r. This curve incorporates the effect of both the substitution transformations on the computation nodes and the sampling transformation on the reduction node.

Figure 4: Example to illustrate the computation of time and error.

The Final Dynamic Programming Algorithm: In Section 5.4, we provide an algorithm that computes an approximate error-time tradeoff curve for an arbitrary program in our model of computation. Each step uses the algorithm from Section 5.3 to compute the approximate discretized tradeoff curve for a subtree rooted at a topmost reduction node (this subtree includes the computation nodes that produce the input to the reduction node).
It then uses this tradeoff curve to replace this subtree with a single function node. It then recursively applies the algorithm to the new program, terminating when it computes the approximate discretized tradeoff curve for the output of the final node in the program.

5.1 Stage 1: Computation Nodes Only

We start with a base case in which the program consists only of computation nodes with no reduction nodes. We show how to use linear programming to compute the optimal error-time tradeoff curve for this case.

Variables x. For each function node f_u, the variable x_{u,i} ∈ [0, 1] indicates the probability of running the i-th implementation f_{u,i}. We also have the constraint that Σ_i x_{u,i} = 1.

Running Time TIME(x). Since there are no reduction nodes in the program, each function node f_u will run R_u times (recall that R_u is the number of values carried on the output edge of f_u). The running time is simply the weighted sum of the running times of the function nodes (where each weight is the probability of selecting each corresponding implementation):

TIME(x) = Σ_u Σ_i (x_{u,i} · T_{u,i} · R_u).   (5)

Here the summation Σ_u is over all function nodes and Σ_i is over all implementations of f_u.

Total Error ERROR(x). The total error of the program also admits a linear form. For each function node f_u, the i-th implementation f_{u,i} incurs a local error E_{u,i} on each output value. By the linear error propagation assumption, this E_{u,i} is amplified by a constant factor β_u which depends on the program structure. It is possible to compute the β_u with a traversal of the program backward against the flow of values.

Consider, for example, β_1 for function node f_1 in the program in Figure 4. Let α_2 be the linear error propagation factor for the univariate function f_2(·). The function f_3(·,·,·) is trivariate with 3 propagation factors (α_{3,1}, α_{3,2}, α_{3,3}). We similarly define (α_{4,1}, ..., α_{4,4}) for the quadvariate function f_4, and (α_{5,1}, α_{5,2}, α_{5,3}) for f_5.
Any error in an output value of f_1 will be amplified by a factor β_1:

β_1 = (α_2 (α_{4,1} + α_{4,2} + α_{4,3}) + (α_{3,1} + α_{3,2} + α_{3,3}) α_{4,4}) (α_{5,1} + α_{5,2}).
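The backward traversal that computes the amplification factors β_u can be sketched as follows; the graph encoding (a map from each node to its consumers, paired with the propagation factor α of the consumed input port) is our own illustration, not the paper's representation:

```python
def amplification_factors(consumers, output_node):
    """Compute beta_u for every node by traversing the program backward
    against the flow of values. `consumers[u]` is a list of (v, alpha)
    pairs: node v consumes u's output with propagation factor alpha.
    beta of the output node is 1. Assumes an acyclic program graph."""
    beta = {}

    def visit(u):
        if u in beta:
            return beta[u]
        if u == output_node:
            beta[u] = 1.0
        else:
            # an error of delta at u becomes alpha * delta at each consumer v,
            # which the rest of the program then amplifies by beta_v
            beta[u] = sum(alpha * visit(v) for v, alpha in consumers[u])
        return beta[u]

    for u in consumers:
        visit(u)
    return beta

# A chain f1 -> f2 -> f3 (output): beta_3 = 1, beta_2 = alpha_3 = 0.5,
# beta_1 = alpha_2 * alpha_3 = 2.0 * 0.5 = 1.0.
consumers = {"f1": [("f2", 2.0)], "f2": [("f3", 0.5)], "f3": []}
betas = amplification_factors(consumers, "f3")
```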

The total expected error of the program is:

ERROR(x) = Σ_u Σ_i (x_{u,i} · E_{u,i} · β_u).   (6)

Optimization. Given a fixed overall error tolerance ε, the following linear program defines the minimum expected running time:

  Variables: x
  Constraints: 0 ≤ x_{u,i} ≤ 1, ∀u, i
               Σ_i x_{u,i} = 1, ∀u
               ERROR(x) ≤ ε
  Minimize: TIME(x)   (7)

By swapping the roles of ERROR(x) and TIME(x), it is possible to obtain a linear program that defines the minimum expected error for a given maximum expected running time.

Figure 5: An example of bi-dimensional discretization.

5.2 Error-Time Tradeoff Curves

In the previous section, we used linear programming to obtain the optimal error-time tradeoff curve. Since there are an infinite number of points on this curve, we define the curve in terms of functions. To avoid unnecessary complication when doing inversions, we define the curve using two related functions E(·) and T(·):

Definition 5.1. The (error-time) tradeoff curve of a program is a pair of functions (E(·), T(·)) such that E(t) is the optimal expected error of the program if the expected running time is no more than t and T(e) is the optimal expected running time of the program if the expected error is no more than e. We say that a tradeoff curve is efficiently computable if both functions E and T are efficiently computable.⁴

The following property is important to keep in mind:

Lemma 5.2. In a tradeoff curve (E, T), both E and T are non-increasing convex functions.

Proof. T is always non-increasing because when the allowed error increases the minimum running time does not increase, and similarly for E. We prove convexity by contradiction: assume αE(t_1) + (1 − α)E(t_2) < E(αt_1 + (1 − α)t_2) for some α ∈ (0, 1). Then choose the optimal program for E(t_1) with probability α, and the optimal program for E(t_2) with probability 1 − α. The result is a new program P′ in our probabilistic transformation space. This new program P′ has expected running time at most αt_1 + (1 − α)t_2 and expected error less than E(αt_1 + (1 − α)t_2), contradicting the optimality of E. A similar proof establishes the convexity of T.
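The linear program (7) can be handed to any off-the-shelf LP solver. Below is a sketch on a toy instance of our own (two function nodes with two implementations each; `scipy.optimize.linprog` as the solver), with the costs T_{u,i}·R_u and error weights E_{u,i}·β_u folded into coefficient vectors:

```python
from scipy.optimize import linprog

# Two function nodes, two implementations each.
# Variable order: x = [x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}].
time_cost = [10.0, 1.0, 4.0, 2.0]   # T_{u,i} * R_u
err_weight = [0.0, 0.5, 0.0, 0.2]   # E_{u,i} * beta_u
eps_tol = 0.3                        # overall error tolerance

res = linprog(
    c=time_cost,                             # minimize TIME(x)
    A_ub=[err_weight], b_ub=[eps_tol],       # ERROR(x) <= eps_tol
    A_eq=[[1, 1, 0, 0], [0, 0, 1, 1]],       # sum_i x_{u,i} = 1 for each u
    b_eq=[1, 1],
    bounds=[(0, 1)] * 4,
    method="highs",
)
```

On this instance the optimum runs node 2 exactly and randomizes node 1 (x_{1,2} = 0.6), spending the entire error budget on the implementation pair with the best time-saved-per-unit-error ratio; the resulting expected time is 8.6.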
We remark here that, given a running time t, one can compute E(t) and be sure that (E(t), t) is on the curve; but one cannot write down all of the infinite number of points on the curve concisely. We therefore introduce a bi-dimensional discretization technique that allows us to approximate (E, T) within a factor of (1 + ε). This technique uses a piecewise-linear function with roughly O(1/ε) segments to approximate the curve.

Our bi-dimensional discretization technique (see Figure 5) approximates E in the bounded range [0, E_max], where E_max is an upper bound on the expected error, and approximates T in the bounded range [T(E_max), T(0)]. We assume that we are given the maximum acceptable error E_max (for example, by a user of the program). It is also possible to conservatively compute an E_max by analyzing the least-accurate possible execution of the program.

⁴ In the remainder of the paper we refer to the function E(·) simply as E and to the function T(·) as T.

Definition 5.3. Given a tradeoff curve (E, T) where E and T are both non-increasing, along with constants ε ∈ (0, 1) and E₀ > 0, we define the (ε, E₀)-discretization curve of (E, T) to be the piecewise-linear curve defined by the following set of endpoints (see Figure 5): the two black points (0, T(0)) and (E_max, T(E_max)); the red points (e_i, T(e_i)), where e_i = E₀(1 + ε)^i for some i ≥ 0 and E₀(1 + ε)^i < E_max; and the blue points (E(t_i), t_i), where t_i = T(E_max)(1 + ε)^i for some i ≥ 1 and T(E_max)(1 + ε)^i < T(0).

Note that there is some asymmetry in the discretization of the two axes. For the vertical time axis we know that the minimum running time of a program is T(E_max) > 0, which is always greater than zero since a program always runs in a positive amount of time. However, we discretize the horizontal error axis proportionally to powers of (1 + ε) only for values above E₀. This is because the error of a program can indeed reach zero, and we cannot discretize forever.⁵

The following claim follows immediately from the definition:

Claim 5.4.
If the original curve (E, T) is non-increasing and convex, the discretized curve (Ê, T̂) is also non-increasing and convex.

5.2.1 Accuracy of bi-dimensional discretization

We next define notation for the bi-dimensional tradeoff curve discretization:

Definition 5.5. A curve (Ê, T̂) is an (ε, E₀)-approximation to (E, T) if for any error 0 ≤ e ≤ E_max,

0 ≤ T̂(e) − T(e) ≤ ε·T(e),

and for any running time T(E_max) ≤ t ≤ T(0),

0 ≤ Ê(t) − E(t) ≤ ε·E(t) + E₀.

We say that such an approximation has a multiplicative error of ε and an additive error of E₀.

Lemma 5.6. If (Ê, T̂) is an (ε, E₀)-discretization of (E, T), then it is an (ε, E₀)-approximation of (E, T).

Proof Sketch. The idea of the proof is that, since we have discretized the vertical time axis in an exponential manner, if we compute T̂(e) for any value e, the result does not differ from T(e) by

⁵ If instead we know that the minimum expected error is greater than zero (i.e., E(T_max) > 0) for some maximum possible running time T_max, then we can define E₀ = E(T_max) and treat the horizontal axis just like the vertical one.
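Definition 5.3 translates almost directly into code. In this sketch (our own naming; `E0` stands for the constant E in the definition, and the curve is supplied as a pair of callables assumed to be exact inverses of each other):

```python
def discretization_points(E, T, E_max, eps, E0):
    """Endpoints of the (eps, E0)-discretization of a non-increasing
    tradeoff curve given by callables E (time -> error) and
    T (error -> time). Returns (error, time) pairs sorted by error."""
    pts = {(0.0, T(0.0)), (E_max, T(E_max))}   # the two black points
    e = E0                                      # red points: e_i = E0*(1+eps)^i
    while e < E_max:
        pts.add((e, T(e)))
        e *= 1 + eps
    t = T(E_max) * (1 + eps)                    # blue points: t_i = T(E_max)*(1+eps)^i
    while t < T(0.0):
        pts.add((E(t), t))
        t *= 1 + eps
    return sorted(pts)

# Example convex tradeoff curve: T(e) = 1/(e + 0.1) with inverse
# E(t) = 1/t - 0.1, discretized over errors in [0, 1].
T = lambda e: 1.0 / (e + 0.1)
E = lambda t: 1.0 / t - 0.1
pts = discretization_points(E, T, E_max=1.0, eps=0.5, E0=0.01)
```

Even with the coarse choice eps = 0.5, the curve is captured by a handful of points whose count grows like log(E_max/E0)/log(1 + eps), as the complexity bound below quantifies.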

Figure 6: Algorithm for Stage 2: starting from the tradeoff curve for each input value to r, discretize it, solve a univariate optimization problem to obtain the exact curve for a single choice of s, discretize the result again, and compute its convex envelope to obtain the curve for a probabilistic choice of s; each step is a (·, ·)-approximation of its predecessor.

more than a factor of (1 + ε). Similarly, since we have discretized the horizontal axis in an exponential manner, if we compute Ê(t) for any value t, the result does not differ by more than a factor of (1 + ε), except when E(t) is smaller than E₀ (when we stop the discretization). But even in that case the value Ê(t) − E(t) remains smaller than E₀. Because every point on the new piecewise-linear curve (Ê, T̂) is a convex combination of points on the original curve (E, T), and because (E, T) is convex (recall Lemma 5.2), the approximation always lies above the original curve, so 0 ≤ T̂(e) − T(e) and 0 ≤ Ê(t) − E(t).

5.2.2 Complexity of bi-dimensional discretization

The number of segments that the approximate tradeoff curve has in an (ε, E₀)-discretization is at most

n_p := 2 + ⌈log(E_max/E₀)/log(1 + ε)⌉ + ⌈log(T(0)/T(E_max))/log(1 + ε)⌉ ≤ O((log(E_max/E₀) + log(T_max/T_min)) · 1/ε),   (8)

where T_min is a lower bound on the expected execution time and T_max is an upper bound on the expected execution time. Our discretization algorithm only needs to know E_max in advance, while T_max and T_min are values that we will need later in the complexity analysis.

5.2.3 Discretization on an approximate curve

The above analysis does not rely on the fact that the original tradeoff curve (E, T) is exact. In fact, if the original curve (E, T) is only an (ε, E₀)-approximation to the exact error-time tradeoff curve, and if (Ê, T̂) is the (ε, E₀)-discretization of (E, T), then one can verify by the triangle inequality that (Ê, T̂) is a piecewise-linear curve that is a (2ε + ε², 2E₀)-approximation of the exact error-time tradeoff curve.
5.3 Stage 2: A Single Reduction Node

We now consider a program with exactly one reduction node r, with original reduction factor S, at the end of the computation. The example in Figure 3c is such a program. We describe our optimization algorithm for this case step by step, as illustrated in Figure 6.

We first define the error-time tradeoff curve for the subprogram without the reduction node r to be (E_sub, T_sub) (Section 5.1 describes how to compute this curve; Lemma 5.2 ensures that it is non-increasing and convex). In other words, for every input value to the reduction node r, if the allowed running time for computing this value is t, then the optimal expected error is E_sub(t), and similarly for T_sub(e). Note that when computing (E_sub, T_sub) as described in Section 5.1, the size of the output edge R_i for each node i must be divided by S, as the curve (E_sub, T_sub) characterizes each single input value to the reduction node r.

If at reduction node r we choose an actual reduction factor s ∈ {1, 2, ..., S}, the total running time and error of this entire program are:⁶

TIME = T_sub · s,  ERROR = E_sub + E_r(s).   (9)

This is because, to obtain s values on the input to r, we need to run the subprogram s times with a total time T_sub · s; and by Corollary 3.2, the total error of the output of an averaging reduction node is simply the sum of its input error E_sub and a local error E_r(s) incurred by the sampling.⁷

Let (E′, T′) be the exact tradeoff curve of the entire program, assuming that we can choose only a single value of s. We start by describing how to compute this (E′, T′) approximately.

5.3.1 Approximating (E′, T′): single choice of s

By definition, we can write (E′, T′) in terms of the following two optimization problems:

T′(e) = min_{s ∈ {1,...,S}, e_sub + E_r(s) = e} T_sub(e_sub) · s

and

E′(t) = min_{s ∈ {1,...,S}, t_sub · s = t} E_sub(t_sub) + E_r(s),

where the first optimization is over the variables s and e_sub, and the second optimization is over the variables s and t_sub.
We emphasize here that this curve (E′, T′) is by definition non-increasing (because (E_sub, T_sub) is non-increasing), but may not be convex. Because these optimization problems may not be convex, they may be difficult to solve in general. But thanks to the piecewise-linear discretization defined in Section 5.2, we can approximately solve these optimization problems efficiently. Specifically, we produce a bi-dimensional discretization (Ê_sub, T̂_sub) that (ε, E₀)-approximates (E_sub, T_sub) (as illustrated in Figure 6). We then solve the following two optimization problems:

T̄(e) = min_{s ∈ {1,...,S}, e_sub + E_r(s) = e} T̂_sub(e_sub) · s

and

Ē(t) = min_{s ∈ {1,...,S}, t_sub · s = t} Ê_sub(t_sub) + E_r(s).   (10)

We remark here that Ē and T̄ are both non-increasing, since Ê_sub and T̂_sub are non-increasing by Claim 5.4.

Each of these two problems can be solved by 1) computing the optimal value within each linear segment defined by (Ê_sub,k, T̂_sub,k) and (Ê_sub,k+1, T̂_sub,k+1), and 2) returning the smallest optimal value across all linear segments. Suppose that we are computing T̄(e) given an error e. In the linear piece T̂_sub = a·e_sub + b (here a and b are the slope and intercept of the linear segment), we have e_sub = e − E_r(s). The objective that we are minimizing therefore becomes univariate with respect to s:

T̂_sub · s = (a·e_sub + b)·s = (a·(e − E_r(s)) + b)·s.   (11)

⁶ Here we have ignored the running time for the sampling procedure in the reduction node, as it is often negligible in comparison to other computations in the program. It is possible to add this sampling time to the formula for TIME in a straightforward manner.

⁷ We extend this analysis to other types of reduction nodes in Section 8.
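Within a single linear piece T̂_sub = a·e_sub + b, the univariate objective above can be minimized over the integer reduction factors s. The paper computes this optimum within each piece directly; the brute-force scan below (our own example numbers; E_r is the averaging-node sampling-error bound) is the simplest way to illustrate the objective:

```python
import math

def sampling_error(s, S, A, B):
    """E_r(s): sampling-error bound for an averaging node with S inputs
    in [A, B] (cf. Lemma 3.1)."""
    if s == S:
        return 0.0
    return (B - A) * math.sqrt((S - s) / (s * (S - 1)))

def best_s_in_piece(a, b, e, S, A, B):
    """Minimize (a*(e - E_r(s)) + b) * s over integer s in {1, ..., S},
    restricted to s for which the subprogram's error budget
    e_sub = e - E_r(s) is non-negative. Returns (best_s, best_time)."""
    best = None
    for s in range(1, S + 1):
        e_sub = e - sampling_error(s, S, A, B)
        if e_sub < 0:
            continue  # this s consumes more than the whole error budget
        t = (a * e_sub + b) * s
        if best is None or t < best[1]:
            best = (s, t)
    return best

# Example piece T_sub = -e_sub + 1 (slope a = -1, intercept b = 1),
# total error budget e = 0.5, averaging over S = 100 values in [0, 1].
s_star, t_star = best_s_in_piece(-1.0, 1.0, 0.5, 100, 0.0, 1.0)
```

Here small s is cheap per sample but pays a large sampling error E_r(s), while large s multiplies the subprogram's running time; the scan lands on the smallest feasible s, which balances the two on this piece.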


More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Handout 7. and Pr [M(x) = χ L (x) M(x) =? ] = 1.

Handout 7. and Pr [M(x) = χ L (x) M(x) =? ] = 1. Notes on Coplexity Theory Last updated: October, 2005 Jonathan Katz Handout 7 1 More on Randoized Coplexity Classes Reinder: so far we have seen RP,coRP, and BPP. We introduce two ore tie-bounded randoized

More information

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Tight Complexity Bounds for Optimizing Composite Objectives

Tight Complexity Bounds for Optimizing Composite Objectives Tight Coplexity Bounds for Optiizing Coposite Objectives Blake Woodworth Toyota Technological Institute at Chicago Chicago, IL, 60637 blake@ttic.edu Nathan Srebro Toyota Technological Institute at Chicago

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies Approxiation in Stochastic Scheduling: The Power of -Based Priority Policies Rolf Möhring, Andreas Schulz, Marc Uetz Setting (A P p stoch, r E( w and (B P p stoch E( w We will assue that the processing

More information

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40 On Poset Merging Peter Chen Guoli Ding Steve Seiden Abstract We consider the follow poset erging proble: Let X and Y be two subsets of a partially ordered set S. Given coplete inforation about the ordering

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

Convex Programming for Scheduling Unrelated Parallel Machines

Convex Programming for Scheduling Unrelated Parallel Machines Convex Prograing for Scheduling Unrelated Parallel Machines Yossi Azar Air Epstein Abstract We consider the classical proble of scheduling parallel unrelated achines. Each job is to be processed by exactly

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

Generalized Queries on Probabilistic Context-Free Grammars

Generalized Queries on Probabilistic Context-Free Grammars IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 20, NO. 1, JANUARY 1998 1 Generalized Queries on Probabilistic Context-Free Graars David V. Pynadath and Michael P. Wellan Abstract

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors

New Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors New Slack-Monotonic Schedulability Analysis of Real-Tie Tasks on Multiprocessors Risat Mahud Pathan and Jan Jonsson Chalers University of Technology SE-41 96, Göteborg, Sweden {risat, janjo}@chalers.se

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

1 Identical Parallel Machines

1 Identical Parallel Machines FB3: Matheatik/Inforatik Dr. Syaantak Das Winter 2017/18 Optiizing under Uncertainty Lecture Notes 3: Scheduling to Miniize Makespan In any standard scheduling proble, we are given a set of jobs J = {j

More information

3.8 Three Types of Convergence

3.8 Three Types of Convergence 3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

When Short Runs Beat Long Runs

When Short Runs Beat Long Runs When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

time time δ jobs jobs

time time δ jobs jobs Approxiating Total Flow Tie on Parallel Machines Stefano Leonardi Danny Raz y Abstract We consider the proble of optiizing the total ow tie of a strea of jobs that are released over tie in a ultiprocessor

More information

On the Analysis of the Quantum-inspired Evolutionary Algorithm with a Single Individual

On the Analysis of the Quantum-inspired Evolutionary Algorithm with a Single Individual 6 IEEE Congress on Evolutionary Coputation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-1, 6 On the Analysis of the Quantu-inspired Evolutionary Algorith with a Single Individual

More information

General Properties of Radiation Detectors Supplements

General Properties of Radiation Detectors Supplements Phys. 649: Nuclear Techniques Physics Departent Yarouk University Chapter 4: General Properties of Radiation Detectors Suppleents Dr. Nidal M. Ershaidat Overview Phys. 649: Nuclear Techniques Physics Departent

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x)

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x) 7Applying Nelder Mead s Optiization Algorith APPLYING NELDER MEAD S OPTIMIZATION ALGORITHM FOR MULTIPLE GLOBAL MINIMA Abstract Ştefan ŞTEFĂNESCU * The iterative deterinistic optiization ethod could not

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Statistical Logic Cell Delay Analysis Using a Current-based Model

Statistical Logic Cell Delay Analysis Using a Current-based Model Statistical Logic Cell Delay Analysis Using a Current-based Model Hanif Fatei Shahin Nazarian Massoud Pedra Dept. of EE-Systes, University of Southern California, Los Angeles, CA 90089 {fatei, shahin,

More information

Now multiply the left-hand-side by ω and the right-hand side by dδ/dt (recall ω= dδ/dt) to get:

Now multiply the left-hand-side by ω and the right-hand side by dδ/dt (recall ω= dδ/dt) to get: Equal Area Criterion.0 Developent of equal area criterion As in previous notes, all powers are in per-unit. I want to show you the equal area criterion a little differently than the book does it. Let s

More information

An improved self-adaptive harmony search algorithm for joint replenishment problems

An improved self-adaptive harmony search algorithm for joint replenishment problems An iproved self-adaptive harony search algorith for joint replenishent probles Lin Wang School of Manageent, Huazhong University of Science & Technology zhoulearner@gail.co Xiaojian Zhou School of Manageent,

More information

A Smoothed Boosting Algorithm Using Probabilistic Output Codes

A Smoothed Boosting Algorithm Using Probabilistic Output Codes A Soothed Boosting Algorith Using Probabilistic Output Codes Rong Jin rongjin@cse.su.edu Dept. of Coputer Science and Engineering, Michigan State University, MI 48824, USA Jian Zhang jian.zhang@cs.cu.edu

More information

arxiv: v1 [cs.ds] 29 Jan 2012

arxiv: v1 [cs.ds] 29 Jan 2012 A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

A Theoretical Analysis of a Warm Start Technique

A Theoretical Analysis of a Warm Start Technique A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful

More information

Analysis of Impulsive Natural Phenomena through Finite Difference Methods A MATLAB Computational Project-Based Learning

Analysis of Impulsive Natural Phenomena through Finite Difference Methods A MATLAB Computational Project-Based Learning Analysis of Ipulsive Natural Phenoena through Finite Difference Methods A MATLAB Coputational Project-Based Learning Nicholas Kuia, Christopher Chariah, Mechatronics Engineering, Vaughn College of Aeronautics

More information

C na (1) a=l. c = CO + Clm + CZ TWO-STAGE SAMPLE DESIGN WITH SMALL CLUSTERS. 1. Introduction

C na (1) a=l. c = CO + Clm + CZ TWO-STAGE SAMPLE DESIGN WITH SMALL CLUSTERS. 1. Introduction TWO-STGE SMPLE DESIGN WITH SMLL CLUSTERS Robert G. Clark and David G. Steel School of Matheatics and pplied Statistics, University of Wollongong, NSW 5 ustralia. (robert.clark@abs.gov.au) Key Words: saple

More information

Support Vector Machines. Goals for the lecture

Support Vector Machines. Goals for the lecture Support Vector Machines Mark Craven and David Page Coputer Sciences 760 Spring 2018 www.biostat.wisc.edu/~craven/cs760/ Soe of the slides in these lectures have been adapted/borrowed fro aterials developed

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a ournal published by Elsevier. The attached copy is furnished to the author for internal non-coercial research and education use, including for instruction at the authors institution

More information

Ensemble Based on Data Envelopment Analysis

Ensemble Based on Data Envelopment Analysis Enseble Based on Data Envelopent Analysis So Young Sohn & Hong Choi Departent of Coputer Science & Industrial Systes Engineering, Yonsei University, Seoul, Korea Tel) 82-2-223-404, Fax) 82-2- 364-7807

More information

Improved multiprocessor global schedulability analysis

Improved multiprocessor global schedulability analysis Iproved ultiprocessor global schedulability analysis Sanjoy Baruah The University of North Carolina at Chapel Hill Vincenzo Bonifaci Max-Planck Institut für Inforatik Sebastian Stiller Technische Universität

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION GUANGHUI LAN AND YI ZHOU Abstract. In this paper, we consider a class of finite-su convex optiization probles defined over a distributed

More information

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs On the Inapproxiability of Vertex Cover on k-partite k-unifor Hypergraphs Venkatesan Guruswai and Rishi Saket Coputer Science Departent Carnegie Mellon University Pittsburgh, PA 1513. Abstract. Coputing

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Decision-Theoretic Approach to Maximizing Observation of Multiple Targets in Multi-Camera Surveillance

Decision-Theoretic Approach to Maximizing Observation of Multiple Targets in Multi-Camera Surveillance Decision-Theoretic Approach to Maxiizing Observation of Multiple Targets in Multi-Caera Surveillance Prabhu Natarajan, Trong Nghia Hoang, Kian Hsiang Low, and Mohan Kankanhalli Departent of Coputer Science,

More information

Solving initial value problems by residual power series method

Solving initial value problems by residual power series method Theoretical Matheatics & Applications, vol.3, no.1, 13, 199-1 ISSN: 179-9687 (print), 179-979 (online) Scienpress Ltd, 13 Solving initial value probles by residual power series ethod Mohaed H. Al-Sadi

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

OPTIMIZATION in multi-agent networks has attracted

OPTIMIZATION in multi-agent networks has attracted Distributed constrained optiization and consensus in uncertain networks via proxial iniization Kostas Margellos, Alessandro Falsone, Sione Garatti and Maria Prandini arxiv:603.039v3 [ath.oc] 3 May 07 Abstract

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Genetic Algorithm Search for Stent Design Improvements

Genetic Algorithm Search for Stent Design Improvements Genetic Algorith Search for Stent Design Iproveents K. Tesch, M.A. Atherton & M.W. Collins, South Bank University, London, UK Abstract This paper presents an optiisation process for finding iproved stent

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

Short Papers. Test Data Compression and Decompression Based on Internal Scan Chains and Golomb Coding

Short Papers. Test Data Compression and Decompression Based on Internal Scan Chains and Golomb Coding IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 1, NO. 6, JUNE 00 715 Short Papers Test Data Copression and Decopression Based on Internal Scan Chains and Golob Coding

More information