Optimal Embedding of Functions for In-Network Computation: Complexity Analysis and Algorithms

Pooja Vyavahare, Nutan Limaye, and D. Manjunath, Senior Member, IEEE

arXiv v3 [cs.dc] 14 Jul 2015

Abstract: We consider optimal distributed computation of a given function of distributed data. The input (data) nodes and the sink node that receives the function form a connected network that is described by an undirected weighted network graph. The algorithm to compute the given function is described by a weighted directed acyclic graph and is called the computation graph. An embedding defines the computation communication sequence that obtains the function at the sink. Two kinds of optimal embeddings are sought: the embedding that (1) minimizes the delay in obtaining the function at the sink, and (2) minimizes the cost of one instance of computation of the function. This abstraction is motivated by three applications: in-network computation over sensor networks, operator placement in distributed databases, and module placement in distributed computing. We first show that obtaining minimum-delay and minimum-cost embeddings are both NP-complete problems and that cost minimization is actually MAX SNP-hard. Next, we consider specific forms of the computation graph for which polynomial time solutions are possible. When the computation graph is a tree, a polynomial time algorithm to obtain the minimum delay embedding is described. Next, for the case when the function is described by a layered graph we describe an algorithm that obtains the minimum cost embedding in polynomial time. This algorithm can also be used to obtain an approximation for delay minimization. We then consider bounded treewidth computation graphs and give an algorithm to obtain the minimum cost embedding in polynomial time.

Index Terms: In-network function computation; operator placement; graph embedding.

I. INTRODUCTION

A. Background

Efficient computing of functions of distributed data is of interest in many applications, most recently in sensor networks [1] and in programming models for distributed processing of large data sets, e.g., MapReduce [2] and Dryad [3].
Consider a typical sensor network scenario where sensor nodes measure and store environment variables and form a distributed database. These nodes also have computing and communication capabilities and can form a connected network. A sink node, also called terminal or receiver, is interested in one or more functions of this distributed data rather than in the raw data itself. Conventional examples of such functions are maximum, minimum, mean, parity, and histogram [1]. More sophisticated functions are also easily motivated, e.g., spatial and temporal correlations, spectral characteristics of the data (that may be obtained by performing FFT on the data), and filtering operations on the raw data in sensor networks.

(Pooja Vyavahare and D. Manjunath are with the Dept. of Electl. Engg., and Nutan Limaye is with the Dept. of Comp. Sc. & Engg. of IIT Bombay. Email addresses are {vpooja,dmanju}@ee.iitb.ac.in and nutan@cse.iitb.ac.in. This work was supported in part by DST projects SR/SS/EECE/0139/2011, ITRA/15(64)/Mobile/USEAADWN/01, and was carried out in the Bharti Centre for Communication at IIT Bombay.)

A naive approach to obtain the required function(s) of the distributed data at the sink would be to collect the raw data at the sink and have it perform the computation. Alternatively, since the nodes have computation capability, it could possibly be more efficient to push the computation into the network, i.e., use a distributed computation scheme over the communication network. Our interest is in the latter approach: efficient in-network computation of the required function. Several measures of efficiency of computation may be defined. Total energy expended in obtaining one sample of the function is a possible measure. Delay from the time at which the data is available at the sources to the time at which the function value is available at the sink is a second possible measure. If the data at each of the sources were to be a stream, then the rate at which the function values are available at the sink is a third possible measure. In this paper we consider only the first two measures above: cost of computation and delay in computation.
Thus our focus is on algorithms that find a computation and communication (routing) sequence to compute a target function on a given network that minimizes the delay or the cost. The target functions are assumed to belong to the class of functions that are computed using a scheme that can be represented as a directed acyclic graph (DAG). The following example illustrates our intent. Consider a network of N nodes, K of which collect measurement data from their environment. Let x_k(t) be the data sample available at node k (k = 1, ..., K) at time t and let r(t) = (1/(K-1)) * sum_{i=1}^{K-1} x_i(t) x_{i+1}(t) be the function of interest. r(t) can be computed using the schema of Fig. 1a. Each edge in this directed graph represents an intermediate value in the computation of r(t) and each node corresponds to an operation that is to be performed on its inputs. The communication network over which r(t) is to be computed is shown in Fig. 1b as a weighted undirected graph. In the example all the edges have unit weight. Two possible computation and communication schemes are shown in Figs. 1c,d. We see that the scheme in Fig. 1c has a lower cost than that in Fig. 1d.

Several flavors of in-network function computation exist in the literature. A randomly deployed multihop wireless network of noise-free links is considered in [1], [4]-[6]. They determine asymptotic achievable rates at which different symmetric functions like minimum, maximum and type vector may be computed. An identical objective is addressed in [7]-[11] for single hop networks with noisy links and in [12], [13] for multihop wireless networks with noisy links. When

Fig. 1: Computation of function r(t) = (x_1 x_2 + x_2 x_3)/2. (a) A schema to compute r(t). (b) Communication network. (c) Implementation 1. (d) Implementation 2.

the nodes are in an n x n grid and communicate over wireline or wireless links, [14] obtains the time and number of transmissions required to compute a function. Randomized gossip algorithms [15], [16], where a random sequence of node-pairs exchange data and perform a specific computation, are used in [17]-[19] for function computation. The interest there is in the time for all nodes to converge to the specified function. Another stream of work considers computing of specific functions using network coding and designs the communication networks to maximize the computation rate [20]-[22]. None of the above consider minimum cost function computation. Further, to the best of our knowledge, only [23] considers computation of arbitrary functions in finite size networks; the interest there is in maximizing the rate of computation.

Rather than arithmetic operations to perform a function computation, the nodes in Fig. 1a could correspond to operations required to execute a database query and the directed edges would represent the flow of the results of the operations. In this case the graph of Fig. 1a is called a query graph. The input data to this query graph could be from a distributed database, which in turn could be a sensor network; in this case the graph in Fig. 1b would represent the interconnection among the units of the distributed database and some other nodes that may be available for computation. In this context performing an efficient query requires that the operator placement be efficient. Efficient operator placement is addressed in [24]-[29], all of which assume that the query graph is a tree. While [24], [27], [29] develop heuristics-based algorithms for efficient operator placement when the query graph is a tree, algorithms with formal analyses are available in [25], [26], [28]. Although [30] considers a non-tree query graph, only heuristic algorithms are provided.
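Returning to the example of Fig. 1: the schema abstraction (operation nodes, edges carrying intermediate values) can be illustrated with a minimal sketch. The function and list-based layout below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of evaluating a Fig. 1a-style computation schema for
# r(t) = (1/(K-1)) * sum_{i=1}^{K-1} x_i(t) * x_{i+1}(t).

def evaluate_schema(x):
    """Evaluate the schema on one vector of source samples x_1..x_K."""
    K = len(x)
    # 'x' nodes: pairwise products of adjacent sources (intermediate edges).
    products = [x[i] * x[i + 1] for i in range(K - 1)]
    # '+' node followed by a scaling node, delivered to the sink.
    return sum(products) / (K - 1)

# With K = 3 this is the function of Fig. 1: (x1*x2 + x2*x3)/2.
print(evaluate_schema([1.0, 2.0, 3.0]))  # (1*2 + 2*3)/2 = 4.0
```

The point of the abstraction is that each list entry here could equally be produced at a different network node; an embedding decides where.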
Going back in the literature, we see that the module placement problem for distributed processing from the 1980s has a flavor similar to the in-network computation and operator placement problems; see e.g., [31]-[35]. In this case the nodes of Fig. 1a correspond to the modules of a program and a directed edge (u, v) implies that module u calls module v during execution. Further, the graph of Fig. 1a is called a call graph. The nodes of Fig. 1b represent the processors on which the modules of the program need to be executed and the edges represent the inter-processor links. The cost structure for this problem is more complex. The execution cost is specified for each module-processor pair and there is also an inter-processor communication cost over an edge that in turn depends on the modules placed at the ends of the edge. The objective is to place the modules on the processors such that the total cost of execution is minimized. The first centralized algorithm for optimal module placement when the call graph is a directed tree was given in [32] and that for k-trees was given in [35]. [31] showed that the problem can be efficiently solved for a two-processor system by using network flow algorithms and [33] uses a similar approach to develop heuristic algorithms for a general call graph.

From the preceding discussion we see that the objectives of, and hence the solution techniques for, efficient in-network computation, operator placement and module placement all have a very similar theme: embedding (a formal definition is provided in Section II) a graph representing a computation schema on a connected weighted graph representing a network of processors. Much of the literature on such problems assumes that the graph describing the function is a tree. This is clearly very restrictive because the computation schemas for a very large class of useful computable functions cannot be described by a tree and are more generally described by a DAG, e.g., fast Fourier transform (FFT), sorting, any polynomial function of input data, and any function of Boolean data.
MapReduce, the popular cloud computing paradigm, also has a DAG representation and is discussed in some detail in Example 3 in Section V.

B. Organization of the Paper

Our interest is in computing arbitrary functions that have a specific algorithmic representation, and the communication network is an arbitrary network and not a random wireless network. (We reiterate that although we assume the in-network computation scenario to anchor the discussion, our results are also directly applicable to the operator and module placement scenarios.) Further, we do not seek to specifically maximize the computation throughput; rather, our interest is in minimizing the cost or delay of a one-shot computation of the function. As we mentioned before, two measures of efficiency are used in this paper: cost of computation and delay in computation. Our results on minimum cost embedding can be used to maximize computation throughput when used in the algorithms developed in [23]. Given an arbitrary function via its DAG description and an arbitrary network over which this function is to be computed, our first interest is to analyze the complexity of finding the

optimal computation and communication scheme that will compute the function in the network. While much of the literature claims that this problem is NP-complete (for the case of minimizing cost), to the best of our knowledge a formal proof is not available; the literature eventually leads to a citation of a private communication in [32]. In Section III we prove that in general, both the minimum cost and the minimum delay embedding problems are NP-complete. Some structure in the DAG can be exploited to provide polynomial time algorithms for our problems. If the DAG is a tree, which is the assumption in much of the extant literature, the minimum cost embedding is similar to shortest path algorithms [23], [28]. To the best of our knowledge, minimum delay embeddings are not considered in the literature, and in Section IV we provide a polynomial time algorithm to find the minimum delay embedding when the computation schema is a tree. Next we consider two large classes of computation graphs: (1) layered graphs, (2) bounded treewidth graphs. We derive the motivation for layered graphs from distributed data processing frameworks like MapReduce [2] and Dryad [3]. In Section V we provide a polynomial time algorithm to find the minimum cost embedding when the DAG is a layered graph. This same algorithm also obtains an approximation for the minimum delay embedding. In Section VI, we show that the algorithm from Section V can find the minimum cost embedding when the DAG is a bounded treewidth graph. The notation and the formal problem definition are described in the next section. In Section VII, we describe an update mechanism for when there is a perturbation in the DAG, and we conclude with a discussion in Section VIII.

II. PRELIMINARIES

The communication network is represented by an undirected connected graph N = (V, E) with V being the set of n nodes and E being the set of m edges. The elements of V are denoted by {u_1, ..., u_n}. Each edge (u_i, u_j) in E has a non-negative weight T(u_i, u_j) associated with it.
The weight could, for example, correspond to the transmission time of a bit on the link, or the energy required to transmit one bit on the link, or something more abstract. For a given T, and any u_i, u_j in V, let d_{u_i,u_j} be the weight of the minimum cost path from u_i to u_j. Let D = [[d_{u_i,u_j}]] be the n x n distance matrix. Of the n nodes in N, there are K source nodes denoted by {s_1, s_2, ..., s_K} in V. Source node s_i generates data x_i; denote x = {x_1, ..., x_K}. A sink node t in V requires to obtain a function f(x_1, x_2, ..., x_K) of the data. We assume that the schema to compute f(x_1, x_2, ..., x_K) is given and is represented by a directed acyclic graph G = (Omega, Gamma), where Omega is the set of p nodes and Gamma is the set of q edges. The nodes of G are denoted by {w_1, ..., w_p} and correspond to operations that need to be performed on the input data to the node, and the outgoing edges denote the flow of the result of these operations. Thus each edge in G represents a sub-function of the inputs. The sources in the computation graph are denoted by nodes {w_1, w_2, ..., w_K} with node w_i corresponding to source x_i; node w_p is the sink that receives the function f(x_1, x_2, ..., x_K). The directions on the edges of G represent the direction of the flow of the data. Each edge (w_i, w_j) in Gamma has a non-negative weight W(w_i, w_j) associated with it, which could correspond to the number of bits used to represent the intermediate function. Since G is a directed acyclic graph there is a partial order associated with its vertices. If (w_i, w_j) is in Gamma then the function at w_j cannot be computed until the function at w_i is computed and the result forwarded to w_j. Let Phi-(w) and Phi+(w) denote, respectively, the immediate predecessors and successors of vertex w, i.e., Phi-(w) = {tau in Omega : (tau, w) in Gamma} and Phi+(w) = {tau in Omega : (w, tau) in Gamma}. A processing cost (delay) function P : Omega x V -> R+ is used to capture the cost (delay) of performing a particular operation on a particular vertex of the network. Now we define the embedding of G on N as follows.
Definition 1: An embedding of a computation graph G on a communication network N is a many-to-one function E : Omega -> V which satisfies the following conditions: 1) E(w_i) = s_i for i in {1, ..., K}; 2) E(w_p) = t.

In this definition of embedding, each node in the computation graph is mapped to a single node in the network graph, and the edge (w_i, w_j) in Gamma is mapped to the shortest path between E(w_i) and E(w_j). Alternate definitions of an embedding are possible, e.g., an edge of G can be mapped to more than one path in N while satisfying some continuity constraints; this is the definition of an embedding in [23]. An embedding defines a computation and communication sequence in N to obtain f at the sink. Let E denote the set of all embeddings of G in N which follow Definition 1. The weight functions T and P can be treated as, respectively, the communication and the processing delays in N for computing and communicating the sub-functions leading to the computation of f. We can define the delay in computing a sub-function at a node w_i in Omega in the embedding E as

d(E(w_i)) := max_{w_j in Phi-(w_i)} [d(E(w_j)) + W(w_j, w_i) d_{E(w_j),E(w_i)}] + P(w_i, E(w_i)).

Recall that d_{u,v} is the length of the shortest path between vertices u, v in V; thus the first term here corresponds to the delay in obtaining all the operands at node E(w_i) and the second term corresponds to the processing delay at the node. Setting the delay at the sources to zero, i.e., d(E(w_i)) = 0 for all i in [1, K], we can recursively calculate the delay of each vertex of G on N. The delay d(E) of an embedding E is defined as the delay of the sink, i.e., d(E) := d(E(w_p)). This leads us to the first problem that we consider in this paper: find an embedding E^d_opt such that the delay of the embedding is minimum among all the embeddings for a given G, N, T, W and P, i.e., solve the optimization problem

E^d_opt := arg min_{E in E} d(E).    (1) (MinDelay)

Alternatively, considering the weight functions T, W, and P as costs of communication and computation, e.g., the energy

Fig. 2: (a) Computation graph G for f = (x_1 + x_2)(x_2 + x_3). (b) Communication network N; the numbers near the edges represent the weights on them. (c) Minimum delay embedding E_1. (d) Minimum cost embedding E_2.

cost, the cost of an embedding can be defined as

C(E) := sum_{w in Omega} P(w, E(w)) + sum_{(w_i,w_j) in Gamma} W(w_i, w_j) d_{E(w_i),E(w_j)}.    (2)

We can then find an embedding E^c_opt such that the cost of the embedding is minimum among all the embeddings for a given G, N, T, W and P, i.e., solve the optimization problem

E^c_opt := arg min_{E in E} C(E).    (MinCost)

The following example illustrates the preceding problems and the system setup.

Example 1: Consider the computation graph G = (Omega, Gamma) and communication network N = (V, E) shown in Fig. 2. The labels of each vertex in both graphs are shown in Fig. 2. The processing cost (delay) for sources is assumed to be zero and for other vertices of G it is assumed to be unity. An embedding E of G on N will have E(w_i) = s_i for i in [1, 3] and E(w_7) = t. Now consider two embeddings E_1, E_2 such that E_1(w_4) = a, E_1(w_5) = c, E_1(w_6) = d and E_2(w_4) = a, E_2(w_5) = b, E_2(w_6) = d. These are shown in Figs. 2c and 2d. Using (2), it is easy to verify that C(E_1) = 36 and C(E_2) = 34. The delays in the embedding E_1 are: d(E_1(w_4)) = max(10, 2) + 1 = 11, d(E_1(w_5)) = max(8, 10) + 1 = 11, d(E_1(w_6)) = max(12, 12) + 1 = 13, and finally d(E_1) = d(E_1(w_7)) = 14. Similarly, d(E_2) = 16. Observe that the delay of E_1 is the lower of the two but its cost is higher than that of E_2.

Now we present an example which shows that the difference between the delay obtained by the solution of the MinCost problem and that of the MinDelay problem can be of the order of the number of sources in the network.

Fig. 3: Illustrating Example 2. (a) Computation graph with 2l sources and a sink. (b) Communication network.

Example 2: Consider the computation graph G = (Omega, Gamma) and the network graph N = (V, E) shown in Figs. 3a and 3b respectively. The labels of the vertices are shown in the figure and the numbers near the edges of N represent the weight of each edge. Note that the structure between s_1 to u_2 and s_2 to v_2 is repeated in the network graph l times. Let us call this structure A. We assume that the weights on the edges of G are all one, and that the processing costs are zero for sources and unity for all other vertices. Any embedding E of G on N will have E(w_i) = s_i for i in [1, 2l] and E(w_p) = t. Consider an embedding E_1 such that E_1(a_1) = u_1, E_1(a_2) = u_2, E_1(a_3) = v_2, ..., E_1(a_{3l-2}) = u_{l+1}. Let us compute the cost of this embedding. Note that the total cost of the embedding is l times the cost coming from the structure A plus the weight of the edge from u_{l+1} to t. The cost due to the embedding of A is the sum of the weights of the edges s_1 u_1, s_2 u_1, u_1 u_2, u_1 v_2 and the processing costs at u_1, u_2, v_2. Hence the cost of the embedding is C(E_1) = l(a + 2e) + e. Similarly, the delay of this embedding is d(E_1) = (a + e + 2)l + e. Now consider another embedding E_2 such that E_2(a_1) = v_1, E_2(a_2) = u_2, E_2(a_3) = v_2, ..., E_2(a_{3l-2}) = v_{l+1}. The cost and delay of this embedding can be computed in a similar fashion to obtain C(E_2) = l(a + 2e) + e and d(E_2) = (0.7a + e + 2)l + e. It can be shown that the first embedding E_1 is a solution of MinCost whereas E_2 is the solution of the MinDelay problem for G on N. If we use the solution of MinCost in place of the solution of MinDelay then the difference would be d(E_1) - d(E_2) = 0.3al, which is of the order of the number of sources in the graph.

III. HARDNESS OF EMBEDDING

We begin by considering MinDelay. First consider the case when there is no processing delay, i.e., P(w, E(w)) = 0, and the computation graph is unweighted, i.e., W(w_i, w_j) = 1 for all (w_i, w_j) in Gamma.
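The delay recursion d(E(.)) and the cost C(E) of (2) can be sketched concretely; the dictionary-based encodings and the toy instance below are illustrative assumptions, not the graphs of Figs. 2-3. The demo maps every intermediate vertex to the sink, the embedding relevant to the unweighted, zero-processing case.

```python
# Sketch of the delay recursion and cost (2) of Section II; the
# encodings and toy numbers are illustrative assumptions.

def delay_and_cost(comp_edges, W, P, embed, dist, sources, sink):
    """comp_edges: directed edges (wi, wj) of G; W: edge weights of G;
    P[(w, u)]: processing delay/cost of operation w on network node u;
    embed[w]: network node hosting w; dist[(u, v)]: shortest-path weight."""
    preds = {}
    for wi, wj in comp_edges:
        preds.setdefault(wj, []).append(wi)
    d = {w: 0.0 for w in sources}  # sources incur no delay

    def delay(w):
        if w not in d:
            d[w] = max(delay(p) + W[(p, w)] * dist[(embed[p], embed[w])]
                       for p in preds[w]) + P[(w, embed[w])]
        return d[w]

    cost = (sum(P[(w, embed[w])] for w in embed) +
            sum(W[e] * dist[(embed[e[0]], embed[e[1]])] for e in comp_edges))
    return delay(sink), cost

# Unweighted G, zero processing cost: with every intermediate vertex at the
# sink, the delay equals the longest source-to-sink shortest path.
edges = [("w1", "w3"), ("w2", "w3"), ("w3", "w4")]
W = {e: 1 for e in edges}
P = {("w1", "a"): 0, ("w2", "b"): 0, ("w3", "t"): 0, ("w4", "t"): 0}
embed = {"w1": "a", "w2": "b", "w3": "t", "w4": "t"}
dist = {("a", "t"): 2, ("b", "t"): 3, ("t", "t"): 0}
print(delay_and_cost(edges, W, P, embed, dist, {"w1", "w2"}, "w4"))  # (3.0, 5)
```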
The delay of the embedding in this case is the delay of the longest embedded path from any source to the sink in N. Let d_{s_i} be the delay of the minimum delay path between source s_i and the sink t in N. Then the delay of any embedding of G on N which follows the conditions of Definition 1 has to be at least the delay on the longest of all the minimum delay paths from the sources to the sink. In other words, d(E) >= max_{i in [1,K]} d_{s_i}. Now consider an embedding E* which maps all the intermediate vertices of G to the sink t in N. The delay of this embedding will be d(E*) = max_{i in [1,K]} d_{s_i}. Comparing it with d(E) tells us that the embedding E* minimizes the delay. Hence MinDelay is easy to solve if there are no processing delays and G is unweighted.

Now we analyze the hardness of MinDelay and MinCost for arbitrary G and N and show that both optimization problems are NP-hard. We prove the hardness of the optimization problems by proving that the corresponding decision versions are NP-hard. The decision versions of MinDelay and MinCost are defined as follows:

Definition 2: For a given G, N, T, W, P and a positive number L, the decision version of the MinDelay problem outputs yes if there exists an embedding E of G on N such that d(E) <= L and outputs no if no such embedding exists.

Definition 3: For a given G, N, T, W, P and a positive number L, the decision version of the MinCost problem outputs yes if there exists an embedding E of G on N such that C(E) <= L and outputs no if no such embedding exists.

Note that if one can solve the optimization version of the MinDelay (resp. MinCost) problem in time, say, t, then using the solution of that problem we can solve the corresponding decision version in time O(t). Hence the original optimization problem is at least as hard as its decision version. This implies that if we prove that the decision version of MinDelay (resp. MinCost) is NP-hard then the optimization version is also NP-hard. We now proceed to prove that the decision versions of our optimization problems are indeed NP-complete.

Theorem 1: The decision version of the MinDelay problem is NP-complete.
Proof: We first prove that the decision version of MinDelay is NP-hard by giving a reduction from the NP-complete problem Precedence Constrained Scheduling with fixed mapping (PCS-FM) [36]. The PCS-FM problem is defined as follows. Given a set of tasks H with a partial order on it, each task h having length l(h) = 1, a set tau contained in H, and m processors, find a schedule sigma of tasks on the processors which meets an overall deadline D', maps each tau_i in tau to a particular processor p(tau_i), and obeys the precedence constraint that if h_i precedes h_j then sigma(h_j) >= sigma(h_i) + 1. To the best of our knowledge the hardness of the PCS-FM problem has not been proved in the literature, and we provide the proof of NP-completeness of PCS-FM in Appendix A.

We first give a reduction from an instance phi = (H, precedence order, tau, p(tau), m, l, D') of PCS-FM to an instance psi = (G, N, gamma, lambda, T, P, D) of MinDelay, where G = (Omega, Gamma) and N = (V, E) are the computation and communication graphs respectively. The set gamma contained in Omega is to be mapped to lambda contained in V under any embedding E, and T, P are the communication and processing delays of the network. Note that the set of tasks H along with the partial order creates a DAG, and we define G to be this DAG. We define N to be a complete graph on the m processors. Let gamma = tau and lambda = p(tau). The transmission delay T between any two vertices of N is taken as e = 1/|H|^2 and P(w, u) = 1 for all w in Omega, u in V. Finally, D = D' + 1. We have to prove that there is a schedule sigma of phi which finishes in time D' if and only if there is an embedding E of psi with delay D~ satisfying D' <= D~ < D' + 1.

The forward direction is easy to prove. If there is a schedule of phi which finishes in time D' and maps a task h in H to a processor q, then we can create an embedding of psi which maps the same vertex h in Omega to the vertex q in V. Note that because gamma = tau and lambda = p(tau), the conditions of Definition 1 are met in this embedding. The delay of any vertex u of G in this embedding will be at most the time at which the task u finishes in the schedule sigma plus e times the number of times edges of N are used along the same precedence order. Hence the delay of this embedding satisfies D' <= D~ < D' + |H|^2 e = D' + 1.
To complete the proof we need to prove that if there is an embedding E of psi of delay alpha in R+ then there is a schedule sigma of phi which finishes in time at most alpha. We create the schedule sigma from the embedding E. If a vertex h in Omega is mapped to a vertex u in V, then in the schedule sigma the task h is also executed by processor u. Because gamma = tau and lambda = p(tau), the tasks in tau are mapped to p(tau) in this schedule as well. Let the total number of edge uses in this embedding be b. Note that b < |H|^2 in any embedding, where |H| is the number of vertices of the graph G. The total transmission delay in the embedding is therefore be < 1. Recall that the processing delays are all 1; hence the delay of the embedding can be written as alpha = alpha' + be, where alpha' is an integer. It is easy to verify that the time required by the schedule sigma to complete is alpha' <= alpha. This proves that the decision version of MinDelay is NP-hard.

Given an instance of the MinDelay problem, an embedding E can be guessed non-deterministically and it can be checked whether d(E) <= L in polynomial time. Thus the decision version of the MinDelay problem is in NP, and our reduction proves that it is in fact NP-complete.

We now look at the problem of approximating the MinDelay problem. We say that a polynomial time approximation algorithm has a performance guarantee alpha > 1 if it outputs a feasible solution of the problem which has delay at most alpha times that of the optimal solution. We prove the following:

Theorem 2: Unless P=NP, an instance of the MinDelay problem with (G, N, T, W, P) and with unit processing delays and unit weights on the edges of G does not have a polynomial-time approximation algorithm with performance guarantee strictly less than 100/99 if its solution is greater than 10.

Proof: Note that while proving the hardness of MinDelay we first reduced an instance of a PCS problem to an instance of a PCS-FM problem and then we reduced PCS-FM to the MinDelay problem. Let opt_1 and sol_1 be the deadlines achieved by the optimal and a feasible solution of the PCS problem respectively. Similarly, let opt_2 and sol_2 be the deadlines achieved by the optimal and a feasible solution of the PCS-FM problem.
Finally, let opt_3 and sol_3 be the optimal and feasible solutions of MinDelay. While proving the NP-completeness of PCS-FM (in Appendix A) we showed that opt_2 = opt_1 + 1. We also showed that if a schedule of the PCS problem achieves a deadline sol_1, then there is a schedule of PCS-FM with sol_2 = sol_1 + 1. Let PCS-FM have a polynomial-time approximation algorithm with solution sol_2 = x opt_2. By Observation 5.1 of [37] we know that the PCS problem does not have a polynomial-time approximation algorithm with performance guarantee less than 4/3. Hence, sol_1 > (4/3) opt_1. Substituting the relation between sol_1 and sol_2 in the above inequality we get sol_2 - 1 > (4/3) opt_1. Thus x opt_2 > (4/3) opt_1 + 1, and since opt_1 = opt_2 - 1 we can write x > 4/3 - 1/(3 opt_2), and finally x > 10/9 whenever opt_2 >= 2. Hence, unless P=NP, PCS-FM cannot have a polynomial time approximation algorithm with performance guarantee less than 10/9.

While proving the hardness of MinDelay we showed that for any instance of PCS-FM with solution sol_2 we can get an instance of the MinDelay problem with solution sol_3 such that sol_2 <= sol_3 < sol_2 + 1. Let MinDelay have a polynomial time approximation algorithm with solution sol_3 = y opt_3. We know that sol_2 > (10/9) opt_2. Substituting the relation between sol_3 and sol_2 in the above expression we get sol_3 = y opt_3 >= sol_2 > (10/9) opt_2. This implies that y > (10/9)(opt_2/opt_3). Observe that if opt_2 >= 10 then, because of the reduction, opt_3 < opt_2 + 1, which implies that opt_2/opt_3 > 10/11. So we get y > (10/9)(10/11) = 100/99. This completes the proof.

We now consider the decision version of MinCost.

Remark 1: Recall that the cost of an embedding E is computed using (2), which does not consider the directions on the edges of G. It only considers the weight W on each edge of G. Thus the cost of an embedding does not depend on the directions of the edges, and the solution of the MinCost problem is the same irrespective of whether the computation graph G has directions or not.

Theorem 3: The decision version of MinCost is NP-complete.

Proof: We actually prove that the decision version of MinCost is NP-hard even when the processing costs are zero and the weights on the edges of G and N are all one. In this case the cost of the embedding E is given by C(E) := sum_{(w_i,w_j) in Gamma} d_{E(w_i),E(w_j)}, where d_{u,v} is the shortest distance between the vertices u, v in V.
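The simplified cost above (unit edge weights, zero processing cost) can be checked by brute force on tiny instances; the sketch below, with hypothetical encodings, enumerates all embeddings of the free vertices, which is exponential in their number, consistent with the hardness result.

```python
# Brute-force MinCost under unit weights and zero processing cost:
# try every placement of the non-pinned computation vertices.
from collections import deque
from itertools import product

def bfs_dist(adj, src):
    """Unit-weight shortest-path distances from src in the network graph."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def min_cost_embedding(net_adj, comp_edges, fixed):
    """fixed: map pinning sources and sink of G to network nodes."""
    dist = {u: bfs_dist(net_adj, u) for u in net_adj}
    comp_vertices = {w for e in comp_edges for w in e}
    free = sorted(comp_vertices - set(fixed))
    best = None
    for choice in product(sorted(net_adj), repeat=len(free)):
        E = dict(fixed, **dict(zip(free, choice)))
        c = sum(dist[E[wi]][E[wj]] for wi, wj in comp_edges)
        if best is None or c < best[0]:
            best = (c, E)
    return best

# Network: path a - b - c; sources w1 at a, w2 at c; sink w4 at c.
net_adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
comp_edges = [("w1", "w3"), ("w2", "w3"), ("w3", "w4")]
cost, E = min_cost_embedding(net_adj, comp_edges,
                             {"w1": "a", "w2": "c", "w4": "c"})
print(cost, E["w3"])  # 2 c
```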
We prove the hardness of the decision version of MinCost by giving a reduction from the unweighted version of the Multiterminal Cut problem, which is NP-complete [38]. The Multiterminal Cut problem is defined as follows: given an arbitrary graph N' = (V', E') and a set S = {s_1, s_2, ..., s_k} of k specified vertices of V', find the minimum number of edges E_s in E' such that the removal of E_s from E' disconnects each vertex in S from all the other vertices in S.

The cost of an embedding can be computed in polynomial time using (2). Hence, given an instance of the decision version of the MinCost problem, one can guess an embedding in a non-deterministic way and check whether its cost is less than L or not in polynomial time. Thus the decision version of the MinCost problem is in NP. To prove the NP-hardness of the problem we first show a transformation of an instance psi = (N', S, D) of the Multiterminal Cut problem to an instance phi = (N, G, gamma, lambda, D) of the decision version of MinCost. Then we show that there exists a set of edges in N' of size D which separates each vertex of S from all the other vertices in S if and only if there is an embedding E which maps each vertex gamma_i in gamma to the vertex lambda_i in lambda and has cost equal to D.

From psi define an instance phi of the decision version of MinCost as follows: let N be a complete graph on V = {u_1, ..., u_k} where k = |S|, and let G = N'. Note that in this case G is undirected. Define gamma = S and lambda = V such that the vertex gamma_i = s_i is mapped to u_i in V. In other words, k distinct vertices of Omega are mapped to distinct vertices of V. Now we prove that there is an embedding E of cost D if and only if psi has a Multiterminal Cut of size equal to D. The "if" part is easy to see. If there is a minimum Multiterminal Cut E_s of size D which divides the vertex set V' = V_1 union ... union V_k into k disjoint subsets, then define E such that each vertex in V_i is mapped to u_i in V. The cost of this embedding is the total number of edges which go from V_i to V_j for i != j. This is nothing but the size of the set E_s, which is equal to D.
To complete the proof we need to show that if there is no minimum Multiterminal Cut of psi of size D then there is no embedding of phi (which maps the vertices of gamma to vertices of lambda) with cost D. Let us assume that there is no minimum Multiterminal Cut of psi of size D but there is an embedding for phi with cost D. This implies that there is a mapping of Omega on the k different vertices of V such that gamma_i = s_i is mapped to u_i. Let us denote the set of all the vertices of Omega that are mapped to u_i by Omega_i. The cost of the embedding is equal to the number of edges between the sets Omega_i and Omega_j for i != j, which is equal to D. Now it is easy to see that if we divide the vertices of N' into k disjoint subsets such that all the vertices in each Omega_i are in the same subset, then we can create a Multiterminal Cut of the graph N' which has cost exactly D. But this contradicts our assumption. Hence if there is an embedding function E for phi of cost D then there is a Multiterminal Cut of psi of the same size. This proves that the decision version of MinCost is NP-hard when the computation graph is undirected. From Remark 1, the decision version of MinCost with a directed computation graph is thus also NP-hard.

So far, we have not considered any weight functions. From [38] we know that the Multiterminal Cut problem for weighted graphs is also NP-complete, and a simple modification of our reduction proves that the decision version of the MinCost problem with weight functions is also NP-hard. It is also shown in [38] that the Multiterminal Cut problem is MAX SNP-hard. To prove that a problem is MAX SNP-hard it is sufficient to give a linear reduction to it from a known MAX SNP-hard problem. The linear reduction is defined as follows:

Definition 4: Let Pi and Pi' be two optimization problems. We say that Pi linearly reduces to Pi' if there are two polynomial time algorithms A, B and two constants alpha, beta > 0 such that 1) given an instance pi of Pi with optimal cost opt(pi), algorithm A produces an instance pi' = A(pi) of Pi' such that the cost of an optimal solution of pi' is at most alpha opt(pi), i.e., opt(pi') <= alpha opt(pi);
2) Given π, π′ = A(π) and any solution y′ of π′, algorithm B produces a solution x of π such that |cost(x) − opt(π)| ≤ β · |cost(y′) − opt(π′)|.

Note that in the proof of Theorem 3 we used polynomial-time algorithms to reduce an instance ψ of the Multiterminal Cut problem to an instance φ of the MinCost problem, and we proved that cost(ψ) = cost(φ). We can also obtain a solution of φ from a solution of ψ and vice versa with parameters α = β = 1. Hence the reduction we used to prove that MinCost is NP-complete is in fact a linear reduction from the Multiterminal Cut problem to MinCost. We thus have the following corollary.

Corollary 1: Problem MinCost is MAX SNP-hard and hence has no polynomial-time approximation scheme unless P = NP [39].

The NP-hardness of MinCost can also be proved by a reduction from another well-known NP-complete problem, k-Clique [36]. The k-Clique problem is defined as follows: given an arbitrary graph N′ = (V′, E′) and a positive integer k, check whether N′ has a clique (a complete subgraph) of size k. Since the k-Clique problem is W[1]-complete [40], the reduction from it also implies that MinCost has no fixed-parameter tractable algorithm (unless FPT = W[1]) and is hard for W[1].

A variation of MinCost is considered in [35] in that the communication costs (computed as W(ω_i, ω_j) · d_{E(ω_i)E(ω_j)} in this paper) are not necessarily zero if E(ω_i) = E(ω_j). Further, the source and the sink nodes are not fixed as they are in this paper. The key complexity result there is Lemma 2.1, which shows that their variation of MinCost with zero-one communication costs is NP-complete by a reduction from the planar 3-SAT problem. Their proof technique does not allow the communication cost to be necessarily zero when E(ω_i) = E(ω_j). For non-zero-one communication costs, [35] also gives polynomial-time algorithms for their variation when G is a partial k-tree or an almost-tree with parameter k.

As is the case with many NP-complete problems, our problems become tractable when G has special structure. We consider three such structures that have wide applications: trees, layered graphs, and bounded-treewidth graphs. These are considered next.

IV.
G IS A TREE

Many functions that are useful in sensor networks, e.g., average, maximum, minimum, etc., can be represented by directed tree graphs. Operations required to resolve many database queries can also be represented as directed tree structures. The trees representing such a G come from a class of trees that have a set of leaf vertices of in-degree zero and a root vertex of out-degree zero. We consider a tree-structured G in which all the leaf vertices represent the sources of data and the root acts as the sink that wants to know the final function value. Recall that we label the vertices in Ω as {ω_1, ..., ω_p}, and that Φ⁻(ω_i) and Φ⁺(ω_i) represent the predecessor and successor vertices of the vertex ω_i ∈ Ω. It is easy to verify that in this type of tree-structured computation graph every vertex except the root has exactly one successor, i.e., for any ω ∈ Ω other than the root the set Φ⁺(ω) is a singleton. The set Φ⁺(ω_p) is empty, where ω_p is the root, and there is a unique path from each source to the sink. As mentioned earlier, [28] and [23] adapt, respectively, the Bellman-Ford and the Dijkstra shortest-path algorithms to solve MinCost when G is a tree. In the following we describe Algorithm 1, which solves MinDelay when G is a tree.

Algorithm overview: Algorithm 1 is a centralized algorithm which assumes knowledge of the all-pairs shortest-path delay matrix D of the communication network N. The optimal embedding is computed by iterating through all the edges of the computation graph G. For an edge (ω_i, ω_j) of G, the algorithm computes the optimal delay of the path leading to the vertex ω_j from the sources via ω_i, for all possible mappings of the vertex ω_j in the network. The delay up to any vertex ω_j is the maximum of the optimal delays of all the paths reaching ω_j, plus the processing delay at that vertex. Once the optimal delay for the sink node is computed, the algorithm backtracks to find the optimal mapping of all the other vertices.
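The overview above can be sketched as a short dynamic program; the interface below is hypothetical (the paper's own pseudocode follows as Algorithm 1). Here `d` is the all-pairs delay matrix, `pred` lists each vertex's predecessors (children), `w` is the weight of a vertex's outgoing edge, and internal vertices are processed children-first.

```python
def min_delay_embedding(d, sources, order, pred, proc, w, sink):
    """Sketch of the MinDelay DP on a tree computation graph.

    d       -- d[u][v], all-pairs shortest-path delay in the network N
    sources -- {source vertex of G: its fixed network node}
    order   -- non-source vertices of G, children before parents, root last
    pred    -- pred[omega] = list of predecessors (children) of omega in G
    proc    -- proc[(omega, u)] = processing delay of omega at network node u
    w       -- w[omega] = weight of the edge from omega to its successor
    sink    -- fixed network node t of the root (sink) of G
    """
    n = len(d)
    h, x = {}, {}                    # h[om][v]: best delay via om when its
    for s, node in sources.items():  # successor is mapped to v; x: argmin
        h[s] = [w[s] * d[node][v] for v in range(n)]
        x[s] = [node] * n
    *internal, root = order
    for om in internal:
        h[om], x[om] = [], []
        for v in range(n):
            best, arg = min(
                (max(h[c][u] for c in pred[om]) + proc[(om, u)]
                 + w[om] * d[u][v], u)
                for u in range(n))
            h[om].append(best)
            x[om].append(arg)
    delay = max(h[c][sink] for c in pred[root]) + proc[(root, sink)]
    emb = {root: sink}               # backtracking, parents before children
    for om in reversed(internal):
        succ = next(p for p, cs in pred.items() if om in cs)
        emb[om] = x[om][emb[succ]]
    emb.update(sources)
    return delay, emb
```

Each of the p − 1 edges of G is scanned once over all n² endpoint mappings, which is consistent with the O(pn²) running time established for Algorithm 1.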
Algorithm Description: In each iteration Algorithm 1 maintains the following data structures:

Algorithm 1: Optimal embedding algorithm to solve MinDelay for tree graphs
Input: Network graph N = (V, E), |V| = n, |E| = m, weight function T : E → R⁺, tree computation graph G = (Ω, Γ), |Ω| = p, |Γ| = p − 1, weight function W : Γ → R⁺, cost function P : Ω × V → R⁺.
Output: Embedding E with minimum delay under T, W, and P.
1: [[D = d_uv]] // n × n distance matrix for N.
   // Initialization of tables
2: h_l(v) := 0 ∀v ∈ V, l ∈ [1, p − 1];
3: f_l(u, v) := 0 ∀u, v ∈ V, l ∈ [1, p − 1];
4: for l = 1 to K do
5:   for all v ∈ V do
6:     f_l(s_l, v) := W(ω_l, Φ⁺(ω_l)) d_{s_l v};
7:     h_l(v) ← f_l(s_l, v);
8:     x_l(v) ← s_l;
9:   end for
10: end for
11: for l = K + 1 to (p − 1) do
12:   for all v ∈ V do
13:     for all u ∈ V do
14:       f_l(u, v) := P(ω_l, u) + W(ω_l, Φ⁺(ω_l)) d_uv;
15:     end for
16:     h_l(v) ← min_u {max[h_i(u) | ω_i ∈ Φ⁻(ω_l)] + f_l(u, v)};
17:     x_l(v) ← arg min_u {max[h_i(u) | ω_i ∈ Φ⁻(ω_l)] + f_l(u, v)};
18:   end for
19: end for
20: d(E) := max[h_i(t) | ω_i ∈ Φ⁻(ω_p)] + P(ω_p, t);
21: E(ω_p) = t;
   // Backtracking
22: for l = (p − 1) to 1 do
23:   E(ω_l) = x_l(E(Φ⁺(ω_l)));
24: end for

1) f_l(u, v): the delay associated with the edge (ω_l, Φ⁺(ω_l)) and the vertex ω_l when ω_l and Φ⁺(ω_l) are mapped to vertices u and v, respectively.
2) h_l(v): the optimal delay of the path leading to the vertex Φ⁺(ω_l) (via ω_l) when it is mapped to v ∈ N. The algorithm also stores in x_l(v) the mapping of the vertex ω_l corresponding to this value.

After initializing these data structures to zero (lines 2, 3) the algorithm completes in the following two steps.

Lines 4–10: This is the iteration over all the sources in G. As the mapping of source ω_i ∈ G is fixed to source s_i ∈ N, here we just calculate the minimum delay required to reach all the vertices from the source s_i. Note that the processing delay at a source is zero, i.e., P(ω_i, s_i) = 0.

Lines 11–19: This is the main loop of the algorithm, which runs over all the remaining vertices of G from ω_{K+1} to ω_{p−1}. In each iteration f_l(u, v) is updated for all possible mappings of the vertices ω_l and Φ⁺(ω_l) (lines 13–15). The function f_l(u, v) is computed by adding the following delay terms.
P(ω_l, u): the processing delay of the vertex ω_l when it is performed at vertex u ∈ V.
W(ω_l, Φ⁺(ω_l)) d_uv: the communication delay of the edge (ω_l, Φ⁺(ω_l)) when its endpoints are mapped to u and v, respectively.
Then h_l(v) is updated in line 16. Note that max[h_i(u) | ω_i ∈ Φ⁻(ω_l)] is the minimum delay up to the vertex ω_l when it is mapped to u. This is equivalent to the first term on the right-hand side of (1) (Section II, Page 3). Together with the processing delay P(ω_l, u) this gives the delay of the vertex ω_l when mapped to u. As mentioned earlier, the algorithm also stores in x_l(v) the mapping u which minimizes h_l(v).

Lines 20–24: Once all the delays are computed, the algorithm computes the delay at the sink vertex (as the mapping of ω_p is fixed to t in the embedding E) and finds the mapping of the vertices of G onto N which achieves this value by backtracking from the sink to the sources.

1) Analysis of Algorithm 1: Theorem 4: Algorithm 1 solves MinDelay when G is a tree and runs in O(pn²) time. Recall that p and n are the numbers of vertices in G and N respectively.
Proof: We give the proof of correctness of Algorithm 1 only for the case when the computation graph is unweighted; the proof extends easily to the case when there are weights on the edges of G. Recall that the delay of an embedding is defined recursively over the vertices of G starting from the sink vertex. It is sufficient to prove that at any iteration l the algorithm computes the optimal delay of embedding the path from any source ω_i to an intermediate vertex Φ⁺(ω_l) via ω_l, for all possible embeddings of Φ⁺(ω_l). At the end it chooses the fixed mapping of the sink ω_p and traces back the optimal paths from the sink to all the sources via the intermediate vertices. We prove this inductively.

Let x̄_i be the assignment of ω_i in N, and at any iteration let Φ⁺(ω_l) = ω_j for some j ∈ [1, p]. The optimal path from any source ω_i, i ∈ [1, K], to its successor vertex Φ⁺(ω_i) = ω_j is just the shortest path between s_i and the vertex to which Φ⁺(ω_i) will eventually be mapped; its delay equals d_{s_i x̄_j}. It is easy to verify that the algorithm stores this value in the data structure h_l(v) for l ∈ [1, K] and all v ∈ V (lines 4–10).

Assuming that the optimal delays up to the (l − 1)-th run are computed by the algorithm and stored in the h_i's, we show that at the l-th run the algorithm computes the optimal delay. The optimal delay of the path from the sources to Φ⁺(ω_l) via ω_l is given by g_l(x̄_j) := min_{x̄_l} { d(x̄_l) + d_{x̄_l x̄_j} }, where d(x̄_l) is the optimal delay of the path up to ω_l. The optimal delay d(x̄_l) can further be expanded in terms of the delays of its predecessors as d(x̄_l) := min_{x̄_l} { max_{ω_i ∈ Φ⁻(ω_l)} [d(x̄_i) + d_{x̄_i x̄_l}] + P(ω_l, x̄_l) }. Substituting the value of d(x̄_l) in g_l(x̄_j) we get

g_l(x̄_j) = min_{x̄_l} { max_{ω_i ∈ Φ⁻(ω_l)} g_i(x̄_l) + P(ω_l, x̄_l) + d_{x̄_l x̄_j} }.   (3)

Recall line 14 of Algorithm 1, which computes f_l(u, v) = P(ω_l, u) + d_uv. This is exactly the last two terms of the right-hand side of (3) when x̄_l = u and x̄_j = v.
Now observe line 16 of Algorithm 1, which computes h_l(v) = min_u {max[h_i(u) | ω_i ∈ Φ⁻(ω_l)] + f_l(u, v)}, where h_i(u) is the optimal delay of the path leading to ω_l via its predecessor ω_i. This optimal path delay satisfies h_i(u) = g_i(x̄_l) for x̄_l = u ∈ V. Hence Algorithm 1 indeed computes the optimal delay of the path leading to Φ⁺(ω_l) via ω_l for all possible mappings v ∈ V of Φ⁺(ω_l), and stores it in h_l(v) at iteration l.

Recall that G is a tree, so the total number of its edges is p − 1, and Algorithm 1 executes once for each edge. In each iteration it computes the delay of an edge for all possible mappings of its endpoints, which requires O(n²) time (line 14), where n is the number of vertices in N. It then adds this delay to the delay of its predecessors and chooses the minimum (h_l(v) in line 16), which requires O(n) time. Hence the time to complete one iteration is O(n² + n) = O(n²), and the total time complexity of Algorithm 1 is O(pn²).

V. G IS A LAYERED GRAPH

We now consider the case when G is a layered graph. An example of a layered computation graph is shown in Fig. 4. We assume that there are r layers and that the number of vertices in each layer is at most k. The vertices at layer l are labelled ω_{l1}, ω_{l2}, ..., ω_{lk}. The directed graph has an edge (ω_{ai}, ω_{bj}) only if b = a or b = a + 1. We also assume that all the sources are at layer one and that there is only one sink, at the last layer.

We derive our motivation for this kind of computation graph from the MapReduce application framework. In the MapReduce framework each user comes to a network of processors with a set of Map and Reduce tasks. There is a precedence order between Map and Reduce tasks: a Reduce task cannot be started until the processing of the corresponding set of Map tasks is finished. Each task takes a predefined time to finish, and the outputs of the Map tasks are used by the corresponding Reduce tasks. This dependency can be represented by a directed graph with edges showing the dependency between the tasks.
The aim in this setting is to embed the Map and Reduce tasks on the processors such that the total time of computation and communication is minimized.

Fig. 4: Layered computation graph (r layers, each of width at most k)

We explain our motivation with the following example from [2].

Example 3: Consider a typical database query by a server (call it the sink) to check the number of occurrences of different words in two large files which are available at two separate servers. The task of calculating the number of occurrences of each word inside the files can be divided into the following sub-tasks.

Splitting: First the files are split into smaller sub-files such that each sub-file can be processed by a processor in the network.

Mapping: Each sub-file is then parsed to get the number of times each word occurs in it.

Shuffling and reducing: Once the count from each sub-file is available, the counts of one word are transported to one processor to compute the final count of that word. Each processor adds all the individual counts and outputs the final count of each word.

Final result: Finally the result is transported to the node which asked the query.

Fig. 5 shows a typical MapReduce data flow diagram for this problem. The aim is to determine the processors for each of the sub-tasks such that the time to answer the query at the sink is minimized. The whole process can be represented by a directed layered graph, with each layer representing one sub-task and a vertex at a layer representing a particular sub-task instance. Observe that the edges in this graph are only between consecutive layers, and the operation at a vertex cannot start until the data from all its predecessors is available.

Now we present an algorithm which solves the MinCost problem for layered graphs in polynomial time.

Algorithm overview: Like Algorithm 1, Algorithm 2 has two phases, a forward pass and a backward tracking. The forward pass is a dynamic program that iterates over the layers of the computation graph. In the first iteration it computes the optimal mappings of all the vertices of layer 1 corresponding to each possible mapping of the vertices of layer 2.
If a vertex is a source vertex then its mapping is always fixed to the corresponding source vertex in N. Similarly, in iteration l it computes the optimal mapping of the vertices of layer l, each corresponding to a possible mapping of the vertices of layer l + 1. It also computes the optimal cost up to layer l + 1 for every possible mapping of the vertices of layer l + 1. Once it reaches the last layer, the algorithm chooses the mapping of the vertices of the last layer which minimizes the overall cost and backtracks to get the corresponding mappings of all the previous layers.

Fig. 5: Sub-tasks and data flow diagram of a typical database query in the MapReduce framework

Algorithm Description: Algorithm 2 iterates over the layers of G and in each iteration maintains the following two data structures:
1) f_l(X_i, Y_j), the cost of embedding all the vertices up to layer l when the vertices at layer (l + 1) are placed at Y_j and the vertices of layer l are placed at X_i, where X_i, Y_j ⊆ V are of size k.
2) h_l(Y_j), the optimal cost of embedding all the vertices (and corresponding edges) up to layer l when the vertices at layer (l + 1) are mapped to an ordered subset Y_j ⊆ V of size k.

After initializing these data structures to 0 (lines 5, 6) the algorithm completes in the following three steps.

Lines 7–15: This is the main loop of the algorithm, which runs from the first layer (the sources) to the last-but-one layer. At each layer l the data structure f_l(X_i, Y_j) is updated for all possible combinations of size-k subsets X_i and Y_j of V (lines 7–11). The following cost terms are added together to compute f_l(X_i, Y_j), along with the optimal cost up to layer l − 1, h_{l−1}(X_i).

∑_{u=1}^{k} P(ω_{lu}, a_u): the cost of placing the computation node ω_{lu} ∈ Ω at a_u ∈ X_i ⊆ V, for each node of the current layer.
∑_{a_u ∈ X_i, b_v ∈ Y_j : (ω_{lu}, ω_{(l+1)v}) ∈ Γ} W(ω_{lu}, ω_{(l+1)v}) d_{a_u b_v}: the total communication cost when node ω_{lu} ∈ Ω is placed at node a_u ∈ V and ω_{(l+1)v} ∈ Ω is placed at b_v ∈ V, i.e., the product of the corresponding costs in the computation graph (weight W) and the communication graph (weight T). This term captures the cost of all the edges between layer l and layer l + 1.

∑_{a_u, b_v ∈ X_i : (ω_{lu}, ω_{lv}) ∈ Γ} W(ω_{lu}, ω_{lv}) d_{a_u b_v}: the total communication cost when node ω_{lu} ∈ Ω is placed at node a_u ∈ V and ω_{lv} ∈ Ω is placed at b_v ∈ V, again the product of the corresponding costs in the computation graph (weight W) and the communication graph (weight T). This term captures the cost of all the edges within layer l.

Lines 16–20: Here the algorithm computes the total cost of embedding the graph G by adding the cost of the edges between the vertices of the last layer r, if any, when the vertices of the last layer are placed at X_i. It then computes the optimal cost C(E) of the embedding E by choosing the placement of the last layer which minimizes the overall cost (lines 19–20). The vector Z_r stores the mapping of the vertices at layer r under the embedding E.

Lines 21–23: After finding the optimal mapping for the vertices at layer r, the algorithm traces back the corresponding optimal mappings for the vertices at layer r − 1, and so on all the way up to the first layer.

Algorithm 2: Optimal embedding algorithm for layered graphs
Input: Network graph N = (V, E), |V| = n, |E| = m, weight function T : E → R⁺, layered computation graph G = (Ω, Γ), |Ω| = p, |Γ| = q, weight function W : Γ → R⁺, cost function P : Ω × V → R⁺.
Output: Embedding E with minimum cost
1: [[D = d_{u_i u_j}]] // n × n distance matrix for N.
2: X := {X_i : X_i ⊆ V and |X_i| = k} // |X| = n^k, X_i = {a_1, a_2, ..., a_k}
3: Y := {Y_i : Y_i ⊆ V and |Y_i| = k} // |Y| = n^k, Y_i = {b_1, b_2, ..., b_k}
4: Z_i := [z_{i1}, z_{i2}, ..., z_{ik}] // z_{ij} ∈ V is the assignment of node ω_{ij} ∈ Ω in the optimal embedding
   // Initialization of tables
5: h_l(X_i) := 0 ∀X_i ∈ X, l ∈ [1, r];
6: f_l(X_i, Y_j) := 0 ∀X_i ∈ X, Y_j ∈ Y, l ∈ [1, r];
7: for l = 1 to (r − 1) do
8:   for j = 1 to |Y| do
9:     for i = 1 to |X| do
10:      f_l(X_i, Y_j) := ∑_{u=1}^{k} P(ω_{lu}, a_u) + ∑_{a_u ∈ X_i, b_v ∈ Y_j : (ω_{lu}, ω_{(l+1)v}) ∈ Γ} W(ω_{lu}, ω_{(l+1)v}) d_{a_u b_v} + ∑_{a_u, b_v ∈ X_i : (ω_{lu}, ω_{lv}) ∈ Γ} W(ω_{lu}, ω_{lv}) d_{a_u b_v} + h_{l−1}(X_i);
11:    end for
12:    h_l(Y_j) ← min_{X_i ∈ X} f_l(X_i, Y_j);
13:    x_l(Y_j) ← arg min_{X_i ∈ X} f_l(X_i, Y_j);
14:  end for
15: end for
16: for i = 1 to |X| do
17:   h_r(X_i) := ∑_{a_u, b_v ∈ X_i : (ω_{ru}, ω_{rv}) ∈ Γ} W(ω_{ru}, ω_{rv}) d_{a_u b_v} + h_{r−1}(X_i);
18: end for
19: C(E) := min_{X_j ∈ X} h_r(X_j);
20: Z_r = [z_{r1}, ..., z_{rk}] = arg min_{X_j ∈ X} h_r(X_j);
   // Backtracking
21: for l = r − 1 to 1 do
22:   Z_l = x_l(Z_{l+1});
23: end for
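The forward pass and backtracking above can be sketched compactly. The interface below is hypothetical, not the paper's pseudocode: intra-layer edges are omitted and the weights W, T are taken as unity, so edge costs reduce to network distances; fixed source and sink mappings can be emulated with large processing costs in `proc`.

```python
from itertools import product

def min_cost_layered(d, layers, edges, proc):
    """Forward DP over per-layer placements, then backtracking.

    d      -- d[u][v], shortest-path distances in the network N
    layers -- layers[l] = list of computation-graph vertices at layer l
    edges  -- set of directed edges (a, b) of G between consecutive layers
    proc   -- proc[(omega, u)] = cost of computing omega at network node u
    """
    n = len(d)
    place = lambda layer: list(product(range(n), repeat=len(layer)))

    def node_cost(l, X):  # processing cost of layer l placed at tuple X
        return sum(proc[(om, u)] for om, u in zip(layers[l], X))

    def comm(l, X, Y):    # cost of the edges between layers l and l + 1
        return sum(d[X[i]][Y[j]]
                   for i, a in enumerate(layers[l])
                   for j, b in enumerate(layers[l + 1])
                   if (a, b) in edges)

    best = {X: node_cost(0, X) for X in place(layers[0])}
    back = []
    for l in range(len(layers) - 1):
        new, bk = {}, {}
        for Y in place(layers[l + 1]):
            val, arg = min((best[X] + comm(l, X, Y), X) for X in best)
            new[Y] = val + node_cost(l + 1, Y)
            bk[Y] = arg
        best = new
        back.append(bk)
    cost, last = min((c, Y) for Y, c in best.items())
    placement = [last]
    for bk in reversed(back):          # trace optimal placements backwards
        placement.append(bk[placement[-1]])
    return cost, placement[::-1]       # one placement tuple per layer
```

Each of the r − 1 layer transitions examines all n^k × n^k placement pairs, which is consistent with the O(r n^{2k}) running time of Theorem 5.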
To simplify the description of the algorithm we do not show the fixed mappings of the sources and the sink into the network graph in Algorithm 2. It is easy to see that to incorporate these fixed mappings the algorithm need only consider those subsets X_i, Y_j of V which map source ω_i (sink ν) to the corresponding source s_i (sink t) while computing the data structures of the layer on which ω_i (ν) is present.

A. Analysis of Algorithm 2

Theorem 5: Algorithm 2 solves MinCost, and its time complexity is O(r n^{2k}) when G is a layered graph with r layers and at most k nodes per layer.

Proof: We prove the correctness of Algorithm 2 when G is an unweighted graph and the processing costs are all zero; we also assume that there are no edges between the vertices of a layer. The proof extends easily to the weight functions W and P. It is sufficient to show that at each iteration l the algorithm computes the optimal cost of embedding the computation graph up to layer l, for every possible embedding of the nodes of layer (l + 1), given that the cost computed up to layer (l − 1) is optimal.

Let x̄_i be a vector of size 1 × k whose j-th element is the network node assigned to ω_{ij} ∈ Ω; in other words, x̄_i is a size-k subset of V. Let us define

d(x̄_i, x̄_{i+1}) := ∑_{a_u ∈ x̄_i, b_v ∈ x̄_{i+1} : (ω_{iu}, ω_{(i+1)v}) ∈ Γ} d_{a_u b_v}.   (4)

This represents the sum of the distances between all adjacent nodes in layers i and (i + 1) when the nodes of layer i are embedded at x̄_i and the nodes of layer (i + 1) are embedded at x̄_{i+1}. The total cost of any embedding can then be written as C := ∑_{i=1}^{r−1} d(x̄_i, x̄_{i+1}). To obtain the optimal embedding we have to minimize this over all possible mappings x̄_i, so the optimal cost can be written as C_opt = min_{x̄_1, ..., x̄_r} ∑_{i=1}^{r−1} d(x̄_i, x̄_{i+1}). Separating the terms containing x̄_1 and some algebraic manipulation give

C_opt = min_{x̄_2, ..., x̄_r} ( g_1(x̄_2) + ∑_{i=2}^{r−1} d(x̄_i, x̄_{i+1}) ),   (5)

where g_1(x̄_2) = min_{x̄_1} d(x̄_1, x̄_2).
Similarly, after minimizing with respect to x̄_l we can write the cost as

C_opt = min_{x̄_{l+1}, ..., x̄_r} ( g_l(x̄_{l+1}) + ∑_{i=l+1}^{r−1} d(x̄_i, x̄_{i+1}) ),   (6)

where g_l(x̄_{l+1}) = min_{x̄_l} { g_{l−1}(x̄_l) + d(x̄_l, x̄_{l+1}) }. It now suffices to show that the algorithm indeed computes g_l at the l-th iteration. Recall that x̄_l and x̄_{l+1} are size-k subsets of V which represent the mappings of the nodes of layers l and l + 1 respectively. In the algorithm, X_i represents a size-k subset of V to which the nodes of the layer


Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology

More information

Written Exercise Sheet 5

Written Exercise Sheet 5 jian-jia.chen [ ] u-dormund.de lea.schoenberger [ ] u-dormund.de Exercise for he lecure Embedded Sysems Winersemeser 17/18 Wrien Exercise Shee 5 Hins: These assignmens will be discussed a E23 OH14, from

More information

Echocardiography Project and Finite Fourier Series

Echocardiography Project and Finite Fourier Series Echocardiography Projec and Finie Fourier Series 1 U M An echocardiagram is a plo of how a porion of he hear moves as he funcion of ime over he one or more hearbea cycles If he hearbea repeas iself every

More information

15. Vector Valued Functions

15. Vector Valued Functions 1. Vecor Valued Funcions Up o his poin, we have presened vecors wih consan componens, for example, 1, and,,4. However, we can allow he componens of a vecor o be funcions of a common variable. For example,

More information

Stopping Set Elimination for LDPC Codes

Stopping Set Elimination for LDPC Codes Sopping Se Eliminaion for LDPC Codes Anxiao (Andrew) Jiang, Pulakesh Upadhyaya, Ying Wang, Krishna R. Narayanan Hongchao Zhou, Jin Sima, and Jehoshua Bruck Compuer Science and Engineering Deparmen, Texas

More information

Computer-Aided Analysis of Electronic Circuits Course Notes 3

Computer-Aided Analysis of Electronic Circuits Course Notes 3 Gheorghe Asachi Technical Universiy of Iasi Faculy of Elecronics, Telecommunicaions and Informaion Technologies Compuer-Aided Analysis of Elecronic Circuis Course Noes 3 Bachelor: Telecommunicaion Technologies

More information

Notes on Kalman Filtering

Notes on Kalman Filtering Noes on Kalman Filering Brian Borchers and Rick Aser November 7, Inroducion Daa Assimilaion is he problem of merging model predicions wih acual measuremens of a sysem o produce an opimal esimae of he curren

More information

Continuous Time. Time-Domain System Analysis. Impulse Response. Impulse Response. Impulse Response. Impulse Response. ( t) + b 0.

Continuous Time. Time-Domain System Analysis. Impulse Response. Impulse Response. Impulse Response. Impulse Response. ( t) + b 0. Time-Domain Sysem Analysis Coninuous Time. J. Robers - All Righs Reserved. Edied by Dr. Rober Akl 1. J. Robers - All Righs Reserved. Edied by Dr. Rober Akl 2 Le a sysem be described by a 2 y ( ) + a 1

More information

Problem Set #1. i z. the complex propagation constant. For the characteristic impedance:

Problem Set #1. i z. the complex propagation constant. For the characteristic impedance: Problem Se # Problem : a) Using phasor noaion, calculae he volage and curren waves on a ransmission line by solving he wave equaion Assume ha R, L,, G are all non-zero and independen of frequency From

More information

Appendix to Online l 1 -Dictionary Learning with Application to Novel Document Detection

Appendix to Online l 1 -Dictionary Learning with Application to Novel Document Detection Appendix o Online l -Dicionary Learning wih Applicaion o Novel Documen Deecion Shiva Prasad Kasiviswanahan Huahua Wang Arindam Banerjee Prem Melville A Background abou ADMM In his secion, we give a brief

More information

Lecture 9: September 25

Lecture 9: September 25 0-725: Opimizaion Fall 202 Lecure 9: Sepember 25 Lecurer: Geoff Gordon/Ryan Tibshirani Scribes: Xuezhi Wang, Subhodeep Moira, Abhimanu Kumar Noe: LaTeX emplae couresy of UC Berkeley EECS dep. Disclaimer:

More information

Math 334 Fall 2011 Homework 11 Solutions

Math 334 Fall 2011 Homework 11 Solutions Dec. 2, 2 Mah 334 Fall 2 Homework Soluions Basic Problem. Transform he following iniial value problem ino an iniial value problem for a sysem: u + p()u + q() u g(), u() u, u () v. () Soluion. Le v u. Then

More information

Approximation Algorithms for Unique Games via Orthogonal Separators

Approximation Algorithms for Unique Games via Orthogonal Separators Approximaion Algorihms for Unique Games via Orhogonal Separaors Lecure noes by Konsanin Makarychev. Lecure noes are based on he papers [CMM06a, CMM06b, LM4]. Unique Games In hese lecure noes, we define

More information

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should Cambridge Universiy Press 978--36-60033-7 Cambridge Inernaional AS and A Level Mahemaics: Mechanics Coursebook Excerp More Informaion Chaper The moion of projeciles In his chaper he model of free moion

More information

( ) ( ) if t = t. It must satisfy the identity. So, bulkiness of the unit impulse (hyper)function is equal to 1. The defining characteristic is

( ) ( ) if t = t. It must satisfy the identity. So, bulkiness of the unit impulse (hyper)function is equal to 1. The defining characteristic is UNIT IMPULSE RESPONSE, UNIT STEP RESPONSE, STABILITY. Uni impulse funcion (Dirac dela funcion, dela funcion) rigorously defined is no sricly a funcion, bu disribuion (or measure), precise reamen requires

More information

Laplace transfom: t-translation rule , Haynes Miller and Jeremy Orloff

Laplace transfom: t-translation rule , Haynes Miller and Jeremy Orloff Laplace ransfom: -ranslaion rule 8.03, Haynes Miller and Jeremy Orloff Inroducory example Consider he sysem ẋ + 3x = f(, where f is he inpu and x he response. We know is uni impulse response is 0 for

More information

SOLUTIONS TO ECE 3084

SOLUTIONS TO ECE 3084 SOLUTIONS TO ECE 384 PROBLEM 2.. For each sysem below, specify wheher or no i is: (i) memoryless; (ii) causal; (iii) inverible; (iv) linear; (v) ime invarian; Explain your reasoning. If he propery is no

More information

ψ(t) = V x (0)V x (t)

ψ(t) = V x (0)V x (t) .93 Home Work Se No. (Professor Sow-Hsin Chen Spring Term 5. Due March 7, 5. This problem concerns calculaions of analyical expressions for he self-inermediae scaering funcion (ISF of he es paricle in

More information

IMPLICIT AND INVERSE FUNCTION THEOREMS PAUL SCHRIMPF 1 OCTOBER 25, 2013

IMPLICIT AND INVERSE FUNCTION THEOREMS PAUL SCHRIMPF 1 OCTOBER 25, 2013 IMPLICI AND INVERSE FUNCION HEOREMS PAUL SCHRIMPF 1 OCOBER 25, 213 UNIVERSIY OF BRIISH COLUMBIA ECONOMICS 526 We have exensively sudied how o solve sysems of linear equaions. We know how o check wheher

More information

Differential Equations

Differential Equations Mah 21 (Fall 29) Differenial Equaions Soluion #3 1. Find he paricular soluion of he following differenial equaion by variaion of parameer (a) y + y = csc (b) 2 y + y y = ln, > Soluion: (a) The corresponding

More information

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD

PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD PENALIZED LEAST SQUARES AND PENALIZED LIKELIHOOD HAN XIAO 1. Penalized Leas Squares Lasso solves he following opimizaion problem, ˆβ lasso = arg max β R p+1 1 N y i β 0 N x ij β j β j (1.1) for some 0.

More information

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes Some common engineering funcions 2.7 Inroducion This secion provides a caalogue of some common funcions ofen used in Science and Engineering. These include polynomials, raional funcions, he modulus funcion

More information

A Hop Constrained Min-Sum Arborescence with Outage Costs

A Hop Constrained Min-Sum Arborescence with Outage Costs A Hop Consrained Min-Sum Arborescence wih Ouage Coss Rakesh Kawara Minnesoa Sae Universiy, Mankao, MN 56001 Email: Kawara@mnsu.edu Absrac The hop consrained min-sum arborescence wih ouage coss problem

More information

Reading from Young & Freedman: For this topic, read sections 25.4 & 25.5, the introduction to chapter 26 and sections 26.1 to 26.2 & 26.4.

Reading from Young & Freedman: For this topic, read sections 25.4 & 25.5, the introduction to chapter 26 and sections 26.1 to 26.2 & 26.4. PHY1 Elecriciy Topic 7 (Lecures 1 & 11) Elecric Circuis n his opic, we will cover: 1) Elecromoive Force (EMF) ) Series and parallel resisor combinaions 3) Kirchhoff s rules for circuis 4) Time dependence

More information

16 Max-Flow Algorithms

16 Max-Flow Algorithms A process canno be undersood by sopping i. Undersanding mus move wih he flow of he process, mus join i and flow wih i. The Firs Law of Mena, in Frank Herber s Dune (196) There s a difference beween knowing

More information

Article from. Predictive Analytics and Futurism. July 2016 Issue 13

Article from. Predictive Analytics and Futurism. July 2016 Issue 13 Aricle from Predicive Analyics and Fuurism July 6 Issue An Inroducion o Incremenal Learning By Qiang Wu and Dave Snell Machine learning provides useful ools for predicive analyics The ypical machine learning

More information

IB Physics Kinematics Worksheet

IB Physics Kinematics Worksheet IB Physics Kinemaics Workshee Wrie full soluions and noes for muliple choice answers. Do no use a calculaor for muliple choice answers. 1. Which of he following is a correc definiion of average acceleraion?

More information

For example, the comb filter generated from. ( ) has a transfer function. e ) has L notches at ω = (2k+1)π/L and L peaks at ω = 2π k/l,

For example, the comb filter generated from. ( ) has a transfer function. e ) has L notches at ω = (2k+1)π/L and L peaks at ω = 2π k/l, Comb Filers The simple filers discussed so far are characeried eiher by a single passband and/or a single sopband There are applicaions where filers wih muliple passbands and sopbands are required The

More information

Orientation. Connections between network coding and stochastic network theory. Outline. Bruce Hajek. Multicast with lost packets

Orientation. Connections between network coding and stochastic network theory. Outline. Bruce Hajek. Multicast with lost packets Connecions beween nework coding and sochasic nework heory Bruce Hajek Orienaion On Thursday, Ralf Koeer discussed nework coding: coding wihin he nework Absrac: Randomly generaed coded informaion blocks

More information

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power

Learning a Class from Examples. Training set X. Class C 1. Class C of a family car. Output: Input representation: x 1 : price, x 2 : engine power Alpaydin Chaper, Michell Chaper 7 Alpaydin slides are in urquoise. Ehem Alpaydin, copyrigh: The MIT Press, 010. alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/ ehem/imle All oher slides are based on Michell.

More information

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 175 CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 10.1 INTRODUCTION Amongs he research work performed, he bes resuls of experimenal work are validaed wih Arificial Neural Nework. From he

More information

14 Autoregressive Moving Average Models

14 Autoregressive Moving Average Models 14 Auoregressive Moving Average Models In his chaper an imporan parameric family of saionary ime series is inroduced, he family of he auoregressive moving average, or ARMA, processes. For a large class

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

Math 315: Linear Algebra Solutions to Assignment 6

Math 315: Linear Algebra Solutions to Assignment 6 Mah 35: Linear Algebra s o Assignmen 6 # Which of he following ses of vecors are bases for R 2? {2,, 3, }, {4,, 7, 8}, {,,, 3}, {3, 9, 4, 2}. Explain your answer. To generae he whole R 2, wo linearly independen

More information

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles

Diebold, Chapter 7. Francis X. Diebold, Elements of Forecasting, 4th Edition (Mason, Ohio: Cengage Learning, 2006). Chapter 7. Characterizing Cycles Diebold, Chaper 7 Francis X. Diebold, Elemens of Forecasing, 4h Ediion (Mason, Ohio: Cengage Learning, 006). Chaper 7. Characerizing Cycles Afer compleing his reading you should be able o: Define covariance

More information

Pade and Laguerre Approximations Applied. to the Active Queue Management Model. of Internet Protocol

Pade and Laguerre Approximations Applied. to the Active Queue Management Model. of Internet Protocol Applied Mahemaical Sciences, Vol. 7, 013, no. 16, 663-673 HIKARI Ld, www.m-hikari.com hp://dx.doi.org/10.1988/ams.013.39499 Pade and Laguerre Approximaions Applied o he Acive Queue Managemen Model of Inerne

More information

Scheduling of Crude Oil Movements at Refinery Front-end

Scheduling of Crude Oil Movements at Refinery Front-end Scheduling of Crude Oil Movemens a Refinery Fron-end Ramkumar Karuppiah and Ignacio Grossmann Carnegie Mellon Universiy ExxonMobil Case Sudy: Dr. Kevin Furman Enerprise-wide Opimizaion Projec March 15,

More information

Comments on Window-Constrained Scheduling

Comments on Window-Constrained Scheduling Commens on Window-Consrained Scheduling Richard Wes Member, IEEE and Yuing Zhang Absrac This shor repor clarifies he behavior of DWCS wih respec o Theorem 3 in our previously published paper [1], and describes

More information

Lecture 23: I. Data Dependence II. Dependence Testing: Formulation III. Dependence Testers IV. Loop Parallelization V.

Lecture 23: I. Data Dependence II. Dependence Testing: Formulation III. Dependence Testers IV. Loop Parallelization V. Lecure 23: Array Dependence Analysis & Parallelizaion I. Daa Dependence II. Dependence Tesing: Formulaion III. Dependence Tesers IV. Loop Parallelizaion V. Loop Inerchange [ALSU 11.6, 11.7.8] Phillip B.

More information

Online Appendix to Solution Methods for Models with Rare Disasters

Online Appendix to Solution Methods for Models with Rare Disasters Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,

More information

Stationary Distribution. Design and Analysis of Algorithms Andrei Bulatov

Stationary Distribution. Design and Analysis of Algorithms Andrei Bulatov Saionary Disribuion Design and Analysis of Algorihms Andrei Bulaov Algorihms Markov Chains 34-2 Classificaion of Saes k By P we denoe he (i,j)-enry of i, j Sae is accessible from sae if 0 for some k 0

More information

Math 2142 Exam 1 Review Problems. x 2 + f (0) 3! for the 3rd Taylor polynomial at x = 0. To calculate the various quantities:

Math 2142 Exam 1 Review Problems. x 2 + f (0) 3! for the 3rd Taylor polynomial at x = 0. To calculate the various quantities: Mah 4 Eam Review Problems Problem. Calculae he 3rd Taylor polynomial for arcsin a =. Soluion. Le f() = arcsin. For his problem, we use he formula f() + f () + f ()! + f () 3! for he 3rd Taylor polynomial

More information

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017

Two Popular Bayesian Estimators: Particle and Kalman Filters. McGill COMP 765 Sept 14 th, 2017 Two Popular Bayesian Esimaors: Paricle and Kalman Filers McGill COMP 765 Sep 14 h, 2017 1 1 1, dx x Bel x u x P x z P Recall: Bayes Filers,,,,,,, 1 1 1 1 u z u x P u z u x z P Bayes z = observaion u =

More information

Retrieval Models. Boolean and Vector Space Retrieval Models. Common Preprocessing Steps. Boolean Model. Boolean Retrieval Model

Retrieval Models. Boolean and Vector Space Retrieval Models. Common Preprocessing Steps. Boolean Model. Boolean Retrieval Model 1 Boolean and Vecor Space Rerieval Models Many slides in his secion are adaped from Prof. Joydeep Ghosh (UT ECE) who in urn adaped hem from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong) Rerieval

More information

Expert Advice for Amateurs

Expert Advice for Amateurs Exper Advice for Amaeurs Ernes K. Lai Online Appendix - Exisence of Equilibria The analysis in his secion is performed under more general payoff funcions. Wihou aking an explici form, he payoffs of he

More information

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n Module Fick s laws of diffusion Fick s laws of diffusion and hin film soluion Adolf Fick (1855) proposed: d J α d d d J (mole/m s) flu (m /s) diffusion coefficien and (mole/m 3 ) concenraion of ions, aoms

More information

THE MATRIX-TREE THEOREM

THE MATRIX-TREE THEOREM THE MATRIX-TREE THEOREM 1 The Marix-Tree Theorem. The Marix-Tree Theorem is a formula for he number of spanning rees of a graph in erms of he deerminan of a cerain marix. We begin wih he necessary graph-heoreical

More information

11!Hí MATHEMATICS : ERDŐS AND ULAM PROC. N. A. S. of decomposiion, properly speaking) conradics he possibiliy of defining a counably addiive real-valu

11!Hí MATHEMATICS : ERDŐS AND ULAM PROC. N. A. S. of decomposiion, properly speaking) conradics he possibiliy of defining a counably addiive real-valu ON EQUATIONS WITH SETS AS UNKNOWNS BY PAUL ERDŐS AND S. ULAM DEPARTMENT OF MATHEMATICS, UNIVERSITY OF COLORADO, BOULDER Communicaed May 27, 1968 We shall presen here a number of resuls in se heory concerning

More information

EE650R: Reliability Physics of Nanoelectronic Devices Lecture 9:

EE650R: Reliability Physics of Nanoelectronic Devices Lecture 9: EE65R: Reliabiliy Physics of anoelecronic Devices Lecure 9: Feaures of Time-Dependen BTI Degradaion Dae: Sep. 9, 6 Classnoe Lufe Siddique Review Animesh Daa 9. Background/Review: BTI is observed when he

More information

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check

More information

Algebraic Attacks on Summation Generators

Algebraic Attacks on Summation Generators Algebraic Aacks on Summaion Generaors Dong Hoon Lee, Jaeheon Kim, Jin Hong, Jae Woo Han, and Dukjae Moon Naional Securiy Research Insiue 161 Gajeong-dong, Yuseong-gu, Daejeon, 305-350, Korea {dlee,jaeheon,jinhong,jwhan,djmoon}@eri.re.kr

More information

SZG Macro 2011 Lecture 3: Dynamic Programming. SZG macro 2011 lecture 3 1

SZG Macro 2011 Lecture 3: Dynamic Programming. SZG macro 2011 lecture 3 1 SZG Macro 2011 Lecure 3: Dynamic Programming SZG macro 2011 lecure 3 1 Background Our previous discussion of opimal consumpion over ime and of opimal capial accumulaion sugges sudying he general decision

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

ADDITIONAL PROBLEMS (a) Find the Fourier transform of the half-cosine pulse shown in Fig. 2.40(a). Additional Problems 91

ADDITIONAL PROBLEMS (a) Find the Fourier transform of the half-cosine pulse shown in Fig. 2.40(a). Additional Problems 91 ddiional Problems 9 n inverse relaionship exiss beween he ime-domain and freuency-domain descripions of a signal. Whenever an operaion is performed on he waveform of a signal in he ime domain, a corresponding

More information

We just finished the Erdős-Stone Theorem, and ex(n, F ) (1 1/(χ(F ) 1)) ( n

We just finished the Erdős-Stone Theorem, and ex(n, F ) (1 1/(χ(F ) 1)) ( n Lecure 3 - Kövari-Sós-Turán Theorem Jacques Versraëe jacques@ucsd.edu We jus finished he Erdős-Sone Theorem, and ex(n, F ) ( /(χ(f ) )) ( n 2). So we have asympoics when χ(f ) 3 bu no when χ(f ) = 2 i.e.

More information

Lecture #8 Redfield theory of NMR relaxation

Lecture #8 Redfield theory of NMR relaxation Lecure #8 Redfield heory of NMR relaxaion Topics The ineracion frame of reference Perurbaion heory The Maser Equaion Handous and Reading assignmens van de Ven, Chapers 6.2. Kowalewski, Chaper 4. Abragam

More information

20. Applications of the Genetic-Drift Model

20. Applications of the Genetic-Drift Model 0. Applicaions of he Geneic-Drif Model 1) Deermining he probabiliy of forming any paricular combinaion of genoypes in he nex generaion: Example: If he parenal allele frequencies are p 0 = 0.35 and q 0

More information

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED

0.1 MAXIMUM LIKELIHOOD ESTIMATION EXPLAINED 0.1 MAXIMUM LIKELIHOOD ESTIMATIO EXPLAIED Maximum likelihood esimaion is a bes-fi saisical mehod for he esimaion of he values of he parameers of a sysem, based on a se of observaions of a random variable

More information

STATE-SPACE MODELLING. A mass balance across the tank gives:

STATE-SPACE MODELLING. A mass balance across the tank gives: B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing

More information