VLSI Circuit Performance Optimization by Geometric Programming


Annals of Operations Research 105, 37–60. Kluwer Academic Publishers. Manufactured in The Netherlands.

CHRIS CHU, cnchu@iastate.edu
Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA

D.F. WONG, wong@cs.utexas.edu
Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712, USA

This work was partially supported by the Texas Advanced Research Program and by a grant from the Intel Corporation.

Abstract. Delay of VLSI circuit components can be controlled by varying their sizes. In other words, the performance of VLSI circuits can be optimized by changing the sizes of the circuit components. In this paper, we define a special type of geometric program called a unary geometric program. We show that under the Elmore delay model, several commonly used formulations of the circuit component sizing problem considering delay, chip area and power dissipation can be reduced to unary geometric programs. We present a greedy algorithm to solve unary geometric programs optimally and efficiently. When applied to VLSI circuit component sizing, we prove that the runtime of the greedy algorithm is linear in the number of components in the circuit. In practice, we demonstrate that our unary-geometric-program based approach for circuit sizing is hundreds of times or more faster than other approaches.

Keywords: VLSI design, unary geometric programming, circuit performance optimization, transistor sizing, gate sizing, wire sizing, Lagrangian relaxation

1. Introduction

Since the invention of integrated circuits almost 40 years ago, sizing of circuit components (e.g., transistors and wire segments) has always been an effective technique for achieving desirable circuit performance. The reason is that both the resistance and the capacitance of a circuit component are functions of the component size. Since the delay of a circuit component can be modeled as the product of the resistance of the component and the capacitance of the subcircuit driven by the component, the delay of a circuit can be minimized by sizing its components. Both transistor/gate sizing [6,12,15,16,21] and wire sizing [2,4,7,9,18,20] have been shown to be effective in reducing circuit delay. Transistor sizing is the problem of changing the channel length of transistors. Gate sizing is basically the same as transistor sizing. A gate is a collection of transistors working together to perform a specific logic function. Gate sizing refers to the problem of sizing the transistors inside a gate simultaneously by the same factor. Wire sizing refers to the problem of determining the width of the wires at every point along the wires. To make the design and fabrication process easier, wires are usually divided into fixed-length segments and every point in a segment is sized to the same width.

Since transistor/gate sizes affect wire-sizing solutions and wire sizes affect transistor/gate-sizing solutions, it is beneficial to size both transistors/gates and wires simultaneously [2,7,8,17,19]. However, the simultaneous problem is harder to solve. In this paper, we consider performance optimization of VLSI circuits by sizing components. In order to simplify the presentation, we illustrate the idea by simultaneous gate and wire sizing. All techniques introduced in this paper can be easily applied to simultaneous transistor and wire sizing. The widely used Elmore delay model [11] is used here for delay calculation.

Various formulations of the sizing problem considering delay, chip area and power dissipation have been proposed. When delay alone is considered, two commonly used formulations are minimizing a weighted sum of the delays of components, and minimizing the maximum delay among all circuit outputs. Besides delay, it is desirable to minimize the chip area occupied by the circuit and the power dissipation of the circuit as well. All these objectives can be optimized effectively by circuit component sizing. However, these objectives are usually conflicting. As a result, to consider the tradeoff among these design objectives, formulations like minimizing the maximum delay among all circuit outputs subject to bounds on area/power, and minimizing area/power subject to delay bounds on all circuit outputs, have been proposed.

Fishburn and Dunlop [12] have already pointed out that for the transistor sizing problem, several formulations can be written as geometric programs [10]. In fact, by generalizing the idea of [12], it is not difficult to see that all formulations listed above can be written as geometric programs. However, it would be very slow to solve them by a general-purpose geometric programming solver. So instead of solving them exactly, many heuristics have been proposed [6,12,15,16]. Sapatnekar et al. [21] transformed the geometric programs for transistor sizing into convex programs and solved them by a sophisticated general-purpose convex programming solver based on an interior point method. This is the best known previous algorithm that can guarantee exact transistor sizing solutions. However, to optimize a circuit of only 832 transistors, the reported runtime is already 9 hours on a Sun SPARCstation.

In this paper, we define a special type of posynomial [10] and geometric program:

Definition 1. A unary posynomial is a posynomial of the following form:

    u(x_1,...,x_n) = Σ_{1≤i≤n} α_i/x_i + Σ_{1≤i≤n} β_i x_i + Σ_{1≤i,j≤n} γ_ij x_i/x_j,

where α_i, β_i and γ_ij for all i and j are non-negative constants.

Definition 2. A unary geometric program is a geometric program which minimizes a unary posynomial subject to upper and lower bounds on all variables. In other words, it is a geometric program of the following form:

    Minimize  u(x_1,...,x_n) = Σ_{1≤i≤n} α_i/x_i + Σ_{1≤i≤n} β_i x_i + Σ_{1≤i,j≤n} γ_ij x_i/x_j
    subject to  L_i ≤ x_i ≤ U_i  for all 1 ≤ i ≤ n,

where α_i, β_i, γ_ij, L_i and U_i for all i and j are non-negative constants.

As we show in section 2, the formulation of minimizing weighted component delay is a unary geometric program. For all other formulations, Chen, Chu and Wong [3] showed that they can be reduced by the Lagrangian relaxation technique to problems very similar to weighted component delay problems. We observe that these problems are also unary geometric programs. In other words, by solving unary geometric programs, all formulations of circuit component sizing above can be solved. To solve unary geometric programs, we present a greedy algorithm which is both optimal and efficient. In particular, for unary geometric programs corresponding to VLSI circuit component sizing, we prove that the runtime of the greedy algorithm is linear in the number of components in the circuit.

The rest of this paper is organized as follows. In section 2, we explain why different formulations of the circuit sizing problem can be reduced to unary geometric programs. In section 3, we present the greedy algorithm to solve unary geometric programs, prove its optimality and analyze its convergence rate. In section 4, we analyze the runtime of the greedy algorithm when applied to VLSI circuits. In section 5, experimental results showing the runtime and storage requirements of our approach to circuit sizing are presented.

2. Reduction to unary geometric programs

In this section, we show that any formulation of circuit component sizing with one of delay, area and power constituting the objective function and with constraints on the other two can be reduced to unary geometric programs.

For a general VLSI circuit, we can ignore all latches and optimize its combinational subcircuits. Therefore, we focus on combinational circuits below. Figure 1 illustrates a combinational circuit. We call a gate or a wire segment a circuit component. Let n be the number of components in the circuit. The circuit component sizing problem is to optimize some objective function subject to some constraints involving delay, area and power. For 1 ≤ i ≤ n, let x_i be the gate size if component i is a gate, or the segment width if component i is a wire segment. Let L_i and U_i be, respectively, the lower bound and the upper bound on the component size x_i, i.e., L_i ≤ x_i ≤ U_i.

In section 2.1, we first introduce the model that we use for delay calculation. In section 2.2, we show that the formulation of minimizing a weighted sum of component delays can be written directly as a unary geometric program. In section 2.3, we show that all other formulations can be reduced to unary geometric programs.

Figure 1. A combinational circuit.

Figure 2. The model of a gate by a switch-level RC circuit. Note that r_i = r̂_i/x_i and c_i = ĉ_i x_i + f_i, where r̂_i, ĉ_i and f_i are the unit size output resistance, the unit size gate area capacitance and the gate perimeter capacitance of the gate respectively. Although the gate shown here is a 2-input AND gate, the model can be easily generalized to any gate with any number of input pins.

2.1. Delay model

For the purpose of delay calculation, we model circuit components as RC circuits. A gate is modeled as a switch-level RC circuit as shown in figure 2. See [22] for a reference on this model. For this model, the output resistance r_i = r̂_i/x_i, and the input capacitance of a pin c_i = ĉ_i x_i + f_i, where r̂_i, ĉ_i and f_i are the unit size output resistance, the unit size gate area capacitance and the gate perimeter capacitance of gate i respectively. (To simplify the notation, we assume the input capacitances of all input pins of a gate are the same. We also ignore the intrinsic gate delay. It is clear that all our results still hold without these assumptions.) A wire segment is modeled as a π-type RC circuit as shown in figure 3. For this model, the segment resistance r_i = r̂_i/x_i, and the segment capacitance c_i = ĉ_i x_i + f_i, where r̂_i, ĉ_i and f_i are the unit width wire resistance, the unit width wire area capacitance and the wire fringing capacitance of segment i respectively.

The classical Elmore delay model [11] is used for delay calculation. The delay of each component is equal to the delay associated with its resistor. The delay associated with a resistor is equal to its resistance times its downstream capacitance, i.e., the total capacitance driven by the resistor. The delay along a signal path is the sum of the delays associated with the resistors in the path.
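To make the model concrete, the following small sketch (in Python, with arbitrary illustrative numbers that are not taken from the paper) evaluates the Elmore delay along a simple path in which an input driver feeds two wire segments in series and then one gate input. Each delay is a resistance times the capacitance downstream of that resistor, and the path delay is their sum.

```python
# Elmore delay of a simple path: an input driver (resistance RD) drives two wire
# segments in series, which feed one gate input. All numeric values are arbitrary
# illustrative data, not taken from the paper.
r_hat = {1: 0.08, 2: 0.08}     # unit width wire resistance of segments 1 and 2
c_hat = {1: 0.20, 2: 0.20}     # unit width wire area capacitance
f     = {1: 0.05, 2: 0.05}     # wire fringing capacitance
x     = {1: 2.0,  2: 1.0}      # chosen segment widths
RD     = 1.5                   # input driver resistance (a constant)
c_gate = 0.9                   # input capacitance of the driven gate

r = {i: r_hat[i] / x[i] for i in (1, 2)}           # segment resistance r_i = r_hat_i / x_i
c = {i: c_hat[i] * x[i] + f[i] for i in (1, 2)}    # segment capacitance c_i = c_hat_i * x_i + f_i

# Delay of each resistor = its resistance times its downstream capacitance
# (pi-model: half of a segment's own capacitance lies downstream of its resistor).
delay_driver = RD * (c[1] + c[2] + c_gate)
delay_seg1   = r[1] * (c[1] / 2 + c[2] + c_gate)
delay_seg2   = r[2] * (c[2] / 2 + c_gate)
path_delay   = delay_driver + delay_seg1 + delay_seg2
print(path_delay)
```

Note how every delay term is either a constant, proportional to some 1/x_i, or proportional to some x_j/x_i, which is what makes the delay a unary posynomial in the sizes.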

Figure 3. The model of a wire segment by a π-type RC circuit. Note that r_i = r̂_i/x_i and c_i = ĉ_i x_i + f_i, where r̂_i, ĉ_i and f_i are the unit width wire resistance, the unit width wire area capacitance and the wire fringing capacitance of the segment respectively.

2.2. Weighted component delay formulation

In this subsection, we show that the problem of minimizing a weighted sum of the delays of components can be written directly as a unary geometric program.

According to section 2.1, the capacitance of component i is a linear function of x_i and the capacitances of output loads are constants. So the downstream capacitance of each component is a linear function of x_1,...,x_n. For example, the downstream capacitance of component 6 is (ĉ_6 x_6 + f_6)/2 + (ĉ_9 x_9 + f_9) + C_L. Since the resistance of component i is inversely proportional to x_i and the resistances of drivers are constants, the delay associated with each component in the circuit can be written as a unary posynomial in x_1,...,x_n plus a constant. For example,

    Delay of component 6 = (r̂_6/x_6) · ((ĉ_6 x_6 + f_6)/2 + (ĉ_9 x_9 + f_9) + C_L)
                         = r̂_6(f_6/2 + f_9 + C_L)/x_6 + r̂_6 ĉ_9 x_9/x_6 + r̂_6 ĉ_6/2.

It is clear that a weighted sum of the component delays is also a unary posynomial plus a constant. Together with upper and lower bounds on component sizes, the weighted component delay formulation can be written as a unary geometric program.

2.3. Other formulations

Chen, Chu and Wong [3] showed that the Lagrangian relaxation technique can be used to handle different formulations of circuit component sizing. Lagrangian relaxation is a general technique for solving constrained optimization problems. In Lagrangian relaxation, troublesome constraints are relaxed and incorporated into the objective function after multiplying them by constants called Lagrange multipliers, one multiplier for each constraint. For any fixed vector λ of the Lagrange multipliers introduced, we have a new optimization problem (which should be easier to solve because it is free of troublesome constraints) called the Lagrangian relaxation subproblem. Chen, Chu and Wong [3] showed that there exists a vector λ such that the optimal solution of the Lagrangian relaxation subproblem is also the optimal solution of the original circuit component sizing problem. The problem of finding such a vector λ is called the Lagrangian dual problem. The Lagrangian dual problem can be solved by the classical method of subgradient optimization [1].
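The overall flow just described can be summarized by a schematic loop. The sketch below is only an illustration under assumed interfaces (the function names, the representation of the constraints as g_j(x) ≤ 0 and the diminishing step size are not from the paper); in this paper the subproblem solver is the greedy algorithm of section 3, and the multiplier update follows the subgradient direction.

```python
# Schematic Lagrangian relaxation with a subgradient update (illustrative sketch only).
# objective(x) is the original objective, constraints is a list of functions g_j with
# the constraints written as g_j(x) <= 0, and solve_subproblem minimizes the relaxed
# objective over the box L <= x <= U (here, the greedy algorithm of section 3).
def lagrangian_relaxation(objective, constraints, solve_subproblem, lam,
                          step=1.0, iterations=50):
    x = None
    for it in range(1, iterations + 1):
        def relaxed(x, lam=tuple(lam)):
            # Lagrangian relaxation subproblem for the current multipliers
            return objective(x) + sum(l * g(x) for l, g in zip(lam, constraints))
        x = solve_subproblem(relaxed)
        # Subgradient step: raise the multiplier of a violated constraint (g_j(x) > 0),
        # lower it when the constraint is slack, and keep it non-negative.
        lam = [max(0.0, l + (step / it) * g(x)) for l, g in zip(lam, constraints)]
    return x, lam
```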

Therefore, if the Lagrangian relaxation subproblem can be solved optimally, the original circuit sizing problem can also be solved optimally.

For all formulations stated in section 1, the corresponding Lagrangian relaxation subproblems are indeed very similar to a weighted component delay problem. No matter which of delay, area or power is in the objective function and which are in the constraints, after incorporating the constraints into the objective function, the resulting objective function is always a weighted sum of total component area, total power dissipation, component delays and input driver delays. The total component area, the total power dissipation and the input driver delays are all linear functions of x_1,...,x_n. This is obvious for the case of area. For power dissipation, power is dissipated mainly when charging and discharging capacitances in the circuit, so power dissipation is a linear function of the capacitances of the components. Since the capacitance of component i is linear in its size x_i, the total power dissipation is a linear function of x_1,...,x_n. For input driver delays, note that the resistance of each input driver is a constant and the total capacitance driven by each input driver is a linear function of x_1,...,x_n. So the delay associated with each input driver is a linear function of x_1,...,x_n. As a result, for any formulation considered, the objective function of the Lagrangian relaxation subproblem is a unary posynomial. Together with the upper and lower bounds on component sizes, the Lagrangian relaxation subproblem is a unary geometric program.

3. Greedy algorithm for solving unary geometric programs

In this section, we present a greedy algorithm which can solve unary geometric programs very efficiently and optimally. In section 3.1, we present the greedy algorithm. In section 3.2, we prove that if we use (x_1,...,x_n) = (L_1,...,L_n) as the starting solution, the algorithm always converges to the optimal solution. In section 3.3, we prove that if (α_i ≠ 0 or β_i ≠ 0) for all i, then the greedy algorithm converges linearly to the optimal solution from any starting solution.

3.1. The greedy algorithm

The basic idea of the greedy algorithm is to iteratively adjust the variables. In each iteration, the variables are examined one by one. When x_k is examined, it is adjusted optimally while keeping the values of all other variables fixed. We call this operation an optimal local adjustment of x_k. The following lemma gives a formula for the optimal local adjustment.

Lemma 1. For a solution x = (x_1, x_2,...,x_n) of a unary geometric program, the optimal local adjustment of x_k is given by

    x_k = min{ U_k, max{ L_k, √( (Σ_{1≤i≤n, i≠k} γ_ik x_i + α_k) / (Σ_{1≤j≤n, j≠k} γ_kj/x_j + β_k) ) } }.

Proof.

    u(x_1,...,x_n) = Σ_{1≤i≤n} α_i/x_i + Σ_{1≤i≤n} β_i x_i + Σ_{1≤i,j≤n} γ_ij x_i/x_j
                   = (1/x_k)(Σ_{1≤i≤n, i≠k} γ_ik x_i + α_k) + x_k(Σ_{1≤j≤n, j≠k} γ_kj/x_j + β_k) + terms independent of x_k.

So by the Kuhn–Tucker conditions [13], the optimal value of x_k between L_k and U_k which minimizes u(x_1,...,x_n) is

    x_k = min{ U_k, max{ L_k, √( (Σ_{1≤i≤n, i≠k} γ_ik x_i + α_k) / (Σ_{1≤j≤n, j≠k} γ_kj/x_j + β_k) ) } }.

The greedy algorithm is given below.

Greedy algorithm for unary geometric program.
S1. Let (x_1,...,x_n) be some starting solution.
S2. for k := 1 to n do
        x_k := min{ U_k, max{ L_k, √( (Σ_{1≤i≤n, i≠k} γ_ik x_i + α_k) / (Σ_{1≤j≤n, j≠k} γ_kj/x_j + β_k) ) } }.
S3. Repeat step S2 until no improvement.
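A direct transcription of steps S1–S3 into Python may make the procedure concrete. The coefficient dictionaries below are an assumed data layout for illustration, not the authors' implementation; the update in the inner loop is exactly the optimal local adjustment of lemma 1.

```python
import math

def solve_ugp(alpha, beta, gamma, L, U, tol=1e-9):
    """Greedy algorithm for a unary geometric program (sketch).

    alpha[i], beta[i] : coefficients of 1/x_i and x_i
    gamma[(i, j)]     : coefficient of x_i / x_j (missing keys mean zero)
    L[i], U[i]        : lower and upper bounds on x_i
    Assumes beta[k] + B_k(x) > 0 for every k, so the update is well defined.
    """
    n = len(L)
    x = list(L)                                   # S1: start from the lower bounds (theorem 1)
    while True:                                   # S3: repeat until no improvement
        improved = False
        for k in range(n):                        # S2: one optimal local adjustment per variable
            A = sum(gamma.get((i, k), 0.0) * x[i] for i in range(n) if i != k)
            B = sum(gamma.get((k, j), 0.0) / x[j] for j in range(n) if j != k)
            q = math.sqrt((A + alpha[k]) / (B + beta[k]))   # unconstrained minimizer
            new_xk = min(U[k], max(L[k], q))                # clamp to [L_k, U_k] (lemma 1)
            if abs(new_xk - x[k]) > tol * x[k]:
                improved = True
            x[k] = new_xk
        if not improved:
            return x
```

Computing A and B from scratch makes each pass cost O(n^2); section 4 shows how, for VLSI circuits, these two sums can be accumulated incrementally so that each pass costs only O(n).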

3.2. Optimality of the greedy algorithm

In this subsection, we prove that if we use (x_1,...,x_n) = (L_1,...,L_n) as the starting solution, the algorithm always converges to the optimal solution. Let x = (x_1,...,x_n),

    A_k(x) = Σ_{1≤i≤n, i≠k} γ_ik x_i,    B_k(x) = Σ_{1≤j≤n, j≠k} γ_kj/x_j.

Note that u(x) is a posynomial in x. It is well known that under a variable transformation, a posynomial is equivalent to a convex function. So u(x) has a unique global minimum and no other local minimum. We show in the following two lemmas that with the starting solution (x_1,...,x_n) = (L_1,...,L_n), the greedy algorithm always converges to the global minimum.

Lemma 2. If the greedy algorithm converges, then the solution is optimal.

Proof. Suppose the algorithm converges to x* = (x_1*,...,x_n*). Then for 1 ≤ k ≤ n, by lemma 1,

    x_k* = min{ U_k, max{ L_k, √((A_k(x*) + α_k)/(B_k(x*) + β_k)) } }.

Note that u(x) is a posynomial in x, and that under the transformation x_k = e^{z_k} for 1 ≤ k ≤ n, the function h(z) = u(e^{z_1},...,e^{z_n}) is convex over Ω = {z: L_k ≤ e^{z_k} ≤ U_k, 1 ≤ k ≤ n}. Let z* = (z_1*,...,z_n*) where x_k* = e^{z_k*} for 1 ≤ k ≤ n. We now consider 3 cases:

Case 1: x_k* = √((A_k(x*) + α_k)/(B_k(x*) + β_k)). In this case, we have ∂u/∂x_k(x*) = 0. Thus

    ∂h/∂z_k(z*) = ∂u/∂x_k(x*) · ∂x_k/∂z_k(z*) = ∂u/∂x_k(x*) · e^{z_k*} = 0.

Case 2: x_k* = L_k. In this case, L_k ≥ √((A_k(x*) + α_k)/(B_k(x*) + β_k)). We have ∂u/∂x_k(x*) ≥ 0 and z_k − z_k* ≥ 0 for all z ∈ Ω. Hence

    ∂h/∂z_k(z*)(z_k − z_k*) = ∂u/∂x_k(x*) · e^{z_k*}(z_k − z_k*) ≥ 0 for all z ∈ Ω.

Case 3: x_k* = U_k. In this case, U_k ≤ √((A_k(x*) + α_k)/(B_k(x*) + β_k)). We have ∂u/∂x_k(x*) ≤ 0 and z_k − z_k* ≤ 0 for all z ∈ Ω. Hence

    ∂h/∂z_k(z*)(z_k − z_k*) = ∂u/∂x_k(x*) · e^{z_k*}(z_k − z_k*) ≥ 0 for all z ∈ Ω.

So ∂h/∂z_k(z*)(z_k − z_k*) ≥ 0 for all k and for all z ∈ Ω. Thus for any feasible solution x,

    u(x) − u(x*) = h(z) − h(z*) ≥ Σ_{k=1}^{n} ∂h/∂z_k(z*)(z_k − z_k*) ≥ 0,   as h is convex.

Therefore x* is the global minimum point.
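The variable transformation invoked in the proof can be written out explicitly; under x_k = e^{z_k} every term of the unary posynomial becomes the exponential of an affine function of z, and a non-negative sum of such terms is convex:

```latex
h(z) \;=\; u\bigl(e^{z_1},\dots,e^{z_n}\bigr)
      \;=\; \sum_{1\le i\le n} \alpha_i\, e^{-z_i}
      \;+\; \sum_{1\le i\le n} \beta_i\, e^{z_i}
      \;+\; \sum_{1\le i,j\le n} \gamma_{ij}\, e^{z_i - z_j}.
```

Each summand is convex in z, so h is convex over Ω, which is what the case analysis above relies on.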

Lemma 3. If (x_1,...,x_n) = (L_1,...,L_n) is used as the starting solution, the greedy algorithm always converges.

Proof. For any two vectors x and y, we use x ≼ y to denote that x_i ≤ y_i for all i. Consider any two feasible solutions x and y. Let x' and y' be the solutions after locally adjusting some variable x_k of x and y, respectively. If x ≼ y, then A_k(x) ≤ A_k(y) and B_k(x) ≥ B_k(y). So

    x_k' = min{ U_k, max{ L_k, √((A_k(x) + α_k)/(B_k(x) + β_k)) } }
         ≤ min{ U_k, max{ L_k, √((A_k(y) + α_k)/(B_k(y) + β_k)) } } = y_k'.

Also, x_j' = x_j ≤ y_j = y_j' for j ≠ k. Hence x ≼ y implies x' ≼ y'.

If we consider x and y to be the solutions before two consecutive optimal local adjustment operations, then x' = y. Therefore, x ≼ x' = y implies x' ≼ y'. Since the starting solution is (L_1,...,L_n), we can prove by mathematical induction that all variables are monotonically increasing after each optimal local adjustment operation. If we consider y to be the optimal solution, then y' = y. Hence x ≼ y implies x' ≼ y. Since the starting solution is (L_1,...,L_n), we can prove by mathematical induction that every x_i after each optimal local adjustment operation is upper bounded by the optimal value y_i. As x_i is monotonically increasing and upper bounded for all i, the greedy algorithm always converges.

By lemmas 2 and 3, we have the following theorem.

Theorem 1. For any unary geometric program, if (x_1,...,x_n) = (L_1,...,L_n) is used as the starting solution, the greedy algorithm always converges to the optimal solution.

3.3. Convergence rate of the greedy algorithm

In section 2, we showed that many formulations of the circuit sizing problem can be reduced to a sequence of unary geometric programs by Lagrangian relaxation. In section 3.2 we proved the convergence of the greedy algorithm only for the special starting solution x = (L_1,...,L_n). So in order to guarantee convergence, before solving each unary geometric program instance, all variables would have to be reset to their lower bounds to form the starting solution for the greedy algorithm. However, since two consecutive unary geometric program instances produced by Lagrangian relaxation are almost the same (except that the Lagrange multipliers are changed by a little bit), the optimal solution of the first unary geometric program is close to the optimal solution of the second one, and hence a good starting solution for the second one. So if we can guarantee convergence to the optimal solution, it is better not to reset the solution before solving each unary geometric program instance.

We observe that not resetting can speed up the greedy algorithm by more than 50% in practice. In addition, even for the special starting solution, the convergence rate of the greedy algorithm is not known.

In this subsection, we consider unary geometric programs satisfying the condition that α_i ≠ 0 or β_i ≠ 0 for all i. We point out in section 4 that for VLSI circuit component sizing problems, this condition is essentially always true. Under this condition, we prove that the greedy algorithm always converges to the optimal solution for any starting solution. Moreover, we prove that the convergence rate for any starting solution is always linear with convergence ratio upper bounded by the parameter σ defined as follows:

    σ = max_{1≤k≤n} { (φ_k + θ_k)/2 },

where

    φ_k = 1 / (1 + α_k/A_k(U_1,...,U_n))   and   θ_k = 1 / (1 + β_k/B_k(L_1,...,L_n)).

Note that for all k, at least one of α_k and β_k is positive. So at least one of φ_k and θ_k is less than 1. Therefore, it is clear that σ is a constant such that 0 < σ < 1.
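For a given instance the bound σ can be evaluated directly from the coefficients. The sketch below follows the definitions above and reuses the coefficient dictionaries assumed in the earlier greedy-algorithm sketch; it is illustrative only and assumes the relevant denominators are positive.

```python
def convergence_ratio_bound(alpha, beta, gamma, L, U):
    """Upper bound sigma on the convergence ratio, following the definitions above."""
    n = len(L)
    sigma = 0.0
    for k in range(n):
        A_U = sum(gamma.get((i, k), 0.0) * U[i] for i in range(n) if i != k)   # A_k(U_1,...,U_n)
        B_L = sum(gamma.get((k, j), 0.0) / L[j] for j in range(n) if j != k)   # B_k(L_1,...,L_n)
        phi   = A_U / (A_U + alpha[k])    # = 1 / (1 + alpha_k / A_k(U_1,...,U_n))
        theta = B_L / (B_L + beta[k])     # = 1 / (1 + beta_k / B_k(L_1,...,L_n))
        sigma = max(sigma, (phi + theta) / 2)
    return sigma
```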

Lemma 4 gives bounds on the changes of the variables after each iteration of the greedy algorithm. Let x^(0) = (x_1^(0), x_2^(0),...,x_n^(0)) be the starting solution, and for t ≥ 1, let x^(t) = (x_1^(t), x_2^(t),...,x_n^(t)) be the solution just after the t-th iteration of the greedy algorithm. Let Δ = max_{1≤i≤n} {(U_i − L_i)/L_i}.

Lemma 4. For any t ≥ 0,

    1/(1 + Δσ^t) ≤ x_i^(t+1)/x_i^(t) ≤ 1 + Δσ^t   for all i.

The proof of lemma 4 is given in the appendix.

Theorem 2. If α_i ≠ 0 or β_i ≠ 0 for all i, then the greedy algorithm always converges to the optimal solution for any starting solution.

Proof. Since 0 < σ < 1, 1 + Δσ^t → 1 as t → ∞. So by lemma 4, it is obvious that the greedy algorithm always converges for any starting solution. Lemma 2 proves that if the greedy algorithm converges, then the solution is optimal. So the theorem follows.

Let x* = (x_1*, x_2*,...,x_n*) be the optimal solution. The following lemma proves that the convergence rate of the greedy algorithm is linear with convergence ratio upper bounded by σ.

Lemma 5. For any t ≥ 0,

    |x_i* − x_i^(t)| / x_i* ≤ (1 + Δ)σ^t / (1 − σ)   for all i.

Proof. For any t ≥ 0 and for any i, we consider two cases.

Case 1: (1 + Δ)σ^t/(1 − σ) ≥ 1. Then

    x_i^(t)/x_i* ≤ U_i/L_i ≤ 1 + Δ ≤ 1 + (1 + Δ)σ^t/(1 − σ).

Similarly, we can prove x_i*/x_i^(t) ≤ 1 + (1 + Δ)σ^t/(1 − σ).

Case 2: (1 + Δ)σ^t/(1 − σ) < 1. Then Δσ^t/(1 − σ) < (1 + Δ)σ^t/(1 − σ) < 1. Note that

    x_i^(t)/x_i* = ∏_{k=t}^{∞} x_i^(k)/x_i^(k+1).

So by lemma 4, 1/P ≤ x_i^(t)/x_i* ≤ P, where P = ∏_{k=t}^{∞} (1 + Δσ^k). Now

    ln P = Σ_{k=t}^{∞} ln(1 + Δσ^k)
         = Σ_{k=t}^{∞} ( Δσ^k − (Δ²/2)σ^{2k} + (Δ³/3)σ^{3k} − (Δ⁴/4)σ^{4k} + ··· )      (1)
         = Σ_{j=1}^{∞} (−1)^{j−1} (Δ^j/j) Σ_{k=t}^{∞} σ^{jk}
         = Σ_{j=1}^{∞} (−1)^{j−1} (Δ^j/j) · σ^{jt}/(1 − σ^j)
         ≤ Σ_{j=1}^{∞} (Δ^j/j) · σ^{jt}/(1 − σ)^j                                        (2)
         = −ln(1 − Δσ^t/(1 − σ)),                                                         (3)

where (1) is because ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ···, (2) is because 0 < σ < 1, which implies 0 < (1 − σ)^j ≤ 1 − σ ≤ 1 − σ^j for j ≥ 1, and (3) is because 0 < Δσ^t/(1 − σ) < (1 + Δ)σ^t/(1 − σ) < 1 and −ln(1 − x) = x + x²/2 + x³/3 + ··· if 0 < x < 1.

So

    P ≤ 1/(1 − Δσ^t/(1 − σ)) = 1 + Δσ^t/(1 − σ − Δσ^t) ≤ 1 + (1 + Δ)σ^t/(1 − σ).

Hence

    1/(1 + (1 + Δ)σ^t/(1 − σ)) ≤ x_i^(t)/x_i* ≤ 1 + (1 + Δ)σ^t/(1 − σ).

Therefore, for both cases,

    1/(1 + (1 + Δ)σ^t/(1 − σ)) ≤ x_i^(t)/x_i* ≤ 1 + (1 + Δ)σ^t/(1 − σ).

It is easy to see that 1/(1 + (1 + Δ)σ^t/(1 − σ)) ≥ 1 − (1 + Δ)σ^t/(1 − σ). So for any t ≥ 0 and for all i,

    |x_i* − x_i^(t)| / x_i* ≤ (1 + Δ)σ^t/(1 − σ).

4. Analysis of the greedy algorithm when applied to VLSI circuits

In this section, we analyze the runtime of the greedy algorithm when applied to VLSI circuits. In section 4.1, we show that for VLSI circuits, each iteration of the greedy algorithm only takes time linear in the number of circuit components. In section 4.2, we show that for VLSI circuits, the condition α_i ≠ 0 or β_i ≠ 0 for all i in section 3.3 is always true. So we conclude that for any circuit component sizing formulation, the Lagrangian relaxation subproblem can be solved optimally by the greedy algorithm in time linear in the size of the circuit. Notice that Chu and Wong [5] also showed that the runtime of a similar greedy algorithm for wire sizing of a single interconnect tree is linear.

4.1. Linear time for each iteration

We first show that when applied to VLSI circuits, each iteration of the greedy algorithm only takes linear time. For each optimal local adjustment operation of x_k, we need to calculate A_k(x) = Σ_{1≤i≤n, i≠k} γ_ik x_i and B_k(x) = Σ_{1≤j≤n, j≠k} γ_kj/x_j. Hence each optimal local adjustment operation takes O(n) time and each iteration takes O(n²) time in general. However, for VLSI circuits, the A_k(x)'s and B_k(x)'s can be computed incrementally. The reason is that for any component k, A_k(x) is a weighted downstream capacitance and B_k(x) is a weighted upstream resistance of the component. So A_k(x) can be computed easily by finding a weighted sum of A_j(x) over all components j at the output of component k. Similarly, B_k(x) can be computed easily by finding a weighted sum of B_j(x) over all components j at the input of component k. Note that the numbers of inputs and outputs of components in VLSI circuits are always bounded by a small constant in practice. If we perform the optimal local adjustment operations in a topological order, then for each k, both A_k(x) and B_k(x) can be computed in constant time. Therefore, the optimal local adjustment of x_k can be done in constant time. As a result, each iteration of the greedy algorithm only takes linear time.
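The incremental evaluation can be sketched as two linear passes over the circuit DAG. The data layout and the per-edge weights below are assumptions for illustration (they stand for the coefficients that the Elmore delay expressions and the multipliers induce); the point is only that each A_k is a weighted sum over the fanouts of k and each B_k a weighted sum over the fanins, so one iteration costs O(n) when fanins and fanouts are bounded.

```python
def accumulate_A_B(order, fanout, fanin, a_local, b_local, w_out, w_in):
    """Compute A_k and B_k for all components in two linear passes (illustrative sketch).

    order            : components in topological order (inputs first)
    fanout, fanin    : adjacency lists of the circuit DAG
    a_local, b_local : each component's own contribution to A_k and B_k
    w_out, w_in      : assumed per-edge weights combining neighboring values
    """
    A, B = {}, {}
    for k in reversed(order):      # sinks first: A_k is a weighted downstream sum
        A[k] = a_local[k] + sum(w_out[(k, j)] * A[j] for j in fanout[k])
    for k in order:                # sources first: B_k is a weighted upstream sum
        B[k] = b_local[k] + sum(w_in[(i, k)] * B[i] for i in fanin[k])
    return A, B
```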

4.2. Convergence ratio of the greedy algorithm

In section 3.3, we proved that the greedy algorithm converges linearly with convergence ratio σ to the optimal solution from any starting solution. The convergence ratio σ is upper bounded by the maximum of (φ_k + θ_k)/2 over all k. So if both φ_k = 1 and θ_k = 1 for some k, then the proof cannot guarantee the convergence of the greedy algorithm. This situation occurs when α_k = 0 and β_k = 0 for some k. On the other hand, if α_k ≠ 0 or β_k ≠ 0 for all k, then σ is less than 1 and the convergence of the greedy algorithm is guaranteed. Moreover, the larger the values of the α_k's and β_k's, the faster the convergence of the greedy algorithm. For all k, α_k and β_k are, respectively, the coefficients of the terms 1/x_k and x_k in the objective function of the unary geometric program. For VLSI circuit component sizing, the α_k's and β_k's are essentially always non-zero. Factors causing the α_k's and β_k's to be greater than zero are listed below.

Wire fringing capacitance and gate perimeter capacitance. The Elmore delay for a component is equal to its resistance times its downstream capacitance. Notice that wire fringing capacitance and gate perimeter capacitance are independent of the component sizes, whereas the resistance of any component is inversely proportional to its size. So the wire fringing capacitance of all wire segments and the gate perimeter capacitance of all gates/loads downstream of component k contribute to the value of α_k.

Driver resistance and load capacitance. The Elmore delay for a driver equals the driver resistance times the total capacitance of the wire segments and gates driven by it. Since the driver resistance is independent of x_1,...,x_n, and the total capacitance of the wire segments and gates driven is a linear function of x_1,...,x_n, the driver resistance contributes to the value of β_k for every component k driven by the driver. Similarly, if a component k is upstream of a load with capacitance C_L, then the term C_L r̂_k/x_k will occur in the Elmore delay expression. Therefore, the load capacitance contributes to the value of α_k.

Component area. For any VLSI circuit sizing formulation involving the total component area, β_k ≠ 0 for all k. Let the total component area be Σ_{i=1}^{n} ω_i x_i for some positive constants ω_1,...,ω_n. If the total component area is the objective to minimize, then the objective function of the unary geometric program will contain the term Σ_{i=1}^{n} ω_i x_i. If the total component area is constrained, then after Lagrangian relaxation, the objective function of the unary geometric program will contain the term λ(Σ_{i=1}^{n} ω_i x_i), where λ is the Lagrange multiplier. In both cases, β_k ≠ 0 for all k.

Power dissipation. As stated in section 2.3, the power dissipation of a circuit is a linear function Σ_{i=1}^{n} ω_i x_i for some positive constants ω_1,...,ω_n. So if power dissipation is considered either in the objective or as a constraint, β_k ≠ 0 for all k.

In fact, the number of iterations of the greedy algorithm is a function of the convergence ratio σ. The value of σ depends on many factors, such as the electrical parameters of the fabrication technology, the resistance of drivers and the capacitance of loads, and the upper and lower bounds on the component sizes. However, we observe that the actual convergence ratio is not very sensitive to these factors, and is usually much less than 0.1 in practice. In addition, a change in σ does not affect the number of iterations very much. For example, if σ changes from 0.05 to an unrealistically large value 0.5, the number of iterations increases only by a factor of log 0.05 / log 0.5 = 4.3.

Since the convergence rate of the greedy algorithm is linear and the runtime of each iteration is O(n), we have the following theorem.

Theorem 3. When applied to VLSI circuit component sizing, the total runtime of the greedy algorithm for any starting solution is O(n log(1/ε)), where ε specifies the precision of the final solution (i.e., for the optimal solution x*, the final solution x satisfies |x_i − x_i*|/x_i* ≤ ε for all i).

Proof. By lemma 5, for any t ≥ 0 and for all i,

    |x_i* − x_i^(t)| / x_i* ≤ (1 + Δ)σ^t/(1 − σ).

In order to guarantee that |x_i* − x_i^(t)|/x_i* ≤ ε for all i, it suffices that the number of iterations t satisfies

    (1 + Δ)σ^t/(1 − σ) ≤ ε,

or equivalently,

    t ≥ log_{1/σ} ( (1 + Δ) / ((1 − σ)ε) ).

In other words, at most log_{1/σ}((1 + Δ)/((1 − σ)ε)) iterations are enough. Since each iteration of the greedy algorithm takes O(n) time, the total runtime is O(n log(1/ε)).

Therefore, to obtain a solution with any fixed precision, only a constant number of iterations of the greedy algorithm is needed. This implies that for Lagrangian relaxation subproblems of VLSI circuit component sizing, the runtime of the greedy algorithm is O(n).
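Theorem 3's bound on the number of iterations is easy to evaluate numerically. The sketch below plugs in the kind of values used in section 5 (size bounds 1 and 100, i.e. Δ = 99, and a 1% precision target) together with assumed convergence ratios, purely for illustration.

```python
import math

def iterations_needed(sigma, delta, eps):
    # Smallest t with (1 + delta) * sigma**t / (1 - sigma) <= eps  (theorem 3).
    return math.ceil(math.log((1 + delta) / ((1 - sigma) * eps), 1 / sigma))

print(iterations_needed(sigma=0.05, delta=99.0, eps=0.01))   # a handful of iterations
print(iterations_needed(sigma=0.5,  delta=99.0, eps=0.01))   # a few times more, cf. the 4.3 factor
```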

5. Experimental results

In this section, the runtime and storage requirements of our unary-geometric-program based approach to circuit component sizing are presented. We implemented a circuit component sizing program for minimizing area subject to a maximum delay bound on a PC with a 333 MHz Pentium II processor. In the program, the Lagrangian relaxation technique is used to reduce the problem to unary geometric programs, which are then solved optimally by our greedy algorithm. The Lagrangian dual problem is solved by the classical subgradient optimization method.

We test our circuit component sizing program on adders [14] of different sizes ranging from 8 bits to 1024 bits; the numbers of gates, sizable wire segments and total sizable components of each circuit are listed in table 1. The lower bound and upper bound on the size of each gate are 1 and 100, respectively. The lower bound and upper bound on the width of each wire are 1 and 3 µm, respectively. The stopping criterion of our program is that the solution is within 1% of the optimal solution.

In table 1, the runtime and storage requirements of our program are shown.

Table 1. The runtime and storage requirements of our circuit component sizing program on test circuits of different sizes. For each circuit (adders from 8 bits to 1024 bits), the table lists the circuit size (number of gates, number of wires, total number of sizable components), the runtime in minutes and the memory in MB.

Even for the largest circuit, the runtime and storage requirements of our program are 11.53 minutes and about 23 MB only. As mentioned in section 1, the interior-point-method based approach in [21] is the best previous algorithm that can guarantee exact circuit sizing solutions. The largest test circuit in [21] has 832 transistors, and the reported runtime is 9 hours on a Sun SPARCstation.

Note that for a problem of similar size (864 components), our approach only needs 1.3 seconds of runtime (on a PC with a 333 MHz Pentium II processor) and 1.5 MB of memory. According to the SPEC benchmark results [23], our machine is roughly 40 times faster than the slowest model of Sun SPARCstation. Taking the speed difference of the machines into account, our program is about 600 times faster than the interior-point-method based approach for a small circuit. For larger circuits, we expect the speedup to be even more significant.

Figures 4 and 5 plot the runtime and storage requirements of our program. By performing a linear regression on the logarithm of the data in figure 4, we find that the empirical runtime of our program is about O(n^1.7). Figure 5 shows that the storage requirement of our program grows close to linearly with the circuit size; the storage requirement for each sizable component is about 0.8 KB.

The basic idea of the subgradient optimization method is to repeatedly modify the vector of Lagrange multipliers according to the subgradient direction and then solve the corresponding Lagrangian relaxation subproblem until the solution converges. Figure 6 shows the convergence sequence of the subgradient optimization method for the Lagrangian dual problem on a 128-bit adder. It shows that our program converges smoothly to the optimal solution. The solid line represents the upper bound on the optimal solution and the dotted line represents the lower bound on it. The lower bound values come from the optimal value of the unary geometric program at the current iteration. Note that the optimal solution always lies between the upper bound and the lower bound. So these curves provide useful information about the distance between the optimal solution and the current solution, and help users decide when to stop the program.

Figure 7 shows the area versus delay tradeoff curve of a 16-bit adder. In our experiments, we observe that to generate a new point on the area versus delay tradeoff curve, the subgradient optimization method converges in only about 5 iterations. This is because the vector of Lagrange multipliers of the previous point is a good approximation of that of the new point, and hence the convergence of the subgradient optimization method is fast. As a result, generating these tradeoff curves requires only a little more runtime but provides precious information.

Figure 4. The runtime requirement of our program versus circuit size.

Figure 5. The storage requirement of our program versus circuit size.

Figure 6. The convergence sequence for a 128-bit adder.

Figure 7. The area versus delay tradeoff curve for a 16-bit adder.

6. Conclusion

We have introduced a special type of geometric program called a unary geometric program, which is of the following form:

    Minimize  u(x_1,...,x_n) = Σ_{1≤i≤n} α_i/x_i + Σ_{1≤i≤n} β_i x_i + Σ_{1≤i,j≤n} γ_ij x_i/x_j
    subject to  L_i ≤ x_i ≤ U_i  for all 1 ≤ i ≤ n,

where α_i, β_i, γ_ij, L_i and U_i for all i and j are non-negative constants. We have shown that unary geometric programs are very useful in VLSI circuit component sizing. Many formulations involving delay, area and power can be reduced by the Lagrangian relaxation technique to unary geometric programs. We have presented a greedy algorithm to solve unary geometric programs optimally and very efficiently. We have proved that the algorithm converges to the optimal solution if x_i is set to L_i for all i in the starting solution. We have also proved that when applied to VLSI circuit component sizing, the algorithm always converges to the optimal solution from any starting solution in time linear in the number of gates or wire segments in the circuit.

Appendix: Proof of lemma 4

In order to prove lemma 4, we need to prove lemmas 6, 7 and 8 first. For lemmas 6 and 7, we focus on the variable x_k for some fixed k. Note that during the n optimal local adjustment operations just before the local adjustment of x_k at a particular iteration (except the first iteration), each variable is adjusted exactly once.

Intuitively, the following two lemmas show that during these n adjustment operations, if the changes in all variables are small, then the change in x_k during the local adjustment of x_k at that iteration will be even smaller.

For some t, let x = (x_1,...,x_n), x' = (x_1',...,x_n') and x'' = (x_1'',...,x_n'') be, respectively, the solutions just before the local adjustment of x_k at iterations t, t + 1 and t + 2 of the greedy algorithm. Let

    q_k = √((A_k(x) + α_k)/(B_k(x) + β_k))   and   q_k' = √((A_k(x') + α_k)/(B_k(x') + β_k)).

So by lemma 1, x_k' = min{U_k, max{L_k, q_k}} and x_k'' = min{U_k, max{L_k, q_k'}}.

Lemma 6. For any ρ > 0, if

    1/(1 + ρ) ≤ x_i'/x_i ≤ 1 + ρ   for all i,

then

    1/(1 + ρσ) ≤ q_k'/q_k ≤ 1 + ρσ.

Proof. If x_i/(1 + ρ) ≤ x_i' ≤ (1 + ρ)x_i for all i, we have

    A_k(x)/(1 + ρ) ≤ A_k(x') ≤ (1 + ρ)A_k(x)   and   B_k(x)/(1 + ρ) ≤ B_k(x') ≤ (1 + ρ)B_k(x).

Since γ_ik ≥ 0 and x_i ≤ U_i for all i and k, we have

    A_k(x) = Σ_{1≤i≤n, i≠k} γ_ik x_i ≤ Σ_{1≤i≤n, i≠k} γ_ik U_i = A_k(U_1,...,U_n).

So by the definition of φ_k, φ_k ≥ 1/(1 + α_k/A_k(x)), or equivalently,

    A_k(x) ≤ φ_k (A_k(x) + α_k).

Hence

    A_k(x') + α_k ≤ (1 + ρ)A_k(x) + α_k = ρA_k(x) + (A_k(x) + α_k)
                 ≤ ρφ_k (A_k(x) + α_k) + (A_k(x) + α_k) = (1 + ρφ_k)(A_k(x) + α_k)      (4)

and

    A_k(x') + α_k ≥ A_k(x)/(1 + ρ) + α_k = (A_k(x) + α_k) − (ρ/(1 + ρ))A_k(x)
                 ≥ (A_k(x) + α_k) − (ρφ_k/(1 + ρ))(A_k(x) + α_k)
                 = (1 − ρφ_k/(1 + ρ))(A_k(x) + α_k)
                 > (A_k(x) + α_k)/(1 + ρφ_k),   as ρ > 0 and 0 < φ_k < 1.                (5)

Similarly, since γ_kj ≥ 0 and x_j ≥ L_j for all j and k, we have

    B_k(x) = Σ_{1≤j≤n, j≠k} γ_kj/x_j ≤ Σ_{1≤j≤n, j≠k} γ_kj/L_j = B_k(L_1,...,L_n).

So by the definition of θ_k, θ_k ≥ 1/(1 + β_k/B_k(x)), or equivalently, B_k(x) ≤ θ_k (B_k(x) + β_k). Hence we can prove similarly that

    B_k(x') + β_k ≤ (1 + ρθ_k)(B_k(x) + β_k)                                              (6)

and

    B_k(x') + β_k > (B_k(x) + β_k)/(1 + ρθ_k).                                            (7)

By the definitions of q_k and q_k', and by (4) and (7), we have

    q_k' = √((A_k(x') + α_k)/(B_k(x') + β_k))
         ≤ √((1 + ρφ_k)(1 + ρθ_k)) · √((A_k(x) + α_k)/(B_k(x) + β_k))
         ≤ (1 + ρ(φ_k + θ_k)/2) q_k      (as geometric mean ≤ arithmetic mean)
         ≤ (1 + ρσ) q_k.

Similarly, by (5) and (6), we can prove that q_k' ≥ q_k/(1 + ρ(φ_k + θ_k)/2) ≥ q_k/(1 + ρσ).

As a result, 1/(1 + ρσ) ≤ q_k'/q_k ≤ 1 + ρσ.

Lemma 7. For any ρ > 0, if

    1/(1 + ρ) ≤ x_i'/x_i ≤ 1 + ρ   for all i,

then

    1/(1 + ρσ) ≤ x_k''/x_k' ≤ 1 + ρσ.

Proof. By lemma 6, if x_i/(1 + ρ) ≤ x_i' ≤ (1 + ρ)x_i for all i, then q_k/(1 + ρσ) ≤ q_k' ≤ (1 + ρσ)q_k. By lemma 1, x_k' = min{U_k, max{L_k, q_k}} and x_k'' = min{U_k, max{L_k, q_k'}}. In order to prove x_k'/(1 + ρσ) ≤ x_k'', we consider three cases:

Case 1: q_k < L_k. Then x_k' = L_k. So x_k'/(1 + ρσ) = L_k/(1 + ρσ) < L_k ≤ x_k''.

Case 2: q_k' > U_k. Then x_k'' = U_k. So x_k'/(1 + ρσ) ≤ U_k/(1 + ρσ) < U_k = x_k''.

Case 3: q_k ≥ L_k and q_k' ≤ U_k. Then q_k ≥ L_k implies x_k' ≤ q_k, and q_k' ≤ U_k implies q_k' ≤ x_k''. So x_k'/(1 + ρσ) ≤ q_k/(1 + ρσ) ≤ q_k' ≤ x_k''.

In order to prove x_k'' ≤ (1 + ρσ)x_k', we consider another three cases:

Case 1: q_k > U_k. Then x_k' = U_k. So x_k'' ≤ U_k < (1 + ρσ)U_k = (1 + ρσ)x_k'.

Case 2: q_k' < L_k. Then x_k'' = L_k. So x_k'' = L_k < (1 + ρσ)L_k ≤ (1 + ρσ)x_k'.

Case 3: q_k ≤ U_k and q_k' ≥ L_k. Then q_k ≤ U_k implies q_k ≤ x_k', and q_k' ≥ L_k implies x_k'' ≤ q_k'. So x_k'' ≤ q_k' ≤ (1 + ρσ)q_k ≤ (1 + ρσ)x_k'.

As a result, 1/(1 + ρσ) ≤ x_k''/x_k' ≤ 1 + ρσ.

Lemma 8 gives bounds on the changes of the variables after each iteration of the greedy algorithm. Be reminded that x^(t) = (x_1^(t), x_2^(t),...,x_n^(t)) is the solution just after the t-th iteration of the greedy algorithm.

Lemma 8. For any t ≥ 0 and ρ > 0, if

    1/(1 + ρ) ≤ x_i^(t+1)/x_i^(t) ≤ 1 + ρ   for all i,

then

    1/(1 + ρσ) ≤ x_i^(t+2)/x_i^(t+1) ≤ 1 + ρσ   for all i.

Proof. The lemma can be proved by induction on i.

Base case: Consider the variable x_1. Before the local adjustment of x_1, the solutions at iterations t + 1 and t + 2 are (x_1^(t), x_2^(t),...,x_n^(t)) and (x_1^(t+1), x_2^(t+1),...,x_n^(t+1)), respectively. Since 1/(1 + ρ) ≤ x_i^(t+1)/x_i^(t) ≤ 1 + ρ for all i, by lemma 7, we have 1/(1 + ρσ) ≤ x_1^(t+2)/x_1^(t+1) ≤ 1 + ρσ.

Induction step: Assume that the induction hypothesis is true for i = 1,...,k − 1. Before the local adjustment of x_k, the solutions at iterations t + 1 and t + 2 are (x_1^(t+1),...,x_{k−1}^(t+1), x_k^(t),...,x_n^(t)) and (x_1^(t+2),...,x_{k−1}^(t+2), x_k^(t+1),...,x_n^(t+1)), respectively. By the induction hypothesis,

    1/(1 + ρσ) ≤ x_i^(t+2)/x_i^(t+1) ≤ 1 + ρσ   for i = 1,...,k − 1.

Hence

    1/(1 + ρ) ≤ x_i^(t+2)/x_i^(t+1) ≤ 1 + ρ   for i = 1,...,k − 1,

as σ < 1. Also, it is given that

    1/(1 + ρ) ≤ x_i^(t+1)/x_i^(t) ≤ 1 + ρ   for i = k,...,n.

So by lemma 7,

    1/(1 + ρσ) ≤ x_k^(t+2)/x_k^(t+1) ≤ 1 + ρσ.

Hence the lemma is proved.

Proof of lemma 4. This can be proved by induction on t.

Base case: Consider t = 0. Note that for any solution x = (x_1,...,x_n), L_i ≤ x_i ≤ U_i for all i. For all i,

    x_i^(1)/x_i^(0) ≤ U_i/L_i = 1 + (U_i − L_i)/L_i ≤ 1 + Δ.

Similarly, we can prove that for all i, x_i^(1)/x_i^(0) ≥ 1/(1 + Δ).

Induction step: Assume that the induction hypothesis is true for t. Therefore,

    1/(1 + Δσ^t) ≤ x_i^(t+1)/x_i^(t) ≤ 1 + Δσ^t   for all i.

By lemma 8,

    1/(1 + Δσ^{t+1}) ≤ x_i^(t+2)/x_i^(t+1) ≤ 1 + Δσ^{t+1}   for all i.

Hence the lemma is proved.

References

[1] M.S. Bazaraa, H.D. Sherali and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd ed. (Wiley, 1993).
[2] C.-P. Chen, Y.-W. Chang and D.F. Wong, Fast performance-driven optimization for buffered clock trees based on Lagrangian relaxation, in: Proc. ACM/IEEE Design Automation Conf. (1996).
[3] C.-P. Chen, C.C.N. Chu and D.F. Wong, Fast and exact simultaneous gate and wire sizing by Lagrangian relaxation, IEEE Trans. Computer-Aided Design 18(7) (1999).
[4] C.-P. Chen, H. Zhou and D.F. Wong, Optimal non-uniform wire-sizing under the Elmore delay model, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1996).
[5] C.C.N. Chu and D.F. Wong, Greedy wire-sizing is linear time, IEEE Trans. Computer-Aided Design 18(4) (1999).
[6] M.A. Cirit, Transistor sizing in CMOS circuits, in: Proc. ACM/IEEE Design Automation Conf. (1987).
[7] J. Cong and L. He, An efficient approach to simultaneous transistor and interconnect sizing, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1996).
[8] J. Cong and C.-K. Koh, Simultaneous driver and wire sizing for performance and power optimization, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1994).
[9] J. Cong and K.-S. Leung, Optimal wiresizing under the distributed Elmore delay model, IEEE Trans. Computer-Aided Design 14(3) (1995).
[10] R.J. Duffin, E.L. Peterson and C. Zener, Geometric Programming – Theory and Application (Wiley, New York, 1967).
[11] W.C. Elmore, The transient response of damped linear networks with particular regard to wideband amplifiers, J. Applied Physics 19 (1948).
[12] J.P. Fishburn and A.E. Dunlop, TILOS: A posynomial programming approach to transistor sizing, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1985).
[13] D.G. Luenberger, Linear and Nonlinear Programming, 2nd ed. (Addison-Wesley, 1984).
[14] M.M. Mano, Digital Logic and Computer Design (Prentice-Hall, 1979).
[15] D.P. Marple, Performance optimization of digital VLSI circuits, Technical Report CSL-TR, Stanford University (October 1986).
[16] D.P. Marple, Transistor size optimization in the Tailor layout system, in: Proc. ACM/IEEE Design Automation Conf. (1989).
[17] N. Menezes, R. Baldick and L.T. Pileggi, A sequential quadratic programming approach to concurrent gate and wire sizing, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1995).

[18] N. Menezes, S. Pullela, F. Dartu and L.T. Pileggi, RC interconnect synthesis – a moment fitting approach, in: Proc. IEEE Intl. Conf. on Computer-Aided Design (1994).
[19] N. Menezes, S. Pullela and L.T. Pileggi, Simultaneous gate and interconnect sizing for circuit-level delay optimization, in: Proc. ACM/IEEE Design Automation Conf. (1995).
[20] S.S. Sapatnekar, RC interconnect optimization under the Elmore delay model, in: Proc. ACM/IEEE Design Automation Conf. (1994).
[21] S.S. Sapatnekar, V.B. Rao, P.M. Vaidya and S.M. Kang, An exact solution to the transistor sizing problem for CMOS circuits using convex optimization, IEEE Trans. Computer-Aided Design 12(11) (1993).
[22] J. Shyu, J.P. Fishburn, A.E. Dunlop and A.L. Sangiovanni-Vincentelli, Optimization-based transistor sizing, IEEE J. Solid-State Circuits 23 (1988).
[23] SPEC table, ftp://ftp.cdf.toronto.edu/pub/spectable.


More information

AN EFFICIENT TECHNIQUE FOR DEVICE AND INTERCONNECT OPTIMIZATION IN DEEP SUBMICRON DESIGNS. Jason Cong Lei He

AN EFFICIENT TECHNIQUE FOR DEVICE AND INTERCONNECT OPTIMIZATION IN DEEP SUBMICRON DESIGNS. Jason Cong Lei He AN EFFICIENT TECHNIQUE FOR DEVICE AND INTERCONNECT OPTIMIZATION IN DEEP SUBMICRON DESIGNS Jason Cong Le He Department of Computer Scence Unversty of Calforna, Los Angeles, CA 90095 cong@cs.ucla.edu, hele@cs.ucla.edu

More information

Effective Power Optimization combining Placement, Sizing, and Multi-Vt techniques

Effective Power Optimization combining Placement, Sizing, and Multi-Vt techniques Effectve Power Optmzaton combnng Placement, Szng, and Mult-Vt technques Tao Luo, Davd Newmark*, and Davd Z Pan Department of Electrcal and Computer Engneerng, Unversty of Texas at Austn *Advanced Mcro

More information

Clock-Gating and Its Application to Low Power Design of Sequential Circuits

Clock-Gating and Its Application to Low Power Design of Sequential Circuits Clock-Gatng and Its Applcaton to Low Power Desgn of Sequental Crcuts ng WU Department of Electrcal Engneerng-Systems, Unversty of Southern Calforna Los Angeles, CA 989, USA, Phone: (23)74-448 Massoud PEDRAM

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

A SEPARABLE APPROXIMATION DYNAMIC PROGRAMMING ALGORITHM FOR ECONOMIC DISPATCH WITH TRANSMISSION LOSSES. Pierre HANSEN, Nenad MLADENOVI]

A SEPARABLE APPROXIMATION DYNAMIC PROGRAMMING ALGORITHM FOR ECONOMIC DISPATCH WITH TRANSMISSION LOSSES. Pierre HANSEN, Nenad MLADENOVI] Yugoslav Journal of Operatons Research (00) umber 57-66 A SEPARABLE APPROXIMATIO DYAMIC PROGRAMMIG ALGORITHM FOR ECOOMIC DISPATCH WITH TRASMISSIO LOSSES Perre HASE enad MLADEOVI] GERAD and Ecole des Hautes

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis A Appendx for Causal Interacton n Factoral Experments: Applcaton to Conjont Analyss Mathematcal Appendx: Proofs of Theorems A. Lemmas Below, we descrbe all the lemmas, whch are used to prove the man theorems

More information

Amiri s Supply Chain Model. System Engineering b Department of Mathematics and Statistics c Odette School of Business

Amiri s Supply Chain Model. System Engineering b Department of Mathematics and Statistics c Odette School of Business Amr s Supply Chan Model by S. Ashtab a,, R.J. Caron b E. Selvarajah c a Department of Industral Manufacturng System Engneerng b Department of Mathematcs Statstcs c Odette School of Busness Unversty of

More information

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k) ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Canonical transformations

Canonical transformations Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,

More information

Physics 4B. A positive value is obtained, so the current is counterclockwise around the circuit.

Physics 4B. A positive value is obtained, so the current is counterclockwise around the circuit. Physcs 4B Solutons to Chapter 7 HW Chapter 7: Questons:, 8, 0 Problems:,,, 45, 48,,, 7, 9 Queston 7- (a) no (b) yes (c) all te Queston 7-8 0 μc Queston 7-0, c;, a;, d; 4, b Problem 7- (a) Let be the current

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Estimating Delays. Gate Delay Model. Gate Delay. Effort Delay. Computing Logical Effort. Logical Effort

Estimating Delays. Gate Delay Model. Gate Delay. Effort Delay. Computing Logical Effort. Logical Effort Estmatng Delas Would be nce to have a back of the envelope method for szng gates for speed Logcal Effort ook b Sutherland, Sproull, Harrs Chapter s on our web page Gate Dela Model Frst, normalze a model

More information

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION

TOPICS MULTIPLIERLESS FILTER DESIGN ELEMENTARY SCHOOL ALGORITHM MULTIPLICATION 1 2 MULTIPLIERLESS FILTER DESIGN Realzaton of flters wthout full-fledged multplers Some sldes based on support materal by W. Wolf for hs book Modern VLSI Desgn, 3 rd edton. Partly based on followng papers:

More information

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Laboratory 1c: Method of Least Squares

Laboratory 1c: Method of Least Squares Lab 1c, Least Squares Laboratory 1c: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem

Interactive Bi-Level Multi-Objective Integer. Non-linear Programming Problem Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Logic effort and gate sizing

Logic effort and gate sizing EEN454 Dgtal Integrated rcut Desgn Logc effort and gate szng EEN 454 Introducton hp desgners face a bewlderng arra of choces What s the best crcut topolog for a functon? How man stages of logc gve least

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

Second Order Analysis

Second Order Analysis Second Order Analyss In the prevous classes we looked at a method that determnes the load correspondng to a state of bfurcaton equlbrum of a perfect frame by egenvalye analyss The system was assumed to

More information

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012

MLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012 MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? Intuton of Margn Consder ponts A, B, and C We

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information