CS276 Homework 1: ns-2
Erik Peterson
October 28

1 Part 1 - Fairness between TCP variants

1.1 Method

After learning ns-2, I wrote a script (Listing 3) that runs a simulation of one or two TCP flows with a CBR traffic source over the given topology. The script takes a bit rate for the CBR traffic and one or two TCP types (Reno, Newreno, Vegas) for the TCP flows. It configures the simulation and runs it for 15 seconds. At the end of the simulation it parses the trace file with an AWK script (Listing 4) designed to extract the loss rates and bandwidths of each flow.

I then wrote a script (Listing 2) which runs a full set of CBR rates, given one or two TCP variants, by calling the first script a number of times. At the end of that script, a third script is called which generates loss and bandwidth graphs for the set of experiments and copies the data to an archive directory. The graph script works by using sed to replace markers in a template gnuplot script file (Listing 5) with the appropriate values, and then running gnuplot. Finally, because I enjoy excessive automation, I made a meta script (Listing 1) which runs all of the desired experiments. This gives me a single command to generate all of the graphs.

1.2 Observations

1.2.1 Bandwidth

An immediate and not very interesting result is that two TCP flows of the same variant tend to operate similarly and fairly. Much more interesting is how two different TCP variants respond to increasing CBR traffic. For example, in the interaction between Reno and NewReno (Figure 1(a)), it is clear that, though they operate fairly at lower CBR bandwidths, once bandwidth increases and packet losses become non-negligible, NewReno takes more bandwidth than Reno. This is likely due to NewReno's Fast Recovery enhancement in the presence of partial ACKs [2]; NewReno is able to respond very quickly to occasional single packet losses.

Figure 1: Pairwise TCP flow bandwidth comparisons in the presence of constant bit rate traffic. (a) 1 Reno flow and 1 NewReno flow, (b) 1 NewReno flow and 1 Vegas flow

The interaction between NewReno and Vegas is quite different (Figure 1(b)). First, the two start out unfair, with NewReno taking as much bandwidth as it did when pitted against Reno, and Vegas taking roughly 0.1 Mb/s less. This is puzzling, especially given that there ought to be more bandwidth available. Once packet losses start to occur, as the CBR bandwidth increases, NewReno continues to act unfairly in most situations. This is due to Vegas' focus on reducing packet loss [1]; it responds to NewReno's aggressive policy by backing off. As we will see in Section 1.2.2, Vegas' packet loss is superior in many cases. NewReno is more aggressive, with the downside of losing more packets when it exceeds its available bandwidth.

The behavior of each TCP variant can be seen more clearly when each is run alone against the CBR traffic (Figure 2). In the presence of congestion, NewReno is clearly able to make use of more bandwidth than Reno. Reno also tends to drop off precipitously, while NewReno is smoother. Both Reno and NewReno achieve near-zero bandwidth when the CBR bandwidth reaches the capacity of the link.

Figure 2: Single TCP flow bandwidth in the presence of constant bit rate traffic. (a) 1 Reno flow (NewReno flow very similar), (b) 1 Vegas flow

Vegas improves on NewReno by having a much smoother transition from full utilization to congestion. This is due to its predictive congestion avoidance mechanism [1]. Also of note is that, strangely, Vegas appears to achieve some non-trivial bandwidth even in the face of CBR traffic above the link capacity. While I suspect that this is an artifact of my measurement process, it could be that Vegas settles on some tiny packet send rate that is able to steal back some bandwidth from the CBR traffic.

In all of these graphs, it is important to note that the curves are flat at low CBR bandwidths. This is because TCP is not given sufficient time to reach full utilization; re-running small portions of the simulation for 20 seconds instead of 15 seconds confirms that this is the case.

1.2.2 Packet Loss

The packet loss characteristics of Reno and NewReno are very similar. At somewhere around 6 Mb/s CBR bandwidth, both begin to experience non-trivial packet loss which increases exponentially (Figure 3(a)(c)), whether they are competing against each other or only against the CBR traffic. NewReno, with its more aggressive retransmit scheme, tends to have more packet loss than plain Reno.

Figure 3: Pairwise TCP flow packet loss comparisons in the presence of constant bit rate traffic. (a) 2 Reno flows, (b) 2 Vegas flows

Vegas performs quite differently. Since Vegas is optimized to avoid packet loss, it operates with near-zero packet loss until about 8 Mb/s CBR bandwidth, even when there are two Vegas flows competing. Unlike the Reno variants, Vegas lowers its bandwidth without significantly increasing packet loss; in fact, comparing the bandwidth and packet loss traces of Vegas, it is clear that packet loss and bandwidth drop are not as closely correlated as they are in the Reno variants. Even at the point where the CBR traffic fully loads the link, Vegas still maintains much lower packet losses, but only because it becomes much more selective about when it sends packets.

2 Part 2 - Influence of Queuing

2.1 Method

My method for Part 2 followed closely from my method for Part 1. I ran two different classes of experiments: one pitting a TCP flow against a UDP flow under different queuing disciplines, and another pitting three UDP flows against each other under different queuing disciplines. Again, I wrote a script for each experiment to perform the ns simulation (Listings 7, 11), render the data down into summary form (Listings 8, 12), and set off the whole process (Listings 6, 10); template gnuplot scripts (Listings 9, 13) produce the graphs.

The only particularly new part of my methodology for Part 2 was the end-to-end latency calculation. While there is likely a more space-efficient way to do this calculation, I took advantage of awk's arrays and stored the send time for every packet. Then, when the script encounters a packet receipt, it can directly calculate the latency. Every other scheme that I tried gave poor results.

2.2 Observations

2.2.1 TCP and UDP flows

Figure 4: TCP Sack and a UDP flow under different queuing disciplines. (a) DropTail, (b) RED

The first set of observations deals with the effect of changing the queuing discipline on a TCP flow and a UDP flow competing for bandwidth. The Reno and Sack flows respond in a similar fashion to the change of queuing discipline. In both cases the TCP flow uses all of the bandwidth until the UDP flow starts, at which point the TCP flow is overwhelmed (Figure 4).

An interesting side effect of using RED is that the total throughput becomes more variable. This is because, with RED, packets may be dropped before congestion occurs (in fact, that is the point) [3]. For a connection under constant congestion, there will be many more, and earlier, drops than under a simpler queuing scheme, so total throughput will drop. A related result is that the TCP flow is forced into congestion control more often, because the UDP flow constantly takes up as much bandwidth as possible. This is why, comparing the graphs of RED and DropTail, one can see that the TCP flow oscillates significantly more under RED.

Compared to Reno, SACK responds much better to the congestion. Though its throughput oscillates continuously, it never drops below a certain level, because SACK generally recovers from packet loss at a rate faster than slow start [2].

2.2.2 UDP flows

Figure 5: Throughput of 3 UDP flows under different queuing disciplines. (a) DropTail, (b) RED

When multiple UDP flows are the only traffic competing for bandwidth, the effect of the different queuing disciplines is quite different. For one, because the protocol lacks congestion control, the total throughput is always fairly close to the maximum for the bottleneck link.

The effect on the throughput of the individual flows is also worth noting. In the DropTail case (Figure 5(a)), it is clear that the scheme is not fair. Flows are favored at different times (Flow 3 at time 2, then Flow 1 at time 3), due entirely to which flow is unlucky enough to have its packets at the tail of the queue when dropping occurs. The flow receiving the greatest bandwidth is often receiving twice as much as the least, and the share for each flow appears to oscillate slowly throughout the experiment.

When using RED (Figure 5(b)), the scheme becomes much more fair. It is easy to see that RED's random drop scheme results in the same average bandwidth for each flow. Unfortunately, it also results in significantly higher variance in the throughput of any particular flow.

Additional insight can be gleaned from looking at the end-to-end latency of each flow.

Figure 6: Latency of 3 UDP flows under different queuing disciplines. (a) DropTail, (b) RED

The difference between the end-to-end latencies under the two queuing disciplines is quite illuminating. In the case of DropTail (Figure 6(a)), the latency quickly rises to an average level of 225 ms for each of the flows, and then oscillates around this level for the duration of the experiment. This correlates directly with the length of the queue at any given time: DropTail has a threshold for its queue length, and the only control the router exerts is to drop packets which arrive when the queue is filled to that threshold. This results in a steady, long queue.

RED, on the other hand, approaches queue management in a smarter way. It applies control (i.e., packet drops) in proportion to the size of the queue beyond some threshold. The end effect of this type of control is a queue size that oscillates around a smaller length (Figure 6(b)). This smaller queue size results in much better latency for all UDP flows, less than 25% of the latency seen with DropTail.
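The DropTail versus RED contrast described above can be made concrete with a small sketch. The following Python fragment is only an illustration of the two drop decisions, not part of my simulation scripts, and the function names and parameters are invented for the example. It also simplifies real RED, which averages the queue length with an EWMA and corrects the probability by the count of packets since the last drop [3]: here DropTail drops only when the queue is full, while RED drops with a probability that rises linearly with queue occupancy beyond a threshold.

```python
import random

def droptail_drop(queue_len, limit):
    """DropTail: drop an arriving packet only when the queue is full."""
    return queue_len >= limit

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Simplified RED drop probability: zero below min_th, rising
    linearly to max_p at max_th, and 1 at or above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def red_drop(avg_queue, min_th, max_th, max_p):
    """Randomized early-drop decision based on the probability above."""
    return random.random() < red_drop_probability(avg_queue, min_th, max_th, max_p)
```

With, say, min_th=5, max_th=15, and max_p=0.1, a queue averaging 10 packets sees a 5% early-drop probability; it is this early, proportional pressure that keeps the RED queue, and hence the latency, short.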
A Source Listings

Listing 1: Meta script to automate all of part 1

#!/usr/bin/tclsh
exec hw1-part1.tcl Reno Reno
exec hw1-part1.tcl Newreno Reno
exec hw1-part1.tcl Vegas Vegas
exec hw1-part1.tcl Newreno Vegas
exec hw1-part1.tcl Reno
exec hw1-part1.tcl Newreno
exec hw1-part1.tcl Vegas

Listing 2: Main script to execute one pair of agents or one single agent

#!/usr/bin/tclsh

# check command line
if { $argc < 1 || $argc > 2 } {
    puts "Command Line Usage: hw1-part1.tcl agent0 \[agent1\]"
    exit 1
}
set agent0 [lindex $argv 0]

# run the cbr rates with one or two tcp flows, then make the
# graphs and archive the data
if { $argc == 2 } {
    set agent1 [lindex $argv 1]
    exec echo "CBR Loss  CBR Avg Bandwidth  TCP0 Loss  TCP0 Avg Bandwidth  TCP1 Loss  TCP1 Avg Bandwidth" > hw1-part1-${agent0}${agent1}.data
    for { set i 0.5 } { $i <= 11.0 } { set i [expr { $i + 0.5 }] } {
        exec ns hw1-part1-onerate.tcl ${i}Mb $agent0 $agent1
    }
    exec hw1-part1-makegraphs.tcl $agent0 $agent1
    exec mv hw1-part1-${agent0}${agent1}.data ./data/
} else {
    exec echo "CBR Loss  CBR Avg Bandwidth  TCP0 Loss  TCP0 Avg Bandwidth" > hw1-part1-${agent0}.data
    for { set i 0.5 } { $i <= 11.0 } { set i [expr { $i + 0.5 }] } {
        exec ns hw1-part1-onerate.tcl ${i}Mb $agent0
    }
    exec hw1-part1-makegraphs.tcl $agent0
    exec mv hw1-part1-${agent0}.data ./data/
}
exec rm hw1.out

Listing 3: Script to run ns based on a single rate and pair of agents

# at the end, close the file and run the summary awk script
proc finish {} {
    global ns nf agent0 agent1 argc
    $ns flush-trace
    close $nf
    if { $argc == 3 } {
        exec awk -f hw1-part1-twographs-summary.awk hw1.out >> hw1-part1-${agent0}${agent1}.data
    } elseif { $argc == 2 } {
        exec awk -f hw1-part1-onegraph-summary.awk hw1.out >> hw1-part1-${agent0}.data
    }
    exit 0
}

# check the command line
if { $argc < 2 || $argc > 3 } {
    puts "Command Line Usage: ns hw1-part1-onerate.tcl cbr_rate agent0 \[agent1\]"
    exit 1
}
set cbr_rate [lindex $argv 0]
set agent0 [lindex $argv 1]
if { $argc == 3 } {
    set agent1 [lindex $argv 2]
} else {
    set agent1 ""
}

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]
set n6 [$ns node]

$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns duplex-link $n5 $n2 10Mb 10ms DropTail
$ns duplex-link $n2 $n3 10Mb 10ms DropTail
$ns duplex-link $n3 $n4 10Mb 10ms DropTail
$ns duplex-link $n3 $n6 10Mb 10ms DropTail

# CBR source config
set cbr_source [new Agent/UDP]
$ns attach-agent $n2 $cbr_source
set cbr_traffic [new Application/Traffic/CBR]
$cbr_traffic set packetSize_ 100
$cbr_traffic set rate_ $cbr_rate
$cbr_traffic attach-agent $cbr_source
set cbr_sink [new Agent/Null]
$ns attach-agent $n3 $cbr_sink

# TCP flow #1 config
set tcp0_source [new Agent/TCP/$agent0]
$tcp0_source set fid_ 0
$ns attach-agent $n1 $tcp0_source
set tcp0_sink [new Agent/TCPSink]
$ns attach-agent $n4 $tcp0_sink
set tcp0_traffic [new Application/FTP]
$tcp0_traffic attach-agent $tcp0_source

# TCP flow #2 config (if applicable)
if { $argc == 3 } {
    set tcp1_source [new Agent/TCP/$agent1]
    $tcp1_source set fid_ 1
    $ns attach-agent $n5 $tcp1_source
    set tcp1_sink [new Agent/TCPSink]
    $ns attach-agent $n6 $tcp1_sink
    set tcp1_traffic [new Application/FTP]
    $tcp1_traffic attach-agent $tcp1_source
    $ns connect $tcp1_source $tcp1_sink
}

$ns connect $cbr_source $cbr_sink
$ns connect $tcp0_source $tcp0_sink

# schedule everything
$ns at 0.0 "$cbr_traffic start"
$ns at 0.0 "$tcp0_traffic start"
if { $argc == 3 } {
    $ns at 0.0 "$tcp1_traffic start"
}
$ns at 15.0 "$cbr_traffic stop"
$ns at 15.0 "$tcp0_traffic stop"
if { $argc == 3 } {
    $ns at 15.0 "$tcp1_traffic stop"
}
$ns at 15.0 "finish"

# GO!
$ns run

Listing 4: AWK script to generate one line of bandwidths and loss rates based on one run of the previous script

# extract bandwidth and loss rate
/^r.*tcp/ {
    if ($8 == 0 && $4 == 3) { bytes_received0 += $6 }
    else if ($8 == 1 && $4 == 5) { bytes_received1 += $6 }
}
/^\+.*tcp/ {
    if ($8 == 0 && $3 == 0) { bytes_sent0 += $6 }
    else if ($8 == 1 && $3 == 4) { bytes_sent1 += $6 }
}
/^r.*cbr/ { cbr_received += $6 }
/^\+.*cbr/ { cbr_sent += $6 }
END {
    printf("%f %f %f %f %f %f\n",
        ((cbr_sent - cbr_received) / cbr_sent) * 100,
        (cbr_sent * 8.0 / 15.0) / 1000000,
        ((bytes_sent0 - bytes_received0) / bytes_sent0) * 100,
        (bytes_sent0 * 8.0 / 15.0) / 1000000,
        ((bytes_sent1 - bytes_received1) / bytes_sent1) * 100,
        (bytes_sent1 * 8.0 / 15.0) / 1000000);
}

Listing 5: Template gnuplot file

set output "graphs/hw1-part1-AGENT1AGENT2-loss.png"
set terminal png
set xlabel "CBR Bandwidth (Mb/s)"
set data style lines
set key outside
set ylabel "Loss Rate (%)"
set yrange [0:]
plot "hw1-part1-AGENT1AGENT2.data" using 2:3 title "AGENT1 Loss Rate", \
     "hw1-part1-AGENT1AGENT2.data" using 2:5 title "AGENT2 Loss Rate", \
     "hw1-part1-AGENT1AGENT2.data" using 2:1 title "CBR Loss Rate"

set output "graphs/hw1-part1-AGENT1AGENT2-bw.png"
set ylabel "Average Bandwidth (Mb/s)"
set yrange [0:3]
plot "hw1-part1-AGENT1AGENT2.data" using 2:4 title "AGENT1 Avg Bandwidth", \
     "hw1-part1-AGENT1AGENT2.data" using 2:6 title "AGENT2 Avg Bandwidth"

Listing 6: TCP/UDP Queuing Meta Tcl script

#!/usr/bin/tclsh
exec ns hw1-part2-1.tcl Reno DropTail
exec ns hw1-part2-1.tcl Reno RED
exec ns hw1-part2-1.tcl Sack1 DropTail
exec ns hw1-part2-1.tcl Sack1 RED

Listing 7: TCP/UDP Queuing NS Tcl script

# summarize the data and generate the plots
proc finish {} {
    global ns nf agent queuing
    $ns flush-trace
    close $nf
    exec echo "Time  CBR Throughput  TCP Throughput  CBR Loss  TCP Loss  Total Throughput" > hw1-part2-1-${agent}${queuing}.data
    exec awk -f hw1-part2-1-summary.awk hw1.out >> hw1-part2-1-${agent}${queuing}.data
    exec cat hw1-part2-1-graph-template | \
        sed s/AGENT/${agent}/g | \
        sed s/QUEUING/${queuing}/g | \
        gnuplot
    exec mv hw1-part2-1-${agent}${queuing}.data data/
    exec rm hw1.out
    exit 0
}

# process the command line
if { $argc < 2 || $argc > 2 } {
    puts "Command Line Usage: ns hw1-part2-1.tcl agent queuing"
    exit 1
}
set agent [lindex $argv 0]
set queuing [lindex $argv 1]

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]
set n6 [$ns node]

$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns duplex-link $n5 $n2 10Mb 10ms DropTail
$ns duplex-link $n2 $n3 1.5Mb 10ms $queuing
$ns duplex-link $n3 $n4 10Mb 10ms DropTail
$ns duplex-link $n3 $n6 10Mb 10ms DropTail

# CBR source
set cbr_source [new Agent/UDP]
$ns attach-agent $n5 $cbr_source
set cbr_traffic [new Application/Traffic/CBR]
$cbr_traffic set packetSize_ 500
$cbr_traffic set rate_ 1.0Mb
$cbr_traffic attach-agent $cbr_source
set cbr_sink [new Agent/Null]
$ns attach-agent $n6 $cbr_sink

# TCP source
set tcp0_source [new Agent/TCP/$agent]
$tcp0_source set fid_ 0
$ns attach-agent $n1 $tcp0_source
if { $agent == "Sack1" } {
    set tcp0_sink [new Agent/TCPSink/Sack1]
} else {
    set tcp0_sink [new Agent/TCPSink]
}
$ns attach-agent $n4 $tcp0_sink
set tcp0_traffic [new Application/FTP]
$tcp0_traffic set packetSize_ 1000
$tcp0_traffic attach-agent $tcp0_source

$ns connect $cbr_source $cbr_sink
$ns connect $tcp0_source $tcp0_sink

# schedule everything...
$ns at 0.0 "$tcp0_traffic start"
$ns at 2.0 "$cbr_traffic start"
$ns at 15.0 "$cbr_traffic stop"
$ns at 15.0 "$tcp0_traffic stop"
$ns at 15.0 "finish"

# go!
$ns run

Listing 8: TCP/UDP Queuing Summary AWK script

# big, nasty awk script to extract loss rate and throughput by time
function reset_counters() {
    cbr_sent = 0;
    cbr_received = 0;
    bytes_received = 0;
    bytes_sent = 0;
}
function print_line() {
    cbr_loss = (cbr_sent > 0) ? (((cbr_sent - cbr_received) / cbr_sent) * 100) : (0);
    bytes_loss = (bytes_sent > 0) ? (((bytes_sent - bytes_received) / bytes_sent) * 100) : (0);
    cbr_throughput = (cbr_received * 8.0 / time_incr) / 1000;
    bytes_throughput = (bytes_received * 8.0 / time_incr) / 1000;
    printf("%f %f %f %f %f %f\n", next_time, cbr_throughput, bytes_throughput,
        cbr_loss, bytes_loss, cbr_throughput + bytes_throughput);
    reset_counters();
    next_time = time_incr + next_time
}
BEGIN {
    time_incr = 0.1;
    max_time = 15.0;
    reset_counters();
    next_time = time_incr;
}
/^r.*tcp/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 3) { bytes_received += $6 }
}
/^r.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 5) { cbr_received += $6 }
}
/^\+.*tcp/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 0) { bytes_sent += $6; }
}
/^\+.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 4) { cbr_sent += $6; }
}
END {
    if (next_time <= max_time) { print_line(); }
}

Listing 9: TCP/UDP Queuing gnuplot graph template

set output "graphs/hw1-part2-1-AGENTQUEUING-throughput.png"
set terminal png
set xlabel "Time (s)"
set data style lines
set key outside
set ylabel "Throughput (Kbps)"
set yrange [0:]
plot "hw1-part2-1-AGENTQUEUING.data" using 1:3 title "AGENT Throughput", \
     "hw1-part2-1-AGENTQUEUING.data" using 1:2 title "CBR Throughput", \
     "hw1-part2-1-AGENTQUEUING.data" using 1:6 title "Total Throughput"

set output "graphs/hw1-part2-1-AGENTQUEUING-loss.png"
set ylabel "Packet Loss (%)"
set yrange [0:]
plot "hw1-part2-1-AGENTQUEUING.data" using 1:5 title "AGENT Loss Rate", \
     "hw1-part2-1-AGENTQUEUING.data" using 1:4 title "CBR Loss Rate"

Listing 10: UDP Queuing Meta Tcl script

#!/usr/bin/tclsh
exec ns hw1-part2-2.tcl DropTail
exec ns hw1-part2-2.tcl RED

Listing 11: UDP Queuing NS Tcl script

# summarize the data, generate graphs
proc finish {} {
    global ns nf queuing
    $ns flush-trace
    close $nf
    exec echo "Time  UDP1 Throughput  UDP2 Throughput  UDP3 Throughput  Total Throughput  UDP1 Latency  UDP2 Latency  UDP3 Latency" > hw1-part2-2-${queuing}.data
    exec awk -f hw1-part2-2-summary.awk hw1.out >> hw1-part2-2-${queuing}.data
    exec cat hw1-part2-2-graph-template | \
        sed s/QUEUING/${queuing}/g | \
        gnuplot
    exec mv hw1-part2-2-${queuing}.data data/
    #exec rm hw1.out
    exit 0
}

# process the command line
if { $argc < 1 || $argc > 2 } {
    puts "Command Line Usage: ns hw1-part2-2.tcl queuing"
    exit 1
}
set queuing [lindex $argv 0]

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
$ns duplex-link $n1 $n2 1.5Mb 10ms $queuing

set packet_sizes(0) 1000
set packet_sizes(1) 1000
set packet_sizes(2) 500
set rates(0) 1Mbps
set rates(1) 1Mbps
set rates(2) 0.6Mbps

# configure three CBR traffic sources and sinks
for { set i 0 } { $i < 3 } { incr i } {
    set cbr_source($i) [new Agent/UDP]
    $cbr_source($i) set fid_ $i
    $ns attach-agent $n1 $cbr_source($i)
    set cbr_traffic($i) [new Application/Traffic/CBR]
    $cbr_traffic($i) set packetSize_ $packet_sizes($i)
    $cbr_traffic($i) set rate_ $rates($i)
    $cbr_traffic($i) attach-agent $cbr_source($i)
    set cbr_sink($i) [new Agent/Null]
    $ns attach-agent $n2 $cbr_sink($i)
    $ns connect $cbr_source($i) $cbr_sink($i)
}

# schedule everyone
$ns at 0.0 "$cbr_traffic(0) start"
$ns at 0.1 "$cbr_traffic(1) start"
$ns at 0.2 "$cbr_traffic(2) start"
$ns at 5.0 "$cbr_traffic(0) stop"
$ns at 5.0 "$cbr_traffic(1) stop"
$ns at 5.0 "$cbr_traffic(2) stop"
$ns at 5.0 "finish"

# and go!
$ns run

Listing 12: UDP Queuing Summary AWK script

# awk script to extract throughput and latency
function reset_counters() {
    bytes_s[0] = 0; bytes_s[1] = 0; bytes_s[2] = 0;
    bytes_r[0] = 0; bytes_r[1] = 0; bytes_r[2] = 0;
    times_s[0] = 0; times_s[1] = 0; times_s[2] = 0;
    times_r[0] = 0; times_r[1] = 0; times_r[2] = 0;
    pack_r[0] = 0; pack_r[1] = 0; pack_r[2] = 0;
}
function print_line() {
    for (i = 0; i < 3; i++) {
        throughput[i] = (bytes_r[i] * 8.0 / time_incr) / 1000;
        if (pack_r[i] != 0) {
            latency[i] = 1000 * (times_r[i]) / pack_r[i];
        } else {
            latency[i] = 0;
        }
    }
    printf("%f %f %f %f %f %f %f %f\n", next_time,
        throughput[0], throughput[1], throughput[2],
        throughput[0] + throughput[1] + throughput[2],
        latency[0], latency[1], latency[2]);
    reset_counters();
    next_time = time_incr + next_time
}
BEGIN {
    time_incr = 0.1;
    max_time = 5.0;
    reset_counters();
    next_time = time_incr;
}
/^r.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 1) {
        bytes_r[$8] += $6;
        times_r[$8] += $2 - times_s[$12];
        pack_r[$8]++;
    }
}
/^\+.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 0) {
        bytes_s[$8] += $6;
        times_s[$12] = $2;
        pack_s[$8]++
    }
}
END {
    if (next_time <= max_time) { print_line(); }
}

Listing 13: UDP Queuing gnuplot graph template

set output "graphs/hw1-part2-2-QUEUING-throughput.png"
set terminal png
set xlabel "Time (s)"
set data style lines
set key outside
set ylabel "Throughput (Kbps)"
set yrange [0:]
plot "hw1-part2-2-QUEUING.data" using 1:2 title "UDP Flow 1 Throughput", \
     "hw1-part2-2-QUEUING.data" using 1:3 title "UDP Flow 2 Throughput", \
     "hw1-part2-2-QUEUING.data" using 1:4 title "UDP Flow 3 Throughput", \
     "hw1-part2-2-QUEUING.data" using 1:5 title "Total Throughput"

set output "graphs/hw1-part2-2-QUEUING-delay.png"
set ylabel "Latency (ms)"
set yrange [0:]
plot "hw1-part2-2-QUEUING.data" using 1:6 title "UDP Flow 1 Latency", \
     "hw1-part2-2-QUEUING.data" using 1:7 title "UDP Flow 2 Latency", \
     "hw1-part2-2-QUEUING.data" using 1:8 title "UDP Flow 3 Latency"
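As a complement to Listing 12, the send-time bookkeeping behind the latency columns can be sketched outside of awk. The following Python fragment is only an illustration: the simplified event-tuple format and the function name are invented for the example and do not match the real ns-2 trace layout. It remembers each packet's enqueue time keyed by its unique packet id and resolves the latency when the matching receive event is seen, the same trick the awk script plays with its times_s array.

```python
def mean_latencies(events):
    """Mean end-to-end latency per flow from simplified trace events.

    Each event is (op, time, flow_id, pkt_id), where op is '+' for an
    enqueue at the source and 'r' for a receive at the sink.
    """
    send_time = {}   # pkt_id -> time the packet was enqueued
    total = {}       # flow_id -> summed per-packet latency
    count = {}       # flow_id -> number of packets received
    for op, t, flow, pkt in events:
        if op == '+':
            send_time[pkt] = t
        elif op == 'r' and pkt in send_time:
            total[flow] = total.get(flow, 0.0) + (t - send_time[pkt])
            count[flow] = count.get(flow, 0) + 1
    return {flow: total[flow] / count[flow] for flow in total}
```

For example, two packets of one flow sent at t=0.0 and t=0.1 and received at t=0.05 and t=0.25 yield per-packet latencies of 0.05 s and 0.15 s, for a mean of 0.1 s. Packets that are never received simply leave an unused entry in send_time, which is why dropped packets do not distort the average.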
References

[1] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson. TCP Vegas: New techniques for congestion detection and avoidance. In SIGCOMM, pages 24-35, 1994.

[2] K. Fall and S. Floyd. Simulation-based comparisons of Tahoe, Reno and SACK TCP. Computer Communication Review, 26(3):5-21, July 1996.

[3] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, August 1993.
More informationPerformance Effects of Two-way FAST TCP
Performance Effects of Two-way FAST TCP Fei Ge a, Sammy Chan b, Lachlan L. H. Andrew c, Fan Li b, Liansheng Tan a, Moshe Zukerman b a Dept. of Computer Science, Huazhong Normal University, Wuhan, P.R.China
More informationLecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking
Lecture 7: Simulation of Markov Processes Pasi Lassila Department of Communications and Networking Contents Markov processes theory recap Elementary queuing models for data networks Simulation of Markov
More informationIN THIS PAPER, we describe a design oriented modelling
616 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 14, NO 3, JUNE 2006 A Positive Systems Model of TCP-Like Congestion Control: Asymptotic Results Robert Shorten, Fabian Wirth, and Douglas Leith Abstract We
More informationThese are special traffic patterns that create more stress on a switch
Myths about Microbursts What are Microbursts? Microbursts are traffic patterns where traffic arrives in small bursts. While almost all network traffic is bursty to some extent, storage traffic usually
More informationModeling and Stability of PERT
Modeling Stability of PET Yueping Zhang yueping@cs.tamu.edu I. SYSTEM MODEL Our modeling of PET is composed of three parts: window adjustment ED emulation queuing behavior. We start with the window dynamics.
More informationStability Analysis of TCP/RED Communication Algorithms
Stability Analysis of TCP/RED Communication Algorithms Ljiljana Trajković Simon Fraser University, Vancouver, Canada ljilja@cs.sfu.ca http://www.ensc.sfu.ca/~ljilja Collaborators Mingjian Liu and Hui Zhang
More informationMixed Stochastic and Event Flows
Simulation Mixed and Event Flows Modeling for Simulation Dynamics Robert G. Cole 1, George Riley 2, Derya Cansever 3 and William Yurcick 4 1 Johns Hopkins University 2 Georgia Institue of Technology 3
More informationAnalysis of Rate-distortion Functions and Congestion Control in Scalable Internet Video Streaming
Analysis of Rate-distortion Functions and Congestion Control in Scalable Internet Video Streaming Min Dai Electrical Engineering, Texas A&M University Dmitri Loguinov Computer Science, Texas A&M University
More informationWindow Flow Control Systems with Random Service
Window Flow Control Systems with Random Service Alireza Shekaramiz Joint work with Prof. Jörg Liebeherr and Prof. Almut Burchard April 6, 2016 1 / 20 Content 1 Introduction 2 Related work 3 State-of-the-art
More informationNICTA Short Course. Network Analysis. Vijay Sivaraman. Day 1 Queueing Systems and Markov Chains. Network Analysis, 2008s2 1-1
NICTA Short Course Network Analysis Vijay Sivaraman Day 1 Queueing Systems and Markov Chains Network Analysis, 2008s2 1-1 Outline Why a short course on mathematical analysis? Limited current course offering
More informationCongestion Control. Need to understand: What is congestion? How do we prevent or manage it?
Congestion Control Phenomenon: when too much traffic enters into system, performance degrades excessive traffic can cause congestion Problem: regulate traffic influx such that congestion does not occur
More informationAnalysis of the Increase and Decrease. Congestion Avoidance in Computer Networks
Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks Dah-Ming Chiu, Raj Jain Presented by: Ashish Vulimiri Congestion Control Congestion Avoidance Congestion Avoidance
More information2 Department of ECE, Jayaram College of Engineering and Technology, Pagalavadi, Trichy,
End-to-End Congestion Control using Polynomial Algorithms in Wired TCP Networs M.Chandrasearan, M.Kalpana, 2 and Dr.R.S.D.Wahida Banu 3 Assistant Professor, Department of ECE, Government College of Engineering,
More informationCompound TCP with Random Losses
Compound TCP with Random Losses Alberto Blanc 1, Konstantin Avrachenkov 2, Denis Collange 1, and Giovanni Neglia 2 1 Orange Labs, 905 rue Albert Einstein, 06921 Sophia Antipolis, France {alberto.blanc,denis.collange}@orange-ftgroup.com
More informationDiscrete-event simulations
Discrete-event simulations Lecturer: Dmitri A. Moltchanov E-mail: moltchan@cs.tut.fi http://www.cs.tut.fi/kurssit/elt-53606/ OUTLINE: Why do we need simulations? Step-by-step simulations; Classifications;
More informationSize-based Adaptive Bandwidth Allocation:
Size-based Adaptive Bandwidth Allocation: Optimizing the Average QoS for Elastic Flows Shanchieh Yang (scyang@ece.utexas.edu), Gustavo de Veciana (gustavo@ece.utexas.edu) Department of Electrical and Computer
More informationRunning jobs on the CS research cluster
C o m p u t e r S c i e n c e S e m i n a r s Running jobs on the CS research cluster How to get results while web surfing! by October 10, 2007 O u t l i n e Motivation Overview of CS cluster(s) Secure
More informationcommunication networks
Positive matrices associated with synchronised communication networks Abraham Berman Department of Mathematics Robert Shorten Hamilton Institute Douglas Leith Hamilton Instiute The Technion NUI Maynooth
More informationScheduling I. Today. Next Time. ! Introduction to scheduling! Classical algorithms. ! Advanced topics on scheduling
Scheduling I Today! Introduction to scheduling! Classical algorithms Next Time! Advanced topics on scheduling Scheduling out there! You are the manager of a supermarket (ok, things don t always turn out
More informationTheoretical Analysis of Performances of TCP/IP Congestion Control Algorithm with Different Distances
Theoretical Analysis of Performances of TCP/IP Congestion Control Algorithm with Different Distances Tsuyoshi Ito and Mary Inaba Department of Computer Science, The University of Tokyo 7-3-1 Hongo, Bunkyo-ku,
More informationLeopold Franzens University Innsbruck. Responding to Spurious Loss Events in TCP/IP. Master Thesis. Institute of Computer Science
Leopold Franzens University Innsbruck Institute of Computer Science Distributed and Parallel Systems Group Responding to Spurious Loss Events in TCP/IP Master Thesis Supervisor: Dr. Michael Welzl Author:
More informationOperational Laws Raj Jain
Operational Laws 33-1 Overview What is an Operational Law? 1. Utilization Law 2. Forced Flow Law 3. Little s Law 4. General Response Time Law 5. Interactive Response Time Law 6. Bottleneck Analysis 33-2
More informationA Quantitative View: Delay, Throughput, Loss
A Quantitative View: Delay, Throughput, Loss Antonio Carzaniga Faculty of Informatics University of Lugano September 27, 2017 Outline Quantitative analysis of data transfer concepts for network applications
More information384Y Project June 5, Stability of Congestion Control Algorithms Using Control Theory with an application to XCP
384Y Project June 5, 00 Stability of Congestion Control Algorithms Using Control Theory with an application to XCP . Introduction During recent years, a lot of work has been done towards the theoretical
More informationA Generalized FAST TCP Scheme
A Generalized FAST TCP Scheme Cao Yuan a, Liansheng Tan a,b, Lachlan L. H. Andrew c, Wei Zhang a, Moshe Zukerman d,, a Department of Computer Science, Central China Normal University, Wuhan 430079, P.R.
More informationInternet Congestion Control: Equilibrium and Dynamics
Internet Congestion Control: Equilibrium and Dynamics A. Kevin Tang Cornell University ISS Seminar, Princeton University, February 21, 2008 Networks and Corresponding Theories Power networks (Maxwell Theory)
More informationA Theoretical Study of Internet Congestion Control: Equilibrium and Dynamics
A Theoretical Study of Internet Congestion Control: Equilibrium and Dynamics Thesis by Jiantao Wang In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute
More informationGenerating Random Variates II and Examples
Generating Random Variates II and Examples Holger Füßler Holger Füßler Universität Mannheim Summer 2004 Side note: TexPoint» TexPoint is a Powerpoint add-in that enables the easy use of Latex symbols and
More informationA New Technique for Link Utilization Estimation
A New Technique for Link Utilization Estimation in Packet Data Networks using SNMP Variables S. Amarnath and Anurag Kumar* Dept. of Electrical Communication Engineering Indian Institute of Science, Bangalore
More informationTHE prediction of network behavior is an important task for
TCP Networ Calculus: The case of large delay-bandwidth product Eitan Altman, Konstantin Avrachenov, Chadi Baraat Abstract We present in this paper an analytical model for the calculation of networ load
More informationConcise Paper: Deconstructing MPTCP Performance
04 IEEE nd International Conference on Network Protocols Concise Paper: Deconstructing MPTCP Performance Behnaz Arzani, Alexander Gurney, Sitian Cheng, Roch Guerin and Boon Thau Loo University Of Pennsylvania
More informationNetwork Optimization and Control
Foundations and Trends R in Networking Vol. 2, No. 3 (2007) 271 379 c 2008 S. Shakkottai and R. Srikant DOI: 10.1561/1300000007 Network Optimization and Control Srinivas Shakkottai 1 and R. Srikant 2 1
More informationImpact of Queueing Delay Estimation Error on Equilibrium and Its Stability
Impact of Queueing Delay Estimation Error on Equilibrium and Its Stability Corentin Briat, Emre A. Yavuz, and Gunnar Karlsson ACCESS Linnaeus Center, KTH, SE-100 44 Stockholm, Sweden {cbriat,emreya,gk}@kth.se
More informationCongestion Control In The Internet Part 1: Theory. JY Le Boudec 2018
Congestion Control In The Internet Part 1: Theory JY Le Boudec 2018 1 Contents 1. What is the problem; congestion collapse 2. Efficiency versus Fairness 3. Definitions of fairness 4. Additive Increase
More informationCongestion Control. Phenomenon: when too much traffic enters into system, performance degrades excessive traffic can cause congestion
Congestion Control Phenomenon: when too much traffic enters into system, performance degrades excessive traffic can cause congestion Problem: regulate traffic influx such that congestion does not occur
More informationA positive systems model of TCP-like congestion control: Asymptotic results
A positive systems model of TCP-like congestion control: Asymptotic results Robert Shorten Fabian Wirth Douglas Leith Abstract In this paper we study communication networks that employ drop-tail queueing
More informationContinuous-time hidden Markov models for network performance evaluation
Performance Evaluation 49 (2002) 129 146 Continuous-time hidden Markov models for network performance evaluation Wei Wei, Bing Wang, Don Towsley Department of Computer Science, University of Massachusetts,
More informationA positive systems model of TCP-like congestion control: Asymptotic results
IEEE/ACM TRANSACTIONS ON NETWORKING A positive systems model of TCP-like congestion control: Asymptotic results Robert Shorten, Fabian Wirth, Douglas Leith Abstract We study communication networks that
More informationAppendix A Prototypes Models
Appendix A Prototypes Models This appendix describes the model of the prototypes used in Chap. 3. These mathematical models can also be found in the Student Handout by Quanser. A.1 The QUANSER SRV-02 Setup
More informationTCP modeling in the presence of nonlinear window growth
TCP modeling in the presence of nonlinear window growth Eitan Altman, Kostia Avrachenkov, Chadi Barakat Rudesindo Núñez-Queija Abstract We develop a model for TCP that accounts for both sublinearity and
More informationcs/ee/ids 143 Communication Networks
cs/ee/ids 143 Communication Networks Chapter 5 Routing Text: Walrand & Parakh, 2010 Steven Low CMS, EE, Caltech Warning These notes are not self-contained, probably not understandable, unless you also
More informationCapacity management for packet-switched networks with heterogeneous sources. Linda de Jonge. Master Thesis July 29, 2009.
Capacity management for packet-switched networks with heterogeneous sources Linda de Jonge Master Thesis July 29, 2009 Supervisors Dr. Frank Roijers Prof. dr. ir. Sem Borst Dr. Andreas Löpker Industrial
More informationInformation in Aloha Networks
Achieving Proportional Fairness using Local Information in Aloha Networks Koushik Kar, Saswati Sarkar, Leandros Tassiulas Abstract We address the problem of attaining proportionally fair rates using Aloha
More informationCS 798: Homework Assignment 3 (Queueing Theory)
1.0 Little s law Assigned: October 6, 009 Patients arriving to the emergency room at the Grand River Hospital have a mean waiting time of three hours. It has been found that, averaged over the period of
More informationUtility, Fairness and Rate Allocation
Utility, Fairness and Rate Allocation Laila Daniel and Krishnan Narayanan 11th March 2013 Outline of the talk A rate allocation example Fairness criteria and their formulation as utilities Convex optimization
More informationA flow-based model for Internet backbone traffic
A flow-based model for Internet backbone traffic Chadi Barakat, Patrick Thiran Gianluca Iannaccone, Christophe iot Philippe Owezarski ICA - SC - EPFL Sprint Labs LAAS-CNRS {Chadi.Barakat,Patrick.Thiran}@epfl.ch
More informationATM VP-Based Ring Network Exclusive Video or Data Traffics
ATM VP-Based Ring Network Exclusive Video or Data Traffics In this chapter, the performance characteristic of the proposed ATM VP-Based Ring Network exclusive video or data traffic is studied. The maximum
More informationComparison of TCP Reno and TCP Vegas via Fluid Approximation
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE Comparison of TCP Reno and TCP Vegas via Fluid Approximation Thomas Bonald N 3563 Novembre 1998 THÈME 1 apport de recherche ISSN 0249-6399
More informationTransient Behaviors of TCP-friendly Congestion Control Protocols
Abbreviated version in Proceedings of IEEE INFOCOM 21, April 21. Transient Behaviors of -friendly Congestion Control Protocols Y. Richard Yang, Min Sik Kim, Simon S. Lam Department of Computer Sciences
More informationCSE 123: Computer Networks
CSE 123: Computer Networks Total points: 40 Homework 1 - Solutions Out: 10/4, Due: 10/11 Solutions 1. Two-dimensional parity Given below is a series of 7 7-bit items of data, with an additional bit each
More informationEnd-to-end Estimation of the Available Bandwidth Variation Range
1 End-to-end Estimation of the Available Bandwidth Variation Range Manish Jain Georgia Tech jain@cc.gatech.edu Constantinos Dovrolis Georgia Tech dovrolis@cc.gatech.edu Abstract The available bandwidth
More informationRate adaptation, Congestion Control and Fairness: A Tutorial. JEAN-YVES LE BOUDEC Ecole Polytechnique Fédérale de Lausanne (EPFL)
Rate adaptation, Congestion Control and Fairness: A Tutorial JEAN-YVES LE BOUDEC Ecole Polytechnique Fédérale de Lausanne (EPFL) December 2000 2 Contents 31 Congestion Control for Best Effort: Theory 1
More informationA Utility-Based Congestion Control Scheme for Internet-Style Networks with Delay
A Utility-Based ongestion ontrol Scheme for Internet-Style Networks with Delay Tansu Alpcan and Tamer Başar (alpcan, tbasar)@control.csl.uiuc.edu Abstract In this paper, we develop, analyze and implement
More informationE8 TCP. Politecnico di Milano Scuola di Ingegneria Industriale e dell Informazione
E8 TP Politecnico di Milano Scuola di Ingegneria Industriale e dell Informazione Exercises o onsider the connection in the figure A =80 kbit/s τ =0 ms R =? τ =? B o o o Host A wants to know the capacity
More informationMPTCP is not Pareto-Optimal: Performance Issues and a Possible Solution
MPTCP is not Pareto-Optimal: Performance Issues and a Possible Solution Ramin Khalili, Nicolas Gast, Miroslav Popovic, Jean-Yves Le Boudec To cite this version: Ramin Khalili, Nicolas Gast, Miroslav Popovic,
More informationDistributed Systems Principles and Paradigms. Chapter 06: Synchronization
Distributed Systems Principles and Paradigms Maarten van Steen VU Amsterdam, Dept. Computer Science Room R4.20, steen@cs.vu.nl Chapter 06: Synchronization Version: November 16, 2009 2 / 39 Contents Chapter
More informationEmulating Low-priority Transport at the Application Layer: A Background Transfer Service
Emulating Low-priority Transport at the Application Layer: A Background Transfer Service Peter Key Microsoft Research Roger Needham Building 7 J J Thomson Avenure Cambridge, CB3 FB, UK peterkey@microsoftcom
More informationAn adaptive LQG TCP congestion controller for the Internet
Paper An adaptive LQG TCP congestion controller for the Internet Langford B White and Belinda A Chiera Abstract This paper addresses the problem of congestion control for transmission control protocol
More informationExtended Analysis of Binary Adjustment Algorithms
1 Extended Analysis of Binary Adjustment Algorithms Sergey Gorinsky Harrick Vin Technical Report TR22-39 Department of Computer Sciences The University of Texas at Austin Taylor Hall 2.124, Austin, TX
More informationThe Analysis of Microburst (Burstiness) on Virtual Switch
The Analysis of Microburst (Burstiness) on Virtual Switch Chunghan Lee Fujitsu Laboratories 09.19.2016 Copyright 2016 FUJITSU LABORATORIES LIMITED Background What is Network Function Virtualization (NFV)?
More informationNONLINEAR CONTINUOUS FEEDBACK CONTROLLERS. A Thesis SAI GANESH SITHARAMAN
NONLINEAR CONTINUOUS FEEDBACK CONTROLLERS A Thesis by SAI GANESH SITHARAMAN Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree
More informationRandom Access Game. Medium Access Control Design for Wireless Networks 1. Sandip Chakraborty. Department of Computer Science and Engineering,
Random Access Game Medium Access Control Design for Wireless Networks 1 Sandip Chakraborty Department of Computer Science and Engineering, INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR October 22, 2016 1 Chen
More informationScalable Scheduling with Burst Mapping in IEEE e (Mobile) WiMAX Networks
Scalable Scheduling with Burst Mapping in IEEE 802.16e (Mobile) WiMAX Networks Mukakanya Abel Muwumba and Idris A. Rai Makerere University P.O. Box 7062 Kampala, Uganda abelmuk@gmail.com rai@cit.mak.ac.ug
More informationSingular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays.
Singular perturbation analysis of an additive increase multiplicative decrease control algorithm under time-varying buffering delays. V. Guffens 1 and G. Bastin 2 Intelligent Systems and Networks Research
More informationAn Optimal Index Policy for the Multi-Armed Bandit Problem with Re-Initializing Bandits
An Optimal Index Policy for the Multi-Armed Bandit Problem with Re-Initializing Bandits Peter Jacko YEQT III November 20, 2009 Basque Center for Applied Mathematics (BCAM), Bilbao, Spain Example: Congestion
More informationA Realistic Simulation Model for Peer-to-Peer Storage Systems
A Realistic Simulation Model for Peer-to-Peer Storage Systems Abdulhalim Dandoush INRIA Sophia Antipolis B.P. 93 06902 Sophia Antipolis France adandous@sophia.inria.fr Sara Alouf INRIA Sophia Antipolis
More informationAbstract. This paper discusses the shape of the RED drop function necessary to confirm the requirements
On the Non-Linearity of the RED Drop Function Erich Plasser, Thomas Ziegler, Peter Reichl Telecommunications Research Center Vienna Donaucity Strasse 1, 122 Vienna, Austria plasser, ziegler, reichl @ftw.at
More informationCongestion Control. Topics
Congestion Control Topics Congestion control what & why? Current congestion control algorithms TCP and UDP Ideal congestion control Resource allocation Distributed algorithms Relation current algorithms
More informationSignalling Analysis for Adaptive TCD Routing in ISL Networks *
COST 272 Packet-Oriented Service delivery via Satellite Signalling Analysis for Adaptive TCD Routing in ISL Networks * Ales Svigelj, Mihael Mohorcic, Gorazd Kandus Jozef Stefan Institute, Ljubljana, Slovenia
More informationBounded Delay for Weighted Round Robin with Burst Crediting
Bounded Delay for Weighted Round Robin with Burst Crediting Sponsor: Sprint Kert Mezger David W. Petr Technical Report TISL-0230-08 Telecommunications and Information Sciences Laboratory Department of
More informationDistributed Systems Principles and Paradigms
Distributed Systems Principles and Paradigms Chapter 6 (version April 7, 28) Maarten van Steen Vrije Universiteit Amsterdam, Faculty of Science Dept. Mathematics and Computer Science Room R4.2. Tel: (2)
More informationStochastic Hybrid Systems: Applications to Communication Networks
research supported by NSF Stochastic Hybrid Systems: Applications to Communication Networks João P. Hespanha Center for Control Engineering and Computation University of California at Santa Barbara Talk
More information