CS276 Homework 1: ns-2

Erik Peterson
October 28, 2006

1 Part 1 - Fairness between TCP variants

1.1 Method

After learning ns-2, I wrote a script (Listing 3) that runs a simulation of one or two TCP flows plus a CBR traffic source on the given topology. The script takes a bit rate for the CBR traffic and one or two TCP types (Reno, Newreno, Vegas) for the TCP flows. It configures the simulation and runs it for 15 seconds. At the end of the simulation it parses the trace file with an AWK script (Listing 4) designed to extract the loss rate and bandwidth of each flow. I then wrote a script (Listing 2) which runs a full set of CBR rates, given one or two TCP variants, by calling the first script a number of times. At the end of that script, a third script is called which generates loss and bandwidth graphs for the set of experiments and copies the data to an archive directory. The graph script works by using sed to replace markers in a template gnuplot script file (Listing 5) with the appropriate values, and then running gnuplot. Finally, because I enjoy excessive automation, I made a meta script (Listing 1) which runs all of the desired experiments. This provides me with a single command to generate all of the appropriate graphs.
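The per-flow numbers that the summary step emits reduce to two formulas: loss rate as the percentage of sent bytes that were never received, and average bandwidth as bytes sent over the 15-second run. A minimal sketch of that arithmetic, written in Tcl purely for illustration (the proc and argument names here are mine; the actual computation is the AWK in Listing 4):

    # Illustrative restatement of the summary formulas from Listing 4.
    # bytes_sent / bytes_received are per-flow byte counters from the trace.
    proc flow_summary {bytes_sent bytes_received {duration 15.0}} {
        set loss_pct [expr {($bytes_sent - $bytes_received) * 100.0 / $bytes_sent}]
        set avg_bw   [expr {($bytes_sent * 8.0 / $duration) / 1000000.0}]
        return [list $loss_pct $avg_bw]
    }

    # e.g. a flow that sent 10,000,000 bytes of which 9,800,000 arrived:
    #   flow_summary 10000000 9800000   ;# -> 2.0 (% loss)  5.33 (Mb/s)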

1.2 Observations

1.2.1 Bandwidth

An immediate and not very interesting result is that two TCP flows of the same variant will tend to operate similarly and fairly. Much more interesting is how two different TCP variants respond to increasing CBR traffic. For example, in the interaction between Reno and NewReno (Figure 1(a)), it is clear that, though they operate fairly at lower CBR bandwidths, once the CBR bandwidth increases and packet losses become non-negligible, NewReno takes more bandwidth than Reno. This is likely due to NewReno's Fast Recovery enhancement in the presence of partial ACKs [2]; NewReno is able to respond very quickly to occasional single packet losses.

[Figure 1: Pairwise TCP flow bandwidth comparisons in the presence of constant bit rate traffic. (a) 1 Reno flow and 1 NewReno flow, (b) 1 NewReno flow and 1 Vegas flow]

The interaction between NewReno and Vegas is quite different (Figure 1(b)). Firstly, the two start out unfair, with NewReno taking as much bandwidth as it did when pitted against Reno and Vegas taking roughly 0.1 Mb/s less. This is puzzling, especially given that there ought to be more bandwidth available. Once packet losses start to occur as the CBR bandwidth increases, NewReno continues to act unfairly in most situations. This is due to Vegas's focus on reducing packet loss [1]; it responds to NewReno's aggressive policy by backing off. As we will see in Section 1.2.2, Vegas's packet loss is superior in many cases. NewReno is more aggressive, with the downside of losing more packets when it exceeds its available bandwidth.

The behavior of each TCP variant can be seen more clearly when each is run one at a time against the CBR traffic (Figure 2). In the presence of congestion, NewReno is clearly able to make use of higher bandwidth than Reno. Reno also tends to drop off precipitously, while NewReno is smoother. Both Reno and NewReno achieve near-zero bandwidth when the CBR bandwidth reaches the capacity of the link.

[Figure 2: Single TCP flow bandwidth in the presence of constant bit rate traffic. (a) 1 Reno flow (NewReno flow very similar), (b) 1 Vegas flow]

Vegas improves on NewReno by having a much smoother transition from full utilization to congestion. This is due to its predictive congestion avoidance mechanism [1]. Also of note is that, strangely, Vegas appears to achieve some non-trivial bandwidth even in the face of CBR traffic above the link capacity. While I suspect that this is an artifact of my measurement process, it could be that Vegas settles on some tiny packet send rate that is able to steal back some bandwidth from the CBR traffic.

In all of these graphs, it is important to note that the curves are flat for low CBR bandwidths. This is because TCP is not given sufficient time to reach full utilization; re-running small portions of the simulation for 20 seconds instead of 15 seconds confirms that this is the case.
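For context on the predictive mechanism mentioned for Vegas above: Vegas compares the throughput it expects from its current window against the throughput it actually measures, and backs off before losses occur. A rough sketch of that test, following [1]; the proc name, variable names, and the alpha/beta thresholds here are illustrative, not taken from the ns-2 sources:

    # Hedged sketch of Vegas congestion avoidance [1]; alpha/beta are the
    # usual per-RTT thresholds (in packets), not values read from ns-2.
    proc vegas_next_cwnd {cwnd base_rtt rtt alpha beta} {
        set expected [expr {double($cwnd) / $base_rtt}]          ;# best-case rate
        set actual   [expr {double($cwnd) / $rtt}]               ;# measured rate
        set diff     [expr {($expected - $actual) * $base_rtt}]  ;# packets queued in the network
        if {$diff < $alpha} { return [expr {$cwnd + 1}] }        ;# under-using the path: grow
        if {$diff > $beta}  { return [expr {$cwnd - 1}] }        ;# queue building: back off before loss
        return $cwnd
    }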

1.2.2 Packet Loss

The packet loss characteristics of Reno and NewReno are very similar. At somewhere around 6 Mb/s of CBR bandwidth, both begin to experience non-trivial packet loss which increases exponentially (Figure 3(a)), whether they are competing against each other or only against the CBR traffic. NewReno, with its more aggressive retransmit scheme, tends to have more packet loss than plain Reno.

Vegas performs quite differently. Since Vegas is optimized to avoid packet loss, it sees near-zero packet loss until about 8 Mb/s of CBR bandwidth, even when there are two Vegas flows competing. Unlike the Reno variants, Vegas lowers its bandwidth without significantly increasing packet loss; in fact, comparing the bandwidth and packet loss traces of Vegas, it is clear that packet loss and bandwidth drop are not as closely correlated as they are in the Reno variants. Even at the point where the CBR traffic fully loads the link, Vegas still maintains much lower packet losses, but only because it becomes much more selective about when it sends packets.

[Figure 3: Pairwise TCP flow packet loss comparisons in the presence of constant bit rate traffic. (a) 2 Reno flows, (b) 2 Vegas flows]

2 Part 2 - Influence of Queuing

2.1 Method

My method for Part 2 followed closely from my method for Part 1. I ran two different classes of experiments: one pitting a TCP flow against a UDP flow under different queuing disciplines, and another pitting three UDP flows against each other under different queuing disciplines. Again, I wrote a script for each experiment to perform the ns simulation (Listings 7, 11), render the data down into summary form (Listings 8, 12), and set off the whole process (Listings 6, 10); template gnuplot scripts (Listings 9, 13) provide the graphs. The only particularly new part of my methodology for Part 2 was the end-to-end latency calculation. While there is likely a more space-efficient way to do this calculation, I took advantage of awk's associative arrays and stored the send time for every packet. Then, when the script encounters a packet receipt, it can directly calculate that packet's latency. Every other scheme that I tried produced poor results.
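In compressed form, that bookkeeping amounts to a table of send times keyed by the unique packet id from the trace, with latency computed at the receive event. A Tcl rendering purely for illustration (the real implementation is the AWK in Listing 12; the proc names below are mine):

    # Illustrative sketch of the per-packet latency bookkeeping of Listing 12.
    # send_time() is keyed by the unique packet id column of the ns-2 trace.
    array set send_time {}

    proc note_send {pkt_id now} {
        global send_time
        set send_time($pkt_id) $now
    }

    proc latency_ms {pkt_id now} {
        global send_time
        return [expr {($now - $send_time($pkt_id)) * 1000.0}]
    }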

2.2 Observations

2.2.1 TCP and UDP flows

The first set of observations deals with the effect of changing the queuing discipline on a TCP flow and a UDP flow competing for bandwidth. The Reno and Sack flows respond in a similar fashion to the change of queuing discipline. In both cases the TCP flow uses all of the bandwidth until the UDP flow starts, at which point the TCP flow is overwhelmed (Figure 4). An interesting side-effect of using RED is that the total throughput becomes more variable. This is because it is possible, with RED, for packets to be dropped before congestion occurs (in fact, that is the point) [3]. For a connection with constant congestion, there will be many more, and earlier, drops than under a simpler queuing scheme, so total throughput will drop. A related result is that the TCP flow will be forced into congestion control more often, since the UDP flow will constantly take up as much bandwidth as possible. This is why, comparing the RED and DropTail graphs, one can see that the TCP flow oscillates significantly more under RED.

[Figure 4: TCP Sack and a UDP flow under different queuing disciplines. (a) DropTail, (b) RED]

Compared to Reno, SACK responds much better to the congestion. Though its throughput oscillates continuously, it never drops below a certain level, because SACK generally recovers from packet loss at a rate faster than slow start [2].

2.2.2 3 UDP flows

When multiple UDP flows are the only traffic competing for bandwidth, the effect of the different queuing disciplines is quite different. For one, because of the lack of congestion control in the protocol, the total throughput is always fairly close to the maximum for the bottleneck link. The effect on the throughput of the individual flows is also worth noting. In the DropTail case (Figure 5(a)), it is clear that the scheme is not fair. Flows are favored at different times (Flow 3 at time 2, then Flow 1 at time 3), due entirely to which flow is unlucky enough to have its packets at the tail of the queue when dropping occurs. The flow receiving the greatest bandwidth is often receiving twice as much as the least, and the share for each flow appears to oscillate slowly throughout the experiment.

[Figure 5: Throughput of 3 UDP flows under different queuing disciplines. (a) DropTail, (b) RED]

When using RED (Figure 5(b)), the scheme becomes much more fair. It is easy to see that RED's random drop scheme results in the same average bandwidth for each flow. Unfortunately, it also results in significantly higher variance in the throughput of any particular flow. Additional insight can be gleaned from looking at the end-to-end latency of each flow.

[Figure 6: Latency of 3 UDP flows under different queuing disciplines. (a) DropTail, (b) RED]

The difference between the end-to-end latencies under the two queuing disciplines is quite illuminating. In the case of DropTail (Figure 6(a)), the latency quickly rises to an average level of 225 ms for each of the flows, and then oscillates around this level for the duration of the experiment. This is easily correlated to the length of the queue at any given time: DropTail has a limit on its queue length, and the only control done by the router is to drop packets which arrive when the queue is filled to that limit. This results in a steady, long queue. RED, on the other hand, approaches queue management in a smarter way. It applies control (i.e., packet drops) in proportion to the size of the queue once it exceeds a threshold. The end effect of this type of control is a queue size that oscillates around a smaller length (Figure 6(b)). This smaller queue size results in much better latency for all UDP flows, less than 25% of the latency seen with DropTail.
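To make "control proportional to the size of the queue" concrete, the RED drop decision of [3] can be sketched as below. This is a simplified illustration: it omits RED's exponential averaging of the queue length and its spacing of drops since the last drop, and the parameter names are mine rather than ns-2's:

    # Hedged sketch of RED's drop probability as a function of the average
    # queue length [3]; min_th, max_th and max_p are illustrative parameters.
    proc red_drop_prob {avg_q min_th max_th max_p} {
        if {$avg_q < $min_th}  { return 0.0 }   ;# below the minimum threshold: never drop
        if {$avg_q >= $max_th} { return 1.0 }   ;# above the maximum threshold: always drop
        return [expr {$max_p * ($avg_q - $min_th) / ($max_th - $min_th)}]
    }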

A Source Listings

Listing 1: Meta script to automate all of part 1

#!/usr/bin/tclsh

exec hw1_part1.tcl Reno Reno
exec hw1_part1.tcl Newreno Reno
exec hw1_part1.tcl Vegas Vegas
exec hw1_part1.tcl Newreno Vegas
exec hw1_part1.tcl Reno
exec hw1_part1.tcl Newreno
exec hw1_part1.tcl Vegas

Listing 2: Main script to execute one pair of agents or one single agent

#!/usr/bin/tclsh

# check command line
if { $argc < 1 || $argc > 2 } {
    puts "Command Line Usage: hw1_part1.tcl agent0 \[agent1\]"
    exit 1
}

set agent0 [lindex $argv 0]

# run the cbr rates with one or two tcp flows, then make the graphs
# and archive the data
if { $argc == 2 } {
    set agent1 [lindex $argv 1]
    exec echo "CBR Loss  CBR Avg Bandwidth  TCP0 Loss  TCP0 Avg Bandwidth  TCP1 Loss  TCP1 Avg Bandwidth" > hw1_part1_${agent0}${agent1}.data
    for { set i 0.5 } { $i <= 11.0 } { set i [expr { $i + 0.25 }] } {
        exec ns hw1_part1_onerate.tcl ${i}Mb $agent0 $agent1
    }
    exec hw1_part1_makegraphs.tcl $agent0 $agent1
    exec mv hw1_part1_${agent0}${agent1}.data ./data/
} else {
    exec echo "CBR Loss  CBR Avg Bandwidth  TCP0 Loss  TCP0 Avg Bandwidth" > hw1_part1_${agent0}.data
    for { set i 0.5 } { $i <= 11.0 } { set i [expr { $i + 0.25 }] } {
        exec ns hw1_part1_onerate.tcl ${i}Mb $agent0
    }
    exec hw1_part1_makegraphs.tcl $agent0
    exec mv hw1_part1_${agent0}.data ./data/
}

exec rm hw1.out

Listing 3: Script to run ns based on a single rate and pair of agents

# at the end, close the trace file and run the summary awk script
proc finish {} {
    global ns nf agent0 agent1 argc
    $ns flush-trace
    close $nf
    if { $argc == 3 } {
        exec awk -f hw1_part1_twographs_summary.awk hw1.out >> hw1_part1_${agent0}${agent1}.data
    } elseif { $argc == 2 } {
        exec awk -f hw1_part1_onegraph_summary.awk hw1.out >> hw1_part1_${agent0}.data
    }
    exit 0
}

# check the command line
if { $argc < 2 || $argc > 3 } {
    puts "Command Line Usage: ns hw1_part1_onerate.tcl cbr_rate agent0 \[agent1\]"
    exit 1
}

set cbr_rate [lindex $argv 0]
set agent0 [lindex $argv 1]
if { $argc == 3 } {
    set agent1 [lindex $argv 2]
} else {
    set agent1 ""
}

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]
set n6 [$ns node]

$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns duplex-link $n5 $n2 10Mb 10ms DropTail
$ns duplex-link $n2 $n3 10Mb 10ms DropTail
$ns duplex-link $n3 $n4 10Mb 10ms DropTail
$ns duplex-link $n3 $n6 10Mb 10ms DropTail

# CBR source config
set cbr_source [new Agent/UDP]
$ns attach-agent $n2 $cbr_source
set cbr_traffic [new Application/Traffic/CBR]
$cbr_traffic set packetSize_ 100
$cbr_traffic set rate_ $cbr_rate
$cbr_traffic attach-agent $cbr_source

set cbr_sink [new Agent/Null]
$ns attach-agent $n3 $cbr_sink

# TCP flow #1 config
set tcp0_source [new Agent/TCP/$agent0]
$tcp0_source set fid_ 0
$ns attach-agent $n1 $tcp0_source

set tcp0_sink [new Agent/TCPSink]
$ns attach-agent $n4 $tcp0_sink
set tcp0_traffic [new Application/FTP]
$tcp0_traffic attach-agent $tcp0_source

# TCP flow #2 config (if applicable)
if { $argc == 3 } {
    set tcp1_source [new Agent/TCP/$agent1]
    $tcp1_source set fid_ 1
    $ns attach-agent $n5 $tcp1_source
    set tcp1_sink [new Agent/TCPSink]
    $ns attach-agent $n6 $tcp1_sink
    set tcp1_traffic [new Application/FTP]
    $tcp1_traffic attach-agent $tcp1_source
    $ns connect $tcp1_source $tcp1_sink
}

$ns connect $cbr_source $cbr_sink
$ns connect $tcp0_source $tcp0_sink

# schedule everything
$ns at 0.0 "$cbr_traffic start"
$ns at 0.0 "$tcp0_traffic start"
if { $argc == 3 } {
    $ns at 0.0 "$tcp1_traffic start"
}
$ns at 15.0 "$cbr_traffic stop"
$ns at 15.0 "$tcp0_traffic stop"
if { $argc == 3 } {
    $ns at 15.0 "$tcp1_traffic stop"
}
$ns at 15.0 "finish"

# GO!
$ns run

Listing 4: AWK script to generate one line of bandwidths and loss rates based on one run of the previous script

# extract bandwidth and loss rate
/^r.*tcp/ {
    if ($8 == 0 && $4 == 3) {
        bytes_received0 += $6
    } else if ($8 == 1 && $4 == 5) {
        bytes_received1 += $6
    }
}

/^\+.*tcp/ {
    if ($8 == 0 && $3 == 0) {
        bytes_sent0 += $6
    } else if ($8 == 1 && $3 == 4) {
        bytes_sent1 += $6
    }
}

/^r.*cbr/ { cbr_received += $6 }

/^\+.*cbr/ { cbr_sent += $6 }

END {
    printf("%f %f %f %f %f %f\n",
        ((cbr_sent - cbr_received) / cbr_sent) * 100,
        (cbr_sent * 8.0 / 15.0) / 1000000.0,
        ((bytes_sent0 - bytes_received0) / bytes_sent0) * 100,
        (bytes_sent0 * 8.0 / 15.0) / 1000000.0,
        ((bytes_sent1 - bytes_received1) / bytes_sent1) * 100,
        (bytes_sent1 * 8.0 / 15.0) / 1000000.0);
}

Listing 5: Template gnuplot file

set output "graphs/hw1_part1_AGENT1AGENT2_loss.png"
set terminal png
set xlabel "CBR Bandwidth (Mb/s)"
set data style lines
set key outside
set ylabel "Loss Rate (%)"
set yrange [0:100]
plot "hw1_part1_AGENT1AGENT2.data" using 2:3 title "AGENT1 Loss Rate", \
     "hw1_part1_AGENT1AGENT2.data" using 2:5 title "AGENT2 Loss Rate", \
     "hw1_part1_AGENT1AGENT2.data" using 2:1 title "CBR Loss Rate"

set output "graphs/hw1_part1_AGENT1AGENT2_bw.png"
set ylabel "Average Bandwidth (Mb/s)"
set yrange [0:3]
plot "hw1_part1_AGENT1AGENT2.data" using 2:4 title "AGENT1 Avg Bandwidth", \
     "hw1_part1_AGENT1AGENT2.data" using 2:6 title "AGENT2 Avg Bandwidth"

Listing 6: TCP/UDP Queuing Meta Tcl script

#!/usr/bin/tclsh

exec ns hw1_part2_1.tcl Reno DropTail
exec ns hw1_part2_1.tcl Reno RED
exec ns hw1_part2_1.tcl Sack1 DropTail
exec ns hw1_part2_1.tcl Sack1 RED

Listing 7: TCP/UDP Queuing NS Tcl script

# summarize the data and generate the plots
proc finish {} {
    global ns nf agent queuing
    $ns flush-trace
    close $nf
    exec echo "Time  CBR Throughput  TCP Throughput  CBR Loss  TCP Loss  Total Throughput" > hw1_part2_1_${agent}${queuing}.data
    exec awk -f hw1_part2_1_summary.awk hw1.out >> hw1_part2_1_${agent}${queuing}.data
    exec cat hw1_part2_1_graph_template \
        | sed s/AGENT/${agent}/g \
        | sed s/QUEUING/${queuing}/g \
        | gnuplot
    exec mv hw1_part2_1_${agent}${queuing}.data data/
    exec rm hw1.out
    exit 0
}

# process the command line
if { $argc < 2 || $argc > 2 } {
    puts "Command Line Usage: ns hw1_part2_1.tcl agent queuing"
    exit 1
}

set agent [lindex $argv 0]
set queuing [lindex $argv 1]

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]
set n6 [$ns node]

$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns duplex-link $n5 $n2 10Mb 10ms DropTail
$ns duplex-link $n2 $n3 1.5Mb 10ms $queuing
$ns duplex-link $n3 $n4 10Mb 10ms DropTail
$ns duplex-link $n3 $n6 10Mb 10ms DropTail

# CBR source
set cbr_source [new Agent/UDP]
$ns attach-agent $n5 $cbr_source
set cbr_traffic [new Application/Traffic/CBR]
$cbr_traffic set packetSize_ 500
$cbr_traffic set rate_ 1.0Mb
$cbr_traffic attach-agent $cbr_source

set cbr_sink [new Agent/Null]
$ns attach-agent $n6 $cbr_sink

# TCP source
set tcp0_source [new Agent/TCP/$agent]
$tcp0_source set fid_ 0
$ns attach-agent $n1 $tcp0_source
if { $agent == "Sack1" } {
    set tcp0_sink [new Agent/TCPSink/Sack1]
} else {
    set tcp0_sink [new Agent/TCPSink]
}
$ns attach-agent $n4 $tcp0_sink
set tcp0_traffic [new Application/FTP]
$tcp0_traffic set packetSize_ 1000
$tcp0_traffic attach-agent $tcp0_source

$ns connect $cbr_source $cbr_sink
$ns connect $tcp0_source $tcp0_sink

# schedule everything...
$ns at 0.0 "$tcp0_traffic start"
$ns at 2.0 "$cbr_traffic start"
$ns at 15.0 "$cbr_traffic stop"
$ns at 15.0 "$tcp0_traffic stop"
$ns at 15.0 "finish"

# go!
$ns run

Listing 8: TCP/UDP Queuing Summary AWK script

# big, nasty awk script to extract loss rate and throughput by time
function reset_counters() {
    cbr_sent = 0;
    cbr_received = 0;
    bytes_received = 0;
    bytes_sent = 0;
}

function print_line() {
    cbr_loss = (cbr_sent > 0) ? (((cbr_sent - cbr_received) / cbr_sent) * 100.0) : (0);
    bytes_loss = (bytes_sent > 0) ? (((bytes_sent - bytes_received) / bytes_sent) * 100.0) : (0);
    cbr_throughput = (cbr_received * 8.0 / time_incr) / 1000.0;
    bytes_throughput = (bytes_received * 8.0 / time_incr) / 1000.0;
    printf("%f %f %f %f %f %f\n", next_time, cbr_throughput, bytes_throughput,
        cbr_loss, bytes_loss, cbr_throughput + bytes_throughput);
    reset_counters();
    next_time = time_incr + next_time
}

BEGIN {
    time_incr = 0.25;
    max_time = 15.0;
    reset_counters();
    next_time = time_incr;
}

/^r.*tcp/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 3) { bytes_received += $6 }
}

/^r.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 5) { cbr_received += $6 }
}

/^\+.*tcp/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 0) { bytes_sent += $6; }
}

/^\+.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 4) { cbr_sent += $6; }
}

END {
    if (next_time <= max_time) { print_line(); }
}

Listing 9: TCP/UDP Queuing gnuplot graph template

set output "graphs/hw1_part2_1_AGENTQUEUING_throughput.png"
set terminal png
set xlabel "Time (s)"
set data style lines
set key outside
set ylabel "Throughput (Kbps)"
set yrange [0:1750]
plot "hw1_part2_1_AGENTQUEUING.data" using 1:3 title "AGENT Throughput", \
     "hw1_part2_1_AGENTQUEUING.data" using 1:2 title "CBR Throughput", \
     "hw1_part2_1_AGENTQUEUING.data" using 1:6 title "Total Throughput"

set output "graphs/hw1_part2_1_AGENTQUEUING_loss.png"
set ylabel "Packet Loss (%)"
set yrange [0:100]
plot "hw1_part2_1_AGENTQUEUING.data" using 1:5 title "AGENT Loss Rate", \
     "hw1_part2_1_AGENTQUEUING.data" using 1:4 title "CBR Loss Rate"

Listing 10: UDP Queuing Meta Tcl script

#!/usr/bin/tclsh

exec ns hw1_part2_2.tcl DropTail
exec ns hw1_part2_2.tcl RED

Listing 11: UDP Queuing NS Tcl script

# summarize the data, generate graphs
proc finish {} {
    global ns nf queuing
    $ns flush-trace
    close $nf
    exec echo "Time  UDP1 Throughput  UDP2 Throughput  UDP3 Throughput  Total Throughput  UDP1 Latency  UDP2 Latency  UDP3 Latency" > hw1_part2_2_${queuing}.data
    exec awk -f hw1_part2_2_summary.awk hw1.out >> hw1_part2_2_${queuing}.data
    exec cat hw1_part2_2_graph_template \
        | sed s/QUEUING/${queuing}/g \
        | gnuplot
    exec mv hw1_part2_2_${queuing}.data data/
    #exec rm hw1.out
    exit 0
}

# process the command line
if { $argc < 1 || $argc > 2 } {
    puts "Command Line Usage: ns hw1_part2_2.tcl queuing"
    exit 1
}

set queuing [lindex $argv 0]

# configure ns
set ns [new Simulator]
set nf [open hw1.out w]
$ns trace-all $nf

set n1 [$ns node]
set n2 [$ns node]
$ns duplex-link $n1 $n2 1.5Mb 10ms $queuing

set packet_sizes(0) 1000
set packet_sizes(1) 1000
set packet_sizes(2) 500
set rates(0) 1Mbps
set rates(1) 1Mbps
set rates(2) 0.6Mbps

# configure three CBR traffic sources and sinks
for { set i 0 } { $i < 3 } { incr i } {
    set cbr_source($i) [new Agent/UDP]
    $cbr_source($i) set fid_ $i
    $ns attach-agent $n1 $cbr_source($i)
    set cbr_traffic($i) [new Application/Traffic/CBR]
    $cbr_traffic($i) set packetSize_ $packet_sizes($i)
    $cbr_traffic($i) set rate_ $rates($i)
    $cbr_traffic($i) attach-agent $cbr_source($i)
    set cbr_sink($i) [new Agent/Null]
    $ns attach-agent $n2 $cbr_sink($i)
    $ns connect $cbr_source($i) $cbr_sink($i)
}

# schedule everyone
$ns at 0.0 "$cbr_traffic(0) start"
$ns at 0.1 "$cbr_traffic(1) start"
$ns at 0.2 "$cbr_traffic(2) start"
$ns at 5.0 "$cbr_traffic(0) stop"
$ns at 5.0 "$cbr_traffic(1) stop"
$ns at 5.0 "$cbr_traffic(2) stop"
$ns at 5.0 "finish"

# and go!
$ns run

Listing 12: UDP Queuing Summary AWK script

# awk script to extract throughput and latency
function reset_counters() {
    bytes_s[0] = 0; bytes_s[1] = 0; bytes_s[2] = 0;
    bytes_r[0] = 0; bytes_r[1] = 0; bytes_r[2] = 0;
    times_s[0] = 0; times_s[1] = 0; times_s[2] = 0;
    times_r[0] = 0; times_r[1] = 0; times_r[2] = 0;
    pack_r[0] = 0; pack_r[1] = 0; pack_r[2] = 0;
}

function print_line() {
    for (i = 0; i < 3; i++) {
        throughput[i] = (bytes_r[i] * 8.0 / time_incr) / 1000.0;
        if (pack_r[i] != 0) {
            latency[i] = 1000 * (times_r[i]) / pack_r[i];
        } else {
            latency[i] = 0;
        }
    }
    printf("%f %f %f %f %f %f %f %f\n", next_time,
        throughput[0], throughput[1], throughput[2],
        throughput[0] + throughput[1] + throughput[2],
        latency[0], latency[1], latency[2]);
    reset_counters();
    next_time = time_incr + next_time
}

BEGIN {
    time_incr = 0.1;
    max_time = 5.0;
    reset_counters();
    next_time = time_incr;
}

/^r.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($4 == 1) {
        bytes_r[$8] += $6;
        times_r[$8] += $2 - times_s[$12];
        pack_r[$8]++;
    }
}

/^\+.*cbr/ {
    if ($2 >= next_time) { print_line(); }
    if ($3 == 0) {
        bytes_s[$8] += $6;
        times_s[$12] = $2;
        pack_s[$8]++
    }
}

END {
    if (next_time <= max_time) { print_line(); }
}

Listing 13: UDP Queuing gnuplot graph template

set output "graphs/hw1_part2_2_QUEUING_throughput.png"
set terminal png
set xlabel "Time (s)"
set data style lines
set key outside
set ylabel "Throughput (Kbps)"
set yrange [0:2000]
plot "hw1_part2_2_QUEUING.data" using 1:2 title "UDP Flow 1 Throughput", \
     "hw1_part2_2_QUEUING.data" using 1:3 title "UDP Flow 2 Throughput", \
     "hw1_part2_2_QUEUING.data" using 1:4 title "UDP Flow 3 Throughput", \
     "hw1_part2_2_QUEUING.data" using 1:5 title "Total Throughput"

set output "graphs/hw1_part2_2_QUEUING_delay.png"
set ylabel "Latency (ms)"
set yrange [0:250]
plot "hw1_part2_2_QUEUING.data" using 1:6 title "UDP Flow 1 Latency", \
     "hw1_part2_2_QUEUING.data" using 1:7 title "UDP Flow 2 Latency", \
     "hw1_part2_2_QUEUING.data" using 1:8 title "UDP Flow 3 Latency"

References

[1] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson. TCP Vegas: New techniques for congestion detection and avoidance. In SIGCOMM, pages 24-35, 1994.

[2] K. Fall and S. Floyd. Simulation-based comparisons of Tahoe, Reno and SACK TCP. Computer Communication Review, 26(3):5-21, July 1996.

[3] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, August 1993.