Paper review: TCP Vegas

 Reviewer: Kevin Hofstra

  1. Is there a more effective TCP protocol than TCP Reno?  Is it possible to reduce the sending rate before actually dropping packets?  What is the ideal number of router buffers for a host to occupy in equilibrium?
  2. An evaluation of the efficiency gains of the newly proposed Vegas protocol: an attempt to maximize TCP's bandwidth usage in an uncongested environment without losing packets.  The ideas are even more important today, as internet users demand more bandwidth even though some links along their round-trip path are still inadequate.
  3. A.  By sampling the RTT and comparing it to an average over time, the sender can determine whether it is moving closer to or further from congesting the network, without actually having to experience packet loss.

B.     Each sending host in equilibrium should occupy at least one buffer within the routers so that, if the amount of traffic decreases, it has a packet ready to send.  It should not occupy more than three buffers, because routers with limited buffer space would then be crowded out by only a few hosts.
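The mechanism described in A and B can be sketched as follows. This is an illustrative sketch of the Vegas congestion-avoidance rule, not the paper's exact code; the thresholds alpha=1 and beta=3 (measured in packets queued in the network) correspond to the one-to-three buffer occupancy described above, and the variable names are my own.

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    """One congestion-avoidance step of TCP Vegas (illustrative sketch).

    cwnd:       congestion window, in packets
    base_rtt:   smallest RTT observed (estimate of the uncongested path)
    rtt:        current sampled RTT
    alpha/beta: lower/upper bounds on extra packets queued in routers
    """
    expected = cwnd / base_rtt             # throughput if nothing is queued
    actual = cwnd / rtt                    # measured throughput
    diff = (expected - actual) * base_rtt  # ~ packets sitting in router buffers

    if diff < alpha:
        return cwnd + 1    # under-using the path: increase linearly
    elif diff > beta:
        return cwnd - 1    # queues are building: back off before any loss
    return cwnd            # between alpha and beta: hold steady
```

When the sampled RTT equals the base RTT, diff is zero and the window grows; as queuing delay inflates the RTT, diff rises past beta and the window shrinks, all without a dropped packet.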

C.      The slow-start-with-congestion-avoidance algorithm that Vegas uses is a Vegas* modification that uses the spacing between ACKs to actively probe for the congestion limit without having to lose packets, much as its linear-increase algorithm does.
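The modified slow start can be sketched as below. Per the paper, the window doubles only every other RTT so that in the intervening RTT the sending rate is stable and the expected-vs-actual throughput comparison remains valid; the exit threshold gamma and the function shape are my own illustrative assumptions.

```python
def vegas_slow_start_step(cwnd, base_rtt, rtt, rtt_count, gamma=1):
    """One RTT of Vegas' modified slow start (illustrative sketch).

    Doubles the window only on even-numbered RTTs; on the others the
    window is held fixed so a clean throughput sample can be taken.
    Returns (new_cwnd, still_in_slow_start). gamma is an assumed
    exit threshold, in packets queued in the network.
    """
    expected = cwnd / base_rtt
    actual = cwnd / rtt
    diff = (expected - actual) * base_rtt

    if diff > gamma:
        # Queuing detected: switch to linear congestion avoidance
        # without having to lose a packet first.
        return cwnd, False
    if rtt_count % 2 == 0:
        return cwnd * 2, True   # double only every other RTT
    return cwnd, True           # hold the window to get a clean sample
```

This shows the key contrast with Reno's slow start, which keeps doubling until a loss occurs.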

D.     When tested on the actual internet, Vegas outperforms Reno 92% of the time.  It does not interfere with legacy congestion-avoidance protocols, and Vegas is also considered just as fair as Reno.

  1. Critique the main contribution
    • Significance- 2 The article is effective in showing that Vegas achieves greater throughput, less packet loss, and fewer occurrences of congestion, but it is a modification of Reno and should not be considered groundbreaking.
    • Convincing- 3 They offer multiple examples and graphs to demonstrate the superiority of Vegas over Reno.  Test cases in theory, in lab simulations, and on the actual internet seem to back their argument.
  2. System researchers and builders should recognize that it is often possible to improve the convergence, and decrease the deviation, of an algorithm that approaches a constant.  In the case of network congestion, the host can see signs of approaching congestion in the form of an increasing RTT.  This allows congestion to be avoided without having to reach it and back off.  The result is much less packet loss, and it reduces the importance of the recovery algorithms because hosts do not have to recover from large deviations.  The increased responsiveness comes from sensing the magnitude of congestion rather than the all-or-nothing signal of packet loss.