Paper Review: TCP Vegas: End to End Congestion Avoidance on a Global Internet
Reviewer: Kenneth Chin
This paper attempts to boost the throughput of TCP Reno by modifying its slow-start, congestion-avoidance, and retransmission mechanisms. TCP Vegas shows a promising and substantial improvement in throughput over its predecessor, TCP Reno. The main idea of the paper is that congestion should not be inferred only from lost segments; it should be detected proactively. However, because the network is distributed and dynamic, parameters characterizing it are not easy to obtain, and even those that are available may not accurately reflect the level of congestion in the network. Moreover, having routers generate congestion-notification packets to the senders would waste a lot of bandwidth. In view of this, the round-trip time (RTT) is believed to be the best signal of congestion in the network. Since the RTT is measured at the sender, no extra information needs to flow across the network, and there is no clock-synchronization problem. Therefore, by using the RTT wisely, the sender can estimate the level of congestion and take appropriate actions to converge more smoothly to the network's available bandwidth. The second major idea in the paper is that TCP Vegas keeps track of how much data it queues in the routers, so as to avoid overflowing the routers' buffers and thereby avoid congestion.
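To make the RTT-based idea concrete, here is a minimal sketch (my own illustration, not the authors' code) of the Vegas congestion-avoidance comparison: the expected rate uses the smallest observed RTT, the actual rate uses the current RTT, and their difference estimates how many segments the connection has queued in the routers. The function name, and the assumption that the window is counted in segments, are mine; the alpha/beta thresholds are the ones the paper describes.

```python
ALPHA = 1  # lower threshold on segments queued in the network (assumed value)
BETA = 3   # upper threshold on segments queued in the network (assumed value)

def vegas_adjust(cwnd, base_rtt, current_rtt):
    """Return the new congestion window after one RTT's measurement.

    expected = cwnd / base_rtt    -- throughput if queues were empty
    actual   = cwnd / current_rtt -- measured throughput
    (expected - actual) * base_rtt estimates the extra segments this
    connection is holding in router buffers.
    """
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    extra_segments = (expected - actual) * base_rtt
    if extra_segments < ALPHA:
        return cwnd + 1           # too little data in flight: increase linearly
    if extra_segments > BETA:
        return max(cwnd - 1, 1)   # queues building up: back off linearly
    return cwnd                   # between alpha and beta: hold steady
```

Note how the window changes linearly in both directions: Vegas tries to hover between alpha and beta buffered segments rather than filling the queues until a loss occurs.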
The author conducts experiments, via simulation, to support his argument. Although he compares TCP Vegas to other protocols, he leaves out the major rival of TCP Vegas: TCP Reno. A practical network such as the Internet is composed of different versions of TCP. Although TCP Vegas is backward compatible with TCP Reno and Tahoe, one suspects that its performance in such a mixed environment would fall short of what is claimed in the paper, or could even be worse. Moreover, a virtual connection usually traverses multiple routers, so a single alpha-beta pair may not be able to represent the buffer occupancy at all of them; multiple alpha-beta pairs would certainly complicate the algorithm, since TCP deals with end-to-end connections. Also, the forward route may occasionally differ from the backward route, because routes are not always persistent, and this further complicates the algorithm. Fairness of TCP Vegas, in the face of other TCP versions, suffers as well.
There are of course other modifications in TCP Vegas that help boost network throughput, and they are summarized as follows:
- slow-start: increase (exponentially) the window size every other RTT. In between, the congestion window stays fixed so a valid comparison of the expected and actual rates can be made.
- retransmission: don't rely solely on the coarse-grained timeout; consult a fine-grained timer when a duplicate ACK arrives, so losses are detected and retransmitted sooner.
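The two list items above can be sketched as follows. This is my own toy illustration in Python; the function names and parameters are assumptions, not code from the paper.

```python
def slow_start_step(cwnd, rtt_index):
    """Vegas slow-start: double the window only on every other RTT,
    keeping it fixed in between so that the expected and actual rates
    can be compared over a stable window."""
    return cwnd * 2 if rtt_index % 2 == 0 else cwnd

def retransmit_on_dup_ack(sent_at, now, fine_rto):
    """Vegas retransmission: on a duplicate ACK, check a fine-grained
    RTT-based timer instead of waiting for the coarse-grained timeout."""
    return (now - sent_at) > fine_rto
```

The slow-start change costs one RTT of growth out of every two, but buys a valid rate measurement; the retransmission change lets a loss be repaired as soon as a duplicate ACK confirms it, rather than after the coarse timer fires.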
Nonetheless, TCP Vegas stands out among the various TCP versions; its performance is undoubtedly impressive. This paper deserves a grade of 4.