Paper Review:
Congestion Avoidance and Control

Reviewer: Robert Dugas


This paper addresses how to optimize TCP so that connections use available bandwidth efficiently while avoiding, and recovering from, congestion.


The primary contribution is twofold. The first aspect is Jacobson's identification of faults in the then-current (pre-1988) TCP implementations. The second is the array of seven improvements presented for the new TCP.

Main Ideas

  • A robust protocol should observe 'conservation of packets'.
  • Start slowly to reduce the chance of initial (and then sustained) congestion.
  • Account for variance when estimating the RTT to avoid spurious retransmits.
  • Employ linear increase and exponential backoff to allow the network to unclog.
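The RTT bullet above refers to Jacobson's mean/deviation estimator: instead of multiplying a smoothed mean RTT by a fixed constant, the sender also tracks the mean deviation and folds it into the timeout. A minimal floating-point sketch follows; the paper uses scaled integer arithmetic, and the function name, initial deviation, and the ×4 deviation multiplier here are illustrative choices, not the paper's exact code.

```python
def make_rto_estimator(first_sample):
    # Jacobson-style estimator: low-pass filter the RTT mean (gain 1/8)
    # and the mean deviation (gain 1/4), then derive the retransmit
    # timeout from both, so high-variance paths get a larger timeout.
    srtt = first_sample
    rttvar = first_sample / 2  # illustrative initial deviation

    def update(sample):
        nonlocal srtt, rttvar
        err = sample - srtt
        srtt += err / 8                     # smoothed round-trip time
        rttvar += (abs(err) - rttvar) / 4   # smoothed mean deviation
        return srtt + 4 * rttvar            # retransmit timeout (RTO)

    return update
```

With steady samples the deviation term decays and the timeout tightens toward the mean; a single delayed sample inflates the deviation and widens the timeout, which is exactly what suppresses the spurious retransmits the bullet describes.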

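The window and timer bullets can likewise be sketched as a per-round-trip update rule. This is a simplified illustration, not the 4.3BSD code: the names and constants are mine, and the paper's actual timeout response is harsher (it resets the window to one segment and halves the slow-start threshold), but the shape — doubling below a threshold, linear growth above it, and doubling the retransmit timer on each timeout — matches the ideas above.

```python
MSS = 1  # count the window in segments, for simplicity

def next_cwnd(cwnd, ssthresh, loss):
    # One per-round-trip congestion-window update (illustrative).
    if loss:
        # Congestion signal: back off multiplicatively so queues drain.
        return max(MSS, cwnd // 2)
    if cwnd < ssthresh:
        return cwnd * 2    # slow start: double each round trip
    return cwnd + MSS      # congestion avoidance: linear increase

def backoff_rto(rto, max_rto=64.0):
    # Exponential backoff of the retransmit timer after each timeout.
    return min(rto * 2, max_rto)
```

Slow start gets the window up to the path's capacity quickly, while the linear-increase/backoff pair keeps the connection probing gently once it is there.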

    Many of the ideas in this paper appear to be implementations of other ideas already circulating in academia. However, recognizing what to pull from where, and why it needed to be used, was a major step forward for Internet communications.

    While the paper is largely theoretical (often referencing other papers for 'further details'), the authors do present graphs at the end. These graphs analyze the retransmission rate, the round-trip time estimator, and the bandwidth usage of 8 machines at LBL and UCB sharing a 230.4 Kbps link.

    The 4.3BSD TCP represented a great leap forward in the evolution of the protocol. Further improvements, however, have since been made, and some future work the authors hint at is the implementation of fairness enforcement and early congestion detection at the gateway level.

    The primary lessons presented in this paper are the notion of packet conservation and the necessity of slow-start and exponential-backoff controls.