Congestion nearly brought the Internet down in October 1986, and this paper proposes a packet conservation principle to prevent such a collapse from recurring. The principle states that a connection in equilibrium runs conservatively: a new packet is not put into the network until an old packet leaves. The paper addresses the three ways packet conservation can fail: the connection never reaches equilibrium, a sender injects a new packet before an old one has exited, or resource limits along the path prevent equilibrium from being reached.
The paper addresses the first problem with slow start, which starts (and restarts) a connection with a congestion window of one packet. For each packet acknowledged, the window grows by one packet, so it roughly doubles every round trip. The sender keeps at most the minimum of the congestion window and the receiver's advertised window in flight.
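The growth pattern above can be sketched in a few lines; this is a simplified model (windows counted in whole packets, one ACK per packet, no losses), not the paper's implementation:

```python
def slow_start(rwnd: int, rounds: int) -> list[int]:
    """Window in flight per round trip under slow start.

    Each ACK adds one packet to the congestion window, so the window
    doubles every RTT; the sender is always capped by the receiver's
    advertised window rwnd.
    """
    cwnd = 1
    sizes = []
    for _ in range(rounds):
        sizes.append(min(cwnd, rwnd))
        cwnd += cwnd  # one increment per ACK => doubling per round trip
    return sizes

# e.g. slow_start(64, 5) -> [1, 2, 4, 8, 16]
```

With a small receiver window the cap takes over: `slow_start(8, 5)` yields `[1, 2, 4, 8, 8]`.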
To decide when to retransmit a packet, the paper argues that the retransmit timer should account for the variance of the round-trip time, not just its mean. For every packet acknowledged, both the round-trip estimate and the retransmit time-out are updated. A good time-out value is crucial: too large a time-out leaves bandwidth idle after a loss, while too small a time-out triggers spurious retransmissions that add to congestion.
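The estimator the paper describes keeps exponentially weighted averages of both the round-trip time and its mean deviation, and sets the time-out to the mean plus a multiple of the deviation. A minimal sketch, assuming the conventional gains of 1/8 and 1/4 and the factor-of-4 deviation multiplier:

```python
def update_rto(srtt: float, rttvar: float, sample: float,
               alpha: float = 0.125, beta: float = 0.25):
    """One update of the smoothed RTT, its mean deviation, and the RTO.

    srtt   -- current smoothed round-trip estimate
    rttvar -- current smoothed mean deviation
    sample -- newly measured round-trip time
    """
    err = sample - srtt
    srtt += alpha * err                    # EWMA of the mean
    rttvar += beta * (abs(err) - rttvar)   # EWMA of the deviation
    rto = srtt + 4 * rttvar                # time-out covers mean + 4 deviations
    return srtt, rttvar, rto
```

A stable connection converges: a sample equal to the current estimate leaves `srtt` unchanged and shrinks `rttvar`, tightening the time-out.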
To deal with congestion once it occurs, the paper advances additive increase and multiplicative decrease of the window size: on a time-out, the window is halved; on each acknowledgement, it grows by roughly 1/window, i.e. one packet per round trip.
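The two update rules can be written as a pair of per-event functions; this is a sketch of the policy, with the floor of one packet an assumption for illustration:

```python
def on_ack(cwnd: float) -> float:
    """Additive increase: ~1/cwnd per ACK, so +1 packet per round trip."""
    return cwnd + 1.0 / cwnd

def on_timeout(cwnd: float) -> float:
    """Multiplicative decrease: halve the window, never below one packet."""
    return max(1.0, cwnd / 2.0)
```

The asymmetry is the point: the window backs off quickly when the network signals overload (a time-out) and probes for spare capacity only slowly.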
This paper is concise and tackles TCP congestion by addressing three problems that can cause it. However, it leaves several questions open, such as why additive increase with multiplicative decrease works where other policies would not. Also, slow start must wait for a time-out before it can restart, and that waiting can itself lead to further congestion. I give this paper a rating of 3.