TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms
- State the problem the paper is trying to solve.
The main problem the paper addresses is the lack of formal documentation for four intertwined algorithms that are contained in modern implementations of TCP.
- State the main contribution of the paper: solving a new problem, proposing a
new algorithm, or presenting a new evaluation (analysis). If a new problem, why
was the problem important? Is the problem still important today? Will the
problem be important tomorrow? If a new algorithm or new
evaluation (analysis), what are the improvements over previous algorithms or
evaluations? How do they come up with the new algorithm or evaluation?
The main contribution of the paper is that it documents four intertwined algorithms in TCP that had never been formally documented before. This is not a new problem; documentation is important so that standards can propagate through the Internet community and allow for their rapid and widespread implementation. These algorithms are still relevant today because they remain incorporated into TCP to prevent congestion and to sustain throughput on the high-traffic modern Internet. Slow start and congestion avoidance were implemented due to RFC 1122; fast retransmit and fast recovery were implemented after RFC 1122. They were originally developed by Van Jacobson, and the author is documenting how they work.
Slow start corrects for problems that arise when a sender injects multiple segments, up to the window size advertised by the receiver, into a network of heterogeneous routers and slower links. Congestion avoidance deals with the packet loss that occurs when data arrives on a big pipe and gets sent out a smaller pipe, or when multiple input streams arrive at a router whose output capacity is less than the sum of the inputs. Fast retransmit takes three or more duplicate ACKs in a row as a signal that a segment has been lost, and initiates retransmission without waiting for the retransmission timer to expire. Fast recovery then starts congestion avoidance, rather than slow start, after fast retransmit. This allows high throughput under moderate congestion, especially for large windows.
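The interplay between slow start and congestion avoidance can be sketched as a per-ACK update to the congestion window. The `next_cwnd` helper, the 1460-byte MSS, and the initial ssthresh below are illustrative assumptions, not values taken from the paper; the growth rules follow the behavior the paper describes:

```python
def next_cwnd(cwnd, ssthresh, mss):
    """Return the congestion window after one new ACK.

    Below ssthresh: slow start, cwnd grows by one MSS per ACK,
    which doubles it every round trip (exponential growth).
    At or above ssthresh: congestion avoidance, cwnd grows by
    roughly MSS*MSS/cwnd per ACK, about one MSS per round trip
    (linear growth).
    """
    if cwnd < ssthresh:
        return cwnd + mss                        # slow start
    return cwnd + max(1, mss * mss // cwnd)      # congestion avoidance

# Illustrative trace: cwnd in segments over 20 consecutive ACKs.
mss = 1460
cwnd, ssthresh = mss, 8 * mss
trace = []
for _ in range(20):
    trace.append(cwnd // mss)
    cwnd = next_cwnd(cwnd, ssthresh, mss)
print(trace)
```

The trace shows the window climbing one full segment per ACK until it reaches ssthresh, then creeping up by only a fraction of a segment per ACK afterward.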
- Summarize the (at most) 3 key main ideas (each in 1 sentence.)
The three key main ideas are: (1) Slow start increases the congestion window exponentially, opening it by one segment for each successful ACK received. (2) In contrast, congestion avoidance increases the congestion window linearly and is used after packet loss signals congestion. (3) Fast retransmit and fast recovery handle the case in which a segment is lost rather than merely reordered, allowing the lost segment to be retransmitted quickly and normal transmission to resume quickly.
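Idea (3) can be sketched as an ACK-event handler. The `on_ack` function, the `state` dictionary, and its field names are illustrative assumptions; the window adjustments follow the fast retransmit / fast recovery rules summarized above (halve the window and retransmit on the third duplicate ACK, then continue in congestion avoidance rather than slow start):

```python
DUP_ACK_THRESHOLD = 3  # three duplicate ACKs in a row signal a lost segment

def on_ack(ack_no, state):
    """Hypothetical ACK handler sketching fast retransmit and fast recovery."""
    if ack_no == state["last_ack"]:
        state["dup_acks"] += 1
        if state["dup_acks"] == DUP_ACK_THRESHOLD:
            # Fast retransmit: resend the missing segment immediately,
            # without waiting for the retransmission timer to expire.
            state["retransmit"] = ack_no
            # Fast recovery: halve the window instead of collapsing to
            # one segment and restarting slow start.
            state["ssthresh"] = max(state["cwnd"] // 2, 2 * state["mss"])
            state["cwnd"] = state["ssthresh"] + 3 * state["mss"]
    else:
        # New data acknowledged: leave fast recovery, deflating cwnd
        # back to ssthresh so congestion avoidance takes over.
        if state["dup_acks"] >= DUP_ACK_THRESHOLD:
            state["cwnd"] = state["ssthresh"]
        state["last_ack"] = ack_no
        state["dup_acks"] = 0
    return state

# Illustrative run: three duplicate ACKs trigger retransmission,
# then a new ACK deflates the window to ssthresh.
s = {"last_ack": 100, "dup_acks": 0, "cwnd": 11680,
     "ssthresh": 65535, "mss": 1460, "retransmit": None}
for a in [100, 100, 100]:
    s = on_ack(a, s)
s = on_ack(200, s)
```

Because the window is only halved rather than reset, the sender keeps data flowing, which is exactly why fast recovery yields higher throughput than falling back to slow start after every loss.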
- Critique the main contribution
A criticism of the contribution is that it does not account for security concerns that may arise from these algorithms. Someone might exploit these congestion-management algorithms to deliberately stall an Internet sender or receiver. The fact that these issues are left out is mentioned at the end of the paper.
- Rate the significance of the paper on a scale of 5
(breakthrough), 4 (significant contribution), 3 (modest contribution), 2
(incremental contribution), 1 (no contribution or negative contribution).
Explain your rating in a sentence or two.
I would give this paper a rating of 3 and see it as a modest contribution because, by outlining these four algorithms clearly and succinctly, it allows both their rapid implementation and easy improvement by critics who are exposed to the details. This documentation was clearly needed given the congestion problems that prompted the creation of the algorithms, and their usefulness is evident from their rapid adoption even before they were widely documented.
- Rate how convincing the methodology is: how do the authors justify the solution approach
or evaluation? Do the authors use arguments, analyses, experiments, simulations, or a combination of
them? Do the claims and conclusions follow from the arguments, analyses or experiments? Are the
assumptions realistic (at the time of the research)? Are the assumptions still valid today? Are the
experiments well designed? Are there different experiments that would be more convincing? Are there
other alternatives the authors should have considered? (And, of course, is the paper free of errors?)
To be convinced of the effectiveness of these algorithms, one only needs to look at their rapid deployment into common TCP implementations such as BSD. Not only were they implemented before they were documented, but their widespread dispersal was driven by RFC 1122's requirement for their inclusion, which is evidence enough that they are very useful. This real-world empirical evidence suffices to show that these algorithms should be seen as important.
- What is the most important limitation of the approach?
The most important limitation of this approach is that it does not compare these algorithms with alternatives. The paper assumes they are the best only because they are the most widespread. That is frequently the case, but not necessarily always. The status quo may be seen as a good solution, but there might be a better solution that no one has considered. Whether anyone seeks that better solution depends on whether the current solution meets the demands currently placed upon it.
- What lessons should researchers and builders take away from this work. What (if any)
questions does this work leave open?
The lesson researchers should take from this work is that effective algorithms can be developed in the real world as they are needed. An algorithm may be implemented long before it is documented and may prove to be a good solution through wide acceptance before it is carefully analyzed. The questions this work leaves open are, of course: what would happen if these algorithms were evaluated against other algorithms that might be used in their place? How do they perform against similar alternatives? And, as mentioned earlier, what security issues do the current implementations face?