Paper review: TCP Congestion Control with a Misbehaving Receiver (SCWA99)
Reviewer: Hai Fang (hfang@acm.org)
- Goal
To explore the possibility that a misbehaving receiver can drive a standard TCP sender
to transmit faster, without losing end-to-end reliability.
- Contribution
This paper describes scenarios in which clients whose interests diverge from the
sender's can misbehave and benefit from the current TCP congestion control
mechanisms. This aspect is easily overlooked by protocol designers, who care more
about end-to-end reliability. The solutions proposed in the paper are clean, and the
experiments against live Web sites demonstrate the practical significance of the
problem.
- Main ideas
- TCP, which was originally designed for a cooperative environment, contains
several vulnerabilities that an unscrupulous receiver can exploit to obtain
improved service at the expense of other network clients or to implement a DoS
attack.
- The design of TCP can be modified, without changing the nature of the
congestion control function, to eliminate these vulnerabilities.
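The first idea can be made concrete with the paper's ACK division attack: a sender in slow start that grows cwnd by one MSS for every ACK covering new data can be driven faster by a receiver that splits each segment's acknowledgment into many partial ACKs. The following is a minimal illustrative simulation, not the authors' code; the function name and parameters are my own.

```python
MSS = 1460  # assumed maximum segment size in bytes

def slow_start_cwnd(acks_per_segment, segments):
    """Simulate the exploitable slow-start rule: cwnd += MSS for
    every ACK that appears to cover new data, regardless of how
    many bytes the ACK actually acknowledges."""
    cwnd = MSS
    for _ in range(segments * acks_per_segment):
        cwnd += MSS  # one full increment per ACK received
    return cwnd

# A well-behaved receiver sends one ACK per segment; a misbehaving
# one splits each acknowledgment into 10 partial ACKs and inflates
# the sender's window roughly 10x faster over the same 10 segments.
normal = slow_start_cwnd(1, 10)    # 16060 bytes
attack = slow_start_cwnd(10, 10)   # 147460 bytes
```

The attack works because the sender counts ACKs rather than acknowledged bytes, so the receiver controls how much "credit" each segment yields.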
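The second idea, fixing the vulnerability without changing the congestion control function, can be sketched for the ACK division case: grow cwnd in proportion to the new bytes each ACK covers, capped at one MSS per ACK, so splitting an acknowledgment yields no extra credit. This is a hypothetical sketch of that guideline, not the paper's implementation.

```python
MSS = 1460  # assumed maximum segment size in bytes

def byte_counting_cwnd(acks):
    """Grow cwnd by the number of NEW bytes each cumulative ACK
    covers, capped at one MSS per ACK. ACK division then gains
    nothing: total credit depends only on bytes acknowledged."""
    cwnd = MSS
    highest_acked = 0
    for ack in acks:
        new_bytes = max(0, ack - highest_acked)
        highest_acked = max(highest_acked, ack)
        cwnd += min(new_bytes, MSS)
    return cwnd

# Ten full-segment ACKs vs. the same byte range split into 100
# tiny ACKs: both yield the same final window.
coarse = byte_counting_cwnd([i * MSS for i in range(1, 11)])
fine = byte_counting_cwnd([i * MSS // 10 for i in range(1, 101)])
```

Because the window grows with acknowledged data rather than ACK count, the sender's behavior toward honest receivers is unchanged while the receiver's incentive to split ACKs disappears.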
- Evaluation
- Significance rating: 3
TCP congestion control mechanisms affect Internet users more and more as the
Internet expands. Providing fairness in addition to reliable communication is an
important topic in the future development of TCP.
- Convincing rating
The authors analyze the design of the TCP protocol and its congestion control
mechanisms, and find several bugs with respect to fairness among different clients;
they also run experiments against live Web servers to demonstrate the feasibility
of their attacks. Three principles are cited as criteria for safer designs, from
which the authors derive fixes for these deficiencies.
- Limitation
Although the authors found potential problems in TCP congestion control and
designed some delicate attacks, their method is still somewhat ad hoc. An interesting
question to me is: can we formalize the TCP protocol, including the congestion
control part, and systematically check the properties we care about? If the answer
is yes, perhaps the most difficult part of the verification is that some properties,
such as quantitative parameters, are not easily handled by the checker.
- Conclusion
Many protocols/proposals are designed for simplicity and efficiency; as development
continues, the restriction of backward compatibility often leads to subtle
defects. This kind of problem is liable to be overlooked by the designers of
new proposals/extensions. Formalizing the semantics of protocols and the
related properties may help to solve these problems to some degree.
10/10/01