This paper addresses the problem of combining congestion control mechanisms with real-time streaming data over the Internet. The authors offer a mechanism called hierarchical encoding that allows the media server to send layered encodings of each stream: more layers are transmitted as bandwidth increases, and fewer layers as bandwidth decreases. This mechanism works with any AIMD, TCP-friendly congestion control scheme.
The core of this paper focuses on two issues. On the server side, the sender needs to know how and when to add a layer as bandwidth increases, and how and when to drop a layer as bandwidth decreases. On the client side, the receiver needs to know how much of each layer to buffer in order to provide smooth transitions when layers are added or dropped, so that the number of layers doesn't fluctuate as quickly as the congestion behavior of the network. The paper proposes concrete criteria and algorithms for both of these decisions.
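To make the sender-side decision concrete, here is a minimal sketch of an add/drop rule of the kind the paper describes. All names, the per-layer rate model, and the thresholds are my own illustrative assumptions, not the authors' actual criteria:

```python
# Hypothetical sketch of a sender-side layer add/drop decision.
# Assumes every layer consumes the same bandwidth (layer_rate); the paper's
# actual criteria are more involved (e.g., they also consult receiver
# buffering state before adding a layer).

class LayeredSender:
    def __init__(self, layer_rate, max_layers):
        self.layer_rate = layer_rate    # bandwidth per layer (e.g., bytes/s)
        self.max_layers = max_layers
        self.active_layers = 1          # always send at least the base layer

    def adapt(self, available_bw):
        """Add a layer when bandwidth can sustain one more; drop one when
        the current set of layers can no longer be sustained."""
        needed = self.active_layers * self.layer_rate
        if (available_bw >= needed + self.layer_rate
                and self.active_layers < self.max_layers):
            self.active_layers += 1     # bandwidth increased: add a layer
        elif available_bw < needed and self.active_layers > 1:
            self.active_layers -= 1     # bandwidth decreased: drop a layer
        return self.active_layers
```

For example, with `layer_rate=100`, a rate estimate of 350 lets the sender climb to three layers over successive adaptation steps, and a drop to 150 forces a layer to be shed.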
I give this paper a rating of 3 - modest contribution. This paper attempts to provide a good solution to the streaming media and congestion control problem. The authors have some genuinely good ideas, and the equations and algorithms provided give a solid foundation for further research in this area.
There are, of course, problems with this paper. For example, the second criterion the sender uses to decide whether to add a layer assumes that the sender knows the receiver's consumption rate and buffering state. Is this information necessarily obtainable? If it is sent over the network, is it guaranteed to reach the sender before it becomes outdated? How often should it be transmitted? The smoothing model is also problematic because Kmax is a constant: Internet conditions change constantly, so what is a good value for Kmax? Moreover, the larger Kmax is, the more space is needed for buffering on the client side, and that space is not unlimited. Finally, as always, there is the concern that the simulation is not an accurate enough model of the "real" Internet, and there is really no good way to model the Internet with 100% accuracy.
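The Kmax concern above can be made concrete with a small sketch of the smoothing criterion as I understand it: the receiver should hold enough buffered data per layer to keep playing through Kmax consecutive congestion back-offs. The function name and parameters are hypothetical, chosen only to illustrate why a fixed Kmax directly scales the required buffer space:

```python
# Illustrative check (not the paper's exact formula): each active layer
# needs at least k_max back-off periods' worth of data buffered, so the
# required buffer grows linearly with the constant k_max.

def can_sustain_layers(buffered, consumption_rate, backoff_duration, k_max):
    """Return True if every active layer has enough buffered data to keep
    playing through k_max back-offs of the given duration."""
    needed = k_max * backoff_duration * consumption_rate
    return all(per_layer >= needed for per_layer in buffered)
```

Doubling `k_max` doubles `needed` for every layer, which is exactly the client-side space cost the review questions.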