Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks
- State the problem the paper is trying to solve.
The paper analyzes the class of increase/decrease congestion avoidance
algorithms and addresses the problem of choosing among them, arriving at a
solution that uses simple additive increase and multiplicative decrease.
- State the main contribution of the paper: solving a new problem, proposing a
new algorithm, or presenting a new evaluation (analysis). If a new problem, why
was the problem important? Is the problem still important today? Will the
problem be important tomorrow? If a new algorithm or new
evaluation (analysis), what are the improvements over previous algorithms or
evaluations? How do they come up with the new algorithm or evaluation?
The main contribution of this paper is that it abstractly characterizes a wide
class of increase/decrease algorithms and compares them using several different
performance metrics. The key metrics it defines are efficiency, fairness,
convergence time, and size of oscillations. This is a very novel analysis
because it gives a way to measure the effectiveness of future algorithms aimed
at this general problem of congestion avoidance. It will still be important
tomorrow: as new algorithms are introduced, their effectiveness and strengths
can be measured by these elegant metrics. This paper provided a way to classify
and test all such algorithms.
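As a concrete illustration of one of these metrics, the fairness index the paper defines can be computed for any allocation; the code itself is my own sketch, not from the paper:

```python
# Jain's fairness index, as defined in the paper, for an allocation
# x_1..x_n: it equals 1 for perfectly equal shares and falls toward
# 1/n as the allocation concentrates on a single user.

def fairness_index(xs):
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

fairness_index([1.0, 1.0, 1.0, 1.0])   # equal shares -> 1.0
fairness_index([4.0, 0.0, 0.0, 0.0])   # one user hogs the link -> 0.25
```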
- Summarize the (at most) 3 key main ideas (each in 1 sentence.)
The three key main ideas are:
(1) In order to satisfy the requirements of
distributed convergence to efficiency and fairness without truncation, the linear
decrease policy should be multiplicative, and the linear increase policy should
always have an additive component; optionally it may have a multiplicative
component with a coefficient no less than one.
(2) For the linear controls with truncation, the increase and decrease policies can
each have both additive and multiplicative components.
(3) For both feasibility and optimal convergence to fairness, the increase policy
should be additive, and the decrease policy should be multiplicative.
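The linear controls behind these ideas have the form x(t+1) = a + b·x(t), selected by the binary feedback. A minimal sketch of how additive increase / multiplicative decrease instantiates this form (the parameter values are illustrative assumptions, not ones the paper prescribes):

```python
# One step of the general linear control x(t+1) = a + b*x(t) from the
# paper, driven by binary overload feedback. The AIMD defaults below
# (a_i=1, b_i=1, a_d=0, b_d=0.5) are illustrative values.

def linear_control(x, overloaded, a_i=1.0, b_i=1.0, a_d=0.0, b_d=0.5):
    if overloaded:                      # feedback y = 1: decrease load
        return a_d + b_d * x            # multiplicative decrease
    return a_i + b_i * x                # additive increase (b_i = 1)

x = linear_control(1.0, overloaded=False)  # additive increase: 1.0 -> 2.0
x = linear_control(x, overloaded=True)     # multiplicative decrease: 2.0 -> 1.0
```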
- Critique the main contribution
One criticism of the contributions of this paper is that the problem it solves is a
generic, highly abstracted one. In order to apply the results of the paper to
solve the decentralized congestion control problem in real networks, many additional
practical issues must be taken into consideration. In addition, the paper itself
mentions that it does not explore the effects of delayed feedback, the marginal
utility of increased bits of feedback, whether or not it is worthwhile to guess
the current number of users, and the impact of asynchronous communication. The last
of these seems the most important, since asynchrony is a common phenomenon in a
network as diverse and interconnected as the Internet.
- Rate the significance of the paper on a scale of 5
(breakthrough), 4 (significant contribution), 3 (modest contribution), 2
(incremental contribution), 1 (no contribution or negative contribution).
Explain your rating in a sentence or two.
I give this paper a rating of 5 because it does a really great job of defining the
field of congestion avoidance control algorithms. It was a seemingly difficult and
complex problem, and the authors did a great job of analyzing it and abstracting it
to a level at which it is easy to develop a solution as well as to study and test
future solutions for their effectiveness. Much of their paper was definitive,
especially in terms of algorithm metrics, and this is really important groundwork
for continuing novel contributions to this field and for understanding the general
framework of the problem.
- Rate how convincing the methodology is: how do the authors justify the solution
approach or evaluation? Do the authors use arguments, analyses, experiments, simulations, or a
combination of them? Do the claims and conclusions follow from the arguments, analyses or
experiments? Are the assumptions realistic (at the time of the research)? Are the assumptions
still valid today? Are the experiments well designed? Are there different experiments
that would be more convincing? Are there other alternatives the authors should have
considered? (And, of course, is the paper free of methodological errors.)
Their methodology was rather convincing. The authors set out first by characterizing their
problem as a congestion avoidance problem in which they need an algorithm that was
efficient, fair, converged quickly, and relatively smooth in its response. They defined
what these terms meant and how they were to be measured. They then set the four
combinations of increase/decrease algorithms using multiplication or addition against this
backdrop of metrics and compared which ones would fare the best, based first on the
criterion of which ones would converge. After doing this, they wished to identify the
subset of those algorithms that would satisfy their distributedness criterion. Finally,
they looked at the even smaller subset of those controls that would represent the optimal
trade-off of responsiveness and smoothness. They proved each of these steps sequentially
using purely mathematical proofs, and in the end demonstrated that the additive increase
and multiplicative decrease algorithm would fare the best under all the desired criteria
they had specified earlier. Their findings are both convincing and relevant today because
of their complete reliance on mathematical proofs instead of empirical data.
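To see the convergence result their proofs establish in action, here is a toy simulation (my own illustration, not an experiment from the paper) of two users applying additive increase / multiplicative decrease against the same binary overload feedback; the capacity, alpha, and beta values are assumptions:

```python
# Toy two-user AIMD simulation: both users react to the same binary
# overload signal, starting from deliberately unequal shares. The gap
# between the users shrinks by a factor of beta on every decrease, so
# the shares converge toward equality (fairness) while the total
# oscillates around the capacity (efficiency).

def simulate_aimd(x1, x2, capacity=100.0, alpha=1.0, beta=0.5, steps=200):
    for _ in range(steps):
        if x1 + x2 > capacity:              # overload: multiplicative decrease
            x1, x2 = beta * x1, beta * x2
        else:                               # underload: additive increase
            x1, x2 = x1 + alpha, x2 + alpha
    return x1, x2

x1, x2 = simulate_aimd(10.0, 80.0)  # unequal start; shares end up nearly equal
```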
- What is the most important limitation of the approach?
The key limitation of such an approach, however, is that since it is not based on
empirical data, it is hard to show its effectiveness in real-world networks. It
provides great inspiration for the design of an effective algorithm, but it does not
actually produce one, nor does it show how well one will fare in the real world.
Because of the abstracted nature of such a solution, there may be factors, not
accounted for in the generality of the problem, that may prevent it from being a
good solution in practice.
- What lessons should researchers and builders take away from this work. What
(if any) questions does this work leave open?
The key lesson that researchers and builders should take away from this work is that a
simple additive increase and multiplicative decrease algorithm satisfies the sufficient
conditions for convergence to an efficient and fair state regardless of the starting state
of the network. The questions it leaves open are: How does delayed feedback affect the
control? What is the marginal utility of increased bits of feedback? Is it worthwhile to
guess the current number of users n? What is the impact of asynchronous operation?
Finally, perhaps most importantly, how well does this algorithm fare when implemented in
the real world? What is its performance when implemented on the Internet?