Paper review:
Reviewer:
Mike Liu
- State the problem the paper is trying to solve.
The main problem the paper addresses is how to design and evaluate an architecture for
supporting group communication applications over the Internet in which all multicast
functionality is pushed to the edge.
- State the main contribution of the paper: solving a new problem, proposing a
new algorithm, or presenting a new evaluation (analysis). If a new problem, why
was the problem important? Is the problem still important today? Will the
problem be important tomorrow? If a new algorithm or new
evaluation (analysis), what are the improvements over previous algorithms or
evaluations? How do they come up with the new algorithm or evaluation?
The main contribution of this paper is that it presents an architecture called End
System Multicast and evaluates it comprehensively against the performance requirements
of real-world applications in a dynamic and heterogeneous environment. This is a
relatively new approach to the problem of multicast on the Internet: rather than trying
to extend the IP multicast protocol, it solves the problem at the end systems. The paper
also evaluates the architecture in the context of audio and video conferencing, which
has high bandwidth and low latency requirements and will become increasingly popular as
the functionality of the Internet expands. Finally, the authors analyze their system in
terms of adaptation to both latency and bandwidth metrics, whereas prior work considered
only one or the other in isolation.
- Summarize the (at most) 3 key main ideas (each in 1 sentence.)
The three key ideas are:
(1)
End System Multicast is a viable architecture for enabling performance-demanding audio
and video applications in dynamic and heterogeneous Internet settings.
(2)
In order to achieve good performance for conferencing applications, it is critical to
consider both bandwidth and latency while constructing overlays (see the sketch after
this list).
(3)
The paper raises three open issues: (a) To construct overlays optimized for
conferencing, the authors employ active end-to-end measurements, and they were able to
restrict the overhead to about 10-15% for groups as large as twenty members, but the
issue remains whether these overhead results scale to larger group sizes. (b) In the
absence of initial network information, self-organizing protocols take some time to
discover network characteristics and to converge to efficient overlays; while this may
be acceptable for conferencing applications, which typically have long durations, it may
become an issue for other real-time applications. (c) The authors' current protocol is
designed to adapt to network congestion on the time scale of tens of seconds; while
adaptation at such a time scale may be acceptable in less dynamic environments,
transient degradation of application performance may become an important issue in
highly dynamic environments.
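To illustrate key idea (2), here is a minimal, hypothetical sketch of one way an overlay
protocol could rank candidate parents using both measured bandwidth and latency: prefer
higher bandwidth, and break near-ties with lower latency. This is not the authors'
actual protocol; the names, the bandwidth-tolerance threshold, and the example numbers
are assumptions made purely for illustration.

    from dataclasses import dataclass

    # Candidates whose bandwidth is within this fraction of the best are treated as
    # roughly equivalent; the tolerance value is an illustrative assumption.
    BANDWIDTH_TOLERANCE = 0.1

    @dataclass
    class Candidate:
        name: str
        bandwidth_kbps: float  # estimated via active end-to-end measurements
        latency_ms: float      # measured round-trip delay

    def choose_parent(candidates):
        """Pick an overlay parent considering both bandwidth and latency.

        Bandwidth is the primary criterion (conferencing needs sustained
        throughput); among candidates with comparable bandwidth, the one with
        the lowest latency wins.
        """
        if not candidates:
            return None
        best_bw = max(c.bandwidth_kbps for c in candidates)
        near_best = [c for c in candidates
                     if c.bandwidth_kbps >= (1 - BANDWIDTH_TOLERANCE) * best_bw]
        return min(near_best, key=lambda c: c.latency_ms)

    if __name__ == "__main__":
        peers = [
            Candidate("A", bandwidth_kbps=900, latency_ms=80),
            Candidate("B", bandwidth_kbps=870, latency_ms=30),  # near-tie in bandwidth, lower latency
            Candidate("C", bandwidth_kbps=400, latency_ms=10),  # lowest latency, too little bandwidth
        ]
        print(choose_parent(peers).name)  # prints "B"

Bandwidth alone would pick A and latency alone would pick C; considering both picks B,
which matches the review's point that neither metric in isolation is sufficient for
conferencing.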
- Critique the main contribution
- Rate the significance of the paper on a scale of 5
(breakthrough), 4 (significant contribution), 3 (modest contribution), 2
(incremental contribution), 1 (no contribution or negative contribution).
Explain your rating in a sentence or two.
I give this paper a rating of 4 because it presents a strong initial proposal and a very
comprehensive analysis of the factors that should be considered when evaluating such
schemes, in the context of rather demanding conferencing applications and the highly
heterogeneous nature of the real Internet. This analysis will be especially useful for
testing future systems and for making End System Multicast practically workable and
acceptable for pervasive use.
- Rate how convincing the methodology is: how do the authors
justify the solution approach or evaluation? Do the authors use arguments,
analyses, experiments, simulations, or a combination of them? Do the claims
and conclusions follow from the arguments, analyses or experiments? Are the
assumptions realistic (at the time of the research)? Are the assumptions still
valid today? Are the experiments well designed? Are there different
experiments that would be more convincing? Are there other alternatives the
authors should have considered? (And, of course, is the paper free of
methodological errors.)
The authors' methodology was to present a detailed analysis of how to test a scheme
that would allow demanding conferencing applications to run given the extremely dynamic
and heterogeneous nature of the Internet. They run a large number of Internet
experiments and carefully aggregate the results to give an accurate picture of the
average performance and costs of their scheme.
- What is the most important limitation of the approach?
The biggest limitation of this approach is that, since it is crafted to test the
performance of a specific type of application, in this case audio and video
conferencing, it does not demonstrate the performance of End System Multicast under
the demands of other types of applications, which may not have the same combination of
bandwidth and latency requirements. In addition, the authors themselves mention that,
in order to make their results as realistic as possible, they were not able to keep
their experiments as controlled as would be desirable. The most apparent case of this
tradeoff is that ideally they would have tested all schemes for constructing overlays
concurrently, so that every scheme observed exactly the same network conditions, but
this was not possible since simultaneously operating overlays would interfere with each
other. Thus, they adopted the strategy of interleaving the experiments of the various
protocol schemes to eliminate biases on shorter time scales, running the experiments at
different times of day to eliminate biases on larger time scales, and then aggregating
or averaging all the results. This is a reasonable fix for the bias introduced by
testing the different schemes under different Internet traffic conditions, but this
type of realistic testing always leaves the possibility of confounding factors in a
setting as dynamic and heterogeneous as the real Internet.
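To make the bias-reduction idea above concrete, here is a minimal hypothetical sketch of
an interleaved experiment schedule: each round runs every scheme back-to-back so that
short-time-scale network changes affect all schemes similarly, rounds are repeated at
different times of day, and results are averaged per scheme. The scheme names, the
measure_overlay() stub, and the round/day counts are illustrative assumptions, not the
authors' actual experimental harness.

    import random
    from collections import defaultdict

    # Placeholder labels for the overlay-construction schemes being compared.
    SCHEMES = ["bandwidth-latency", "latency-only", "bandwidth-only", "random"]

    def measure_overlay(scheme):
        """Stand-in for one real experiment run; returns a performance number."""
        return random.gauss(1.0, 0.1)

    def run_interleaved(rounds_per_day, days):
        results = defaultdict(list)
        for day in range(days):                  # spread runs across large time scales
            for _ in range(rounds_per_day):
                order = random.sample(SCHEMES, len(SCHEMES))
                for scheme in order:             # interleave schemes within a round
                    results[scheme].append(measure_overlay(scheme))
        # Aggregate: average each scheme across all rounds and days.
        return {s: sum(v) / len(v) for s, v in results.items()}

    if __name__ == "__main__":
        print(run_interleaved(rounds_per_day=4, days=3))

Because every scheme appears in every round, short-time-scale fluctuations hit all
schemes roughly equally, and spreading rounds across different days averages out
diurnal effects, which is the spirit of the authors' strategy as described above.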
- What lessons should researchers and builders take away from this work. What
(if any) questions does this work leave open?
The lessons that researchers should take away are that End System Multicast is a viable
architecture for enabling performance-demanding audio and video applications in dynamic
and heterogeneous Internet settings. Also, they should remember that in order to
achieve good performance for conferencing applications, it is critical to consider both
bandwidth and latency while constructing overlays. The questions the work leaves open
are what mechanisms can achieve shorter time-scale adaptation targeted at extremely
dynamic environments, and similarly what mechanisms can lower network costs for larger
group sizes.