Paper review: <Quality Adaptation for Congestion Controlled Video Playback
over the Internet>
Reviewer: <Ryan Gehl>
- State the problem the paper is trying to solve.
The purpose of this paper is to present a mechanism for using layered
video in the context of unicast congestion control.
- State the main contribution of the paper: solving a new problem,
proposing a new algorithm, or presenting a new evaluation (analysis). If a
new problem, why was the problem important? Is the problem still
important today? Will the problem be important tomorrow? If a new
algorithm or new evaluation (analysis), what are the improvements over
previous algorithms or evaluations? How do they come up with the new
algorithm or evaluation?
The main contribution of this paper is a method known as quality
adaptation, which adjusts the quality of the playback stream so that the
perceived quality is as high as the available network bandwidth will permit.
- Summarize the (at most) 3 key main ideas (each in 1 sentence).
(1) Quality adaptation allows the server to adjust the quality of the
playback stream so that the perceived quality is as high as the available
network bandwidth will permit. This is done using a layered scheme.
(2) With a small amount of buffering, the mechanism can efficiently
cope with short-term changes in bandwidth due to AIMD congestion control.
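A minimal sketch of the layered idea, with illustrative numbers rather than the paper's actual parameters: the number of active layers simply tracks the congestion-controlled bandwidth, adding a layer only when the rate can sustain it and dropping one when it cannot.

```python
# Hypothetical sketch of coarse-grained layered adaptation; the per-layer
# rate below is an assumed illustrative value, not taken from the paper.
LAYER_RATE = 250_000  # bits/s consumed by each cumulative layer

def layers_for_bandwidth(available_bps: int) -> int:
    """Return the largest number of layers whose total rate fits
    within the currently available bandwidth."""
    return max(available_bps // LAYER_RATE, 0)

# As the congestion-controlled rate varies, delivered quality tracks it.
for bw in (900_000, 1_100_000, 600_000):
    print(bw, "->", layers_for_bandwidth(bw), "layers")
```

This captures only the add/drop decision; the paper's contribution is in deciding *when* to add or drop layers so that quality changes remain smooth.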
- Critique the main contribution
- Rate the significance of the paper on a scale of 5
(breakthrough), 4 (significant contribution), 3 (modest contribution), 2
(incremental contribution), 1 (no contribution or negative contribution).
Explain your rating in a sentence or two.
I would rate this paper as a 4 because it not only provides an effective,
_deployable_ solution to a growing problem on the Internet, it also
raises several questions for future areas of research.
- Rate how convincing the methodology is: how do the authors
justify the solution approach or evaluation? Do the authors use arguments,
analyses, experiments, simulations, or a combination of them? Do the
claims and conclusions follow from the arguments, analyses or experiments?
Are the assumptions realistic (at the time of the research)? Are the
assumptions still valid today? Are the experiments well designed? Are
there different experiments that would be more convincing? Are there other
alternatives the authors should have considered? (And, of course, is the
paper free of methodological errors.)
The authors justify their technique with both an algebraic analysis and
simulations, which I found convincing. A strength of the authors' approach
is that they do not make any assumptions about loss patterns or available
bandwidth.
- What is the most important limitation of the approach?
One limitation of the approach is that it is unclear to me why users
would opt for a non-greedy strategy when they could receive a higher
quality transmission with a slight delay (the paper assumes that a large
delay is unacceptable).
- What lessons should researchers and builders take away from this
work. What (if any) questions does this work leave open?
One lesson to take away from this work is that smoothness can be achieved
in an inherently bursty environment by setting up a producer/consumer
model where times of excess bandwidth fill the buffer and times of sparse
bandwidth drain it. Perhaps this line of thinking can be applied to other
domains as well.
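The producer/consumer idea above can be sketched in a few lines; this is a toy simulation with assumed numbers, not the paper's model. An AIMD-style sawtooth delivers bandwidth that oscillates around the constant playback rate, and a small buffer absorbs the difference so playback never stalls.

```python
# Hedged sketch: a buffer smooths an AIMD sawtooth against a constant
# playback rate. All rates are illustrative units per tick.
PLAYBACK_RATE = 10  # units consumed by the decoder each tick

def simulate(sawtooth, initial_buffer=5):
    """Return the final buffer level, or None if playback underruns."""
    buffer = initial_buffer
    for delivered in sawtooth:
        buffer += delivered - PLAYBACK_RATE  # excess fills, deficit drains
        if buffer < 0:
            return None  # underrun: playback would stall here
    return buffer

# One AIMD cycle: linear increase past the playback rate, then a halving.
cycle = [8, 9, 10, 11, 12, 13, 14, 7]
print(simulate(cycle))  # buffer dips early, refills later; no stall
```

The design point this illustrates is that the buffer only needs to cover one congestion epoch's deficit, which is why a *small* amount of buffering suffices for the short-term AIMD oscillations.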