This paper investigates three issues involving routing behavior on the Internet. First, it gathers statistics on several major routing pathologies: how often they occur, whether they change over time, and whether they improve. Second, it examines the stability of end-to-end Internet routes using two notions: prevalence, the overall likelihood that a particular route is encountered; and persistence, the likelihood that a route remains unchanged over a long period of time. Third, it examines routing symmetry: given two points A and B on the Internet, are the route from A to B and the route from B to A symmetrical?
All three issues highlight a major problem of the Internet: how do we measure its behavior and topology? The authors attempt to address this problem using a simple tool, traceroute. The experiment used a set of 37 Internet hosts, each running a "network probe daemon" (NPD). These daemons are controlled by a central program that directs them to run traceroute to other hosts in the set. This mesh of 37 hosts serves as a sample of the Internet on which a wide range of statistical tests are performed.
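The measurement design described above can be sketched as follows. This is a simplified illustration, not the paper's actual controller; the host names and the random-pairing logic are hypothetical placeholders:

```python
import random

# Hypothetical NPD host names (placeholders, not the actual 37 sites
# used in the paper's experiment).
HOSTS = ["npd1.example.edu", "npd2.example.org", "npd3.example.net"]

def pick_measurement_pairs(hosts, n_pairs, seed=0):
    """Pick distinct source/destination host pairs, mimicking a central
    controller telling each probe daemon which peer to traceroute."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        src, dst = rng.sample(hosts, 2)  # two distinct hosts per measurement
        pairs.append((src, dst))
    return pairs

pairs = pick_measurement_pairs(HOSTS, 5)
```

In the real experiment each daemon would then invoke traceroute toward its assigned peer and report the hop list back for analysis.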
The pathologies examined include routing loops, erroneous routing, connectivity altered mid-stream, fluttering, infrastructure failures, and unreachability due to too many hops. According to the paper, the frequency of these pathologies increased from 1.5% to 3.4% between the end of 1994 and the end of 1995.
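One of these pathologies, the routing loop, is straightforward to detect from a traceroute hop list: a router address that reappears after intervening hops indicates the packet is circulating. A minimal sketch (my own simplification, not the paper's detection procedure):

```python
def has_routing_loop(hops):
    """Return True if some hop address reappears non-consecutively in
    the hop list, suggesting a routing loop. A repeat of the immediately
    preceding hop is ignored, since traceroute can legitimately report
    the same router on adjacent probes."""
    last_seen = {}
    for i, hop in enumerate(hops):
        if hop in last_seen and i - last_seen[hop] > 1:
            return True
        last_seen[hop] = i
    return False
```

For example, the hop sequence `["a", "b", "c", "b"]` would be flagged as a loop, while `["a", "b", "c"]` would not.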
For routing path stability, the paper finds that about 68% of routes remain unchanged for a few days, while a smaller percentage of routes change within 6 hours, or even within 10 minutes. The degree of change varies considerably between intra-network and inter-network routes.
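The two stability notions can be illustrated on a time-ordered sequence of observed routes. This is a rough sketch under simplifying assumptions (equally spaced observations, routes represented as opaque labels), not the paper's estimators:

```python
from collections import Counter

def prevalence(observations):
    """Prevalence of the dominant route: the fraction of observations
    in which the most frequently seen route appears."""
    route, n = Counter(observations).most_common(1)[0]
    return route, n / len(observations)

def persistence_runs(observations):
    """Lengths of runs of consecutive identical routes, a simple proxy
    for how long a route persists before changing."""
    if not observations:
        return []
    runs, length = [], 1
    for prev, cur in zip(observations, observations[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return runs
```

On the sequence `["r1", "r1", "r2", "r1", "r1", "r1"]`, route `r1` has prevalence 5/6, and the run lengths are `[2, 1, 3]`: the route changed twice but kept returning to `r1`.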
In terms of symmetry, the paper finds that about 30% of the time the two directions of a route between the same pair of endpoints visit at least one different autonomous system.
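At the AS level, this symmetry check amounts to comparing the forward path with the reversed return path. A minimal sketch (the AS numbers below are placeholders, and the comparison is a simplification of the paper's city- and AS-level analysis):

```python
def as_path_asymmetric(path_ab, path_ba):
    """Return True if the A->B AS path is not simply the B->A AS path
    reversed, i.e. at least one autonomous system differs between the
    two directions."""
    return path_ab != list(reversed(path_ba))

# Hypothetical AS-level paths between two endpoints.
forward = ["AS1", "AS7", "AS3"]
reverse = ["AS3", "AS9", "AS1"]  # traverses AS9 instead of AS7
```

Here `as_path_asymmetric(forward, reverse)` reports asymmetry, since the return path traverses a different middle AS.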
All of these measurements underscore the difficulty of measuring the Internet's performance and topology: because the Internet is a decentralized system of distributed networks, there is no single vantage point from which to observe it. The paper's 37-host mesh is a practical way to address this problem, but it has limitations in representing the Internet as a whole. I give this paper a rating of 4, because it makes solid statistical discoveries about a problem that is difficult to measure.