Project Suggestions
The following is a list of possible projects
for CPSC 633a. These are simply recommendations. You are encouraged to come up
with your own ideas for topics. Some suggested projects will need kernel access,
and I will provide the machine. Other projects may be best carried out using
simulations. A nice network simulation package is ns-2.
The package is written in C++ and Tcl. You can read the
online tutorial. You can also listen to the workshop
online.
- TBIT
We have covered several versions of TCP congestion control: TCP/Tahoe,
TCP/Reno, TCP/Vegas. Each version has various requirements and options that
should be provided as specified in various RFCs. TCP is designed such that
the TCP receiver is the same under these various versions, and that the
differences are implemented at the sender side of the protocol. What
fraction of servers implement a particular version of TCP? Do
they implement all of the options correctly? TBIT is a tool developed at
ACIRI (AT&T Center for Internet Research at ICSI in Berkeley, CA) that
connects with a server using a TCP connection and probes the server by
varying the behavior at the receiver end in such a way that it is possible
to distinguish between versions, and to test whether certain options and
requirements are being met. Researchers at ACIRI have posed several
questions that their TBIT tool cannot yet answer, but that they are
interested in adding support for.
Starting point: http://www.aciri.org/tbit.
Groups embarking on this project would likely get the opportunity to
interact with the folks at ACIRI, which
is in itself a fantastic opportunity.
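The sender-side differences that TBIT-style probing exploits can be seen in a toy model of how each version reacts to three duplicate ACKs. This is a simplified sketch, not TBIT's actual logic; real stacks have many more states (NewReno, SACK, and so on):

```python
# Toy model of sender-side loss reactions (simplified sketch, not TBIT code).
# On a triple duplicate ACK, Tahoe restarts slow start from one segment,
# while Reno halves cwnd and enters fast recovery -- an observable
# difference a receiver-side prober can elicit.

def react_to_triple_dup_ack(version, cwnd, ssthresh):
    """Return (cwnd, ssthresh) in segments after three duplicate ACKs."""
    if version == "tahoe":
        # Tahoe treats any loss like a timeout: slow start from 1 segment.
        return 1, max(cwnd // 2, 2)
    elif version == "reno":
        # Reno: fast retransmit + fast recovery, cut cwnd in half.
        half = max(cwnd // 2, 2)
        return half, half
    raise ValueError(version)

print(react_to_triple_dup_ack("tahoe", 16, 32))  # (1, 8)
print(react_to_triple_dup_ack("reno", 16, 32))   # (8, 8)
```

By carefully withholding or duplicating ACKs, the receiver can force a loss event and observe which of these trajectories the sender's window follows.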
- Security attacks on end-to-end congestion control
The stability of the Internet depends on end-to-end congestion
control. An end-to-end congestion control scheme assumes the cooperation
of many entities: senders, receivers, and network elements such as
routers. When some of these entities turn malicious, they can cause harm to
the network. What are the attacks? How do you identify and prevent the
attacks?
Starting point: papers in the class.
- Application-specific evaluation of new TCP-friendly
congestion control algorithms
In this class, we will read [BBFS01],
which evaluates the effects of new TCP-friendly congestion control
algorithms. However, the evaluation is independent of any application. It
will be more informative if you evaluate the algorithms in the context of a
specific class of applications, such as streaming media. To
carry out this project, you will need to use the Congestion Manager, which is a
modification of the Linux kernel, so the experience can be quite
rewarding.
Starting point: papers in the class. Congestion
manager.
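A quick way to get a feel for what "TCP-friendly" means is the simple square-root throughput model (Mathis et al.), often used as a baseline sanity check for such algorithms. The parameter values below are made up for illustration:

```python
# TCP-friendly rate from the simple "square-root" model:
#   throughput ~ (MSS * sqrt(3/2)) / (RTT * sqrt(p))
# A back-of-the-envelope baseline only; TFRC uses a more detailed equation.
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Approximate TCP-friendly sending rate in bytes/second."""
    return mss_bytes * math.sqrt(1.5) / (rtt_s * math.sqrt(loss_rate))

# Illustrative numbers: 1460-byte segments, 100 ms RTT, 1% loss.
rate = tcp_friendly_rate(1460, 0.1, 0.01)
print(f"{rate * 8 / 1e6:.2f} Mbit/s")
```

An application-specific evaluation would then ask, e.g., whether a streaming player receiving at this rate, with this smoothness, actually delivers acceptable media quality.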
- Improve secure group key management by considering
user behaviors
In this class, we will evaluate the key tree algorithm for implementing
group key management. However, the design and the evaluation of the
algorithm did not consider the effects of user behaviors. What if the users
have different behaviors? For example, some users may join and leave more
frequently than other users. Where do we place such active users on the key
tree? If we can partition the users into several groups, how do we partition
the users if we know their profiles? Going further, what if a user joins
several correlated groups? How do we structure the key trees for related
groups: a single key tree, or several?
Starting point: papers in the class.
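Why placement on the key tree matters can be seen from a simplified cost model of a binary LKH-style tree (a rough sketch, not a full protocol): a leave forces every key on the leaving member's leaf-to-root path to be replaced, so rekey cost grows with the member's depth.

```python
# Simplified leave-rekeying cost in a binary key tree (LKH-style sketch).
# When a member leaves, each key on its leaf-to-root path is replaced; the
# bottom replacement is encrypted under the sibling's key (1 message), and
# each higher replacement under its two children's keys (2 messages each).
import math

def leave_rekey_encryptions(depth):
    """Approximate encryptions needed when a member at this depth leaves."""
    return 2 * depth - 1 if depth > 0 else 0

n = 4096
balanced = int(math.log2(n))               # depth 12 in a balanced tree
print(leave_rekey_encryptions(balanced))   # 23
print(leave_rekey_encryptions(4))          # 7: cheaper for a shallow member
```

This suggests one direction for the project: if frequently-leaving users could be placed at shallow positions, the average rekeying cost drops, at the price of deeper (costlier) positions for stable users.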
- Comparison and improvement of content based routing
algorithms
One recent trend in computer networks is peer-to-peer networks. Compared
with the current Internet, one class of peer-to-peer networks can be
considered as a distributed storage system with the capability to search by
content. In other words, one of the most interesting aspects of
peer-to-peer networks is their content-based routing algorithms. Researchers
have already proposed several schemes. It will be interesting to compare
them. Also, you may consider possibilities to improve the performance.
Starting point: The papers in class reading list. Also, http://www.openp2p.com/
- Search by key words in content based routing
algorithms
All of the proposed content based search algorithms assume that the
searching user knows the key of a data item. Usually, the key is generated
by applying a hash function on the content of a data item. Therefore, if a
user does not know about the item, the user cannot search for the item. An
interesting question then is whether we can allow search by partial
matching. To make the problem more interesting (you don't need to consider
this constraint), most systems are designed to disallow key words searching
due to legal and security concern. Can you strike a balance?
Starting point: The papers in class. Also, http://www.openp2p.com/
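The core difficulty is that hashing destroys similarity: nearby contents get unrelated keys, so only exact-key lookup works. One hedged workaround to experiment with is publishing a separate `hash(keyword) -> item key` index entry per keyword. The names and the in-memory dictionary below are purely illustrative stand-ins for a DHT, not taken from any real system:

```python
# Sketch: exact-match keys vs. a per-keyword index (illustrative only).
import hashlib

def key_of(data: bytes) -> str:
    """Exact-match key: hash of the content, as in typical DHT designs."""
    return hashlib.sha1(data).hexdigest()

index = {}   # stands in for the distributed hash table

def publish(content: str) -> str:
    item_key = key_of(content.encode())
    index[item_key] = content
    # Extra index entries: one per keyword, pointing at the item key.
    for word in set(content.lower().split()):
        index.setdefault(key_of(word.encode()), []).append(item_key)
    return item_key

def search(word: str):
    """Keyword search via the per-word index entries."""
    return [index[k] for k in index.get(key_of(word.lower().encode()), [])]

publish("congestion control for multicast")
print(search("multicast"))   # ['congestion control for multicast']
```

Even this naive scheme raises the project's central tension: the keyword index re-enables exactly the kind of searching that some systems disallow for legal and security reasons.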
- Will GPS clock help?
One performance issue of TCP is that the clocks of a TCP sender and its
receiver are not synchronized. Therefore, the sender can only measure
round-trip time, which adds in noise from the reverse path. By equipping
end systems with cards that receive GPS-synchronized clock signals, the
sender and the receiver can measure one-way delays accurately, to within 1-2
microseconds. Can this improve performance? When? By how much? Is this a good
idea?
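The measurement gain can be illustrated with a small arithmetic sketch: with synchronized clocks each direction's one-way delay is directly observable, while an unsynchronized sender sees only their sum. The timestamps below are made-up numbers for illustration:

```python
# With GPS-synchronized clocks, one-way delays are directly measurable;
# without them, only the RTT (forward + reverse + receiver think time) is.
# All timestamps below are invented for illustration (seconds).

send_ts, recv_ts = 0.000000, 0.012500          # data packet: sent, received
ack_send_ts, ack_recv_ts = 0.013000, 0.043000  # ACK: sent, received

forward_owd = recv_ts - send_ts                # 12.5 ms forward delay
reverse_owd = ack_recv_ts - ack_send_ts        # 30 ms: congested reverse path
rtt = ack_recv_ts - send_ts                    # all an unsynced sender sees

print(f"forward {forward_owd*1e3:.1f} ms, reverse {reverse_owd*1e3:.1f} ms, "
      f"rtt {rtt*1e3:.1f} ms")
```

Here the RTT-based sender would wrongly attribute much of the 43 ms to the forward path, even though the forward direction is uncongested; with one-way delays it could react only to forward-path congestion.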
- Retransmission strategy in wireless environments
In wireless environments, packet losses are highly correlated; therefore it
is possible that many packets in a window are dropped. In this scenario, can
we improve performance by using the go-back-n scheme for retransmission? In
other words, will it be beneficial if we retransmit all the outstanding
packets after we detect a packet loss?
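A toy count of retransmitted packets suggests why correlated losses change the answer (a hypothetical model with a single window, ignoring timers and ACK dynamics):

```python
# Toy retransmission-cost comparison for one window (hypothetical model).

def go_back_n_cost(window, first_lost):
    """Go-back-N retransmits everything from the first loss to window end."""
    return window - first_lost

def selective_repeat_cost(lost_set):
    """Selective repeat retransmits only the packets actually lost."""
    return len(lost_set)

window = 16
burst = set(range(4, 12))                    # correlated burst: packets 4..11
print(go_back_n_cost(window, min(burst)))    # 12
print(selective_repeat_cost(burst))          # 8
```

With isolated losses go-back-N wastes many retransmissions, but when most of the window is lost anyway (as in a burst), the two costs converge, and go-back-N's simplicity (no receiver buffering or SACK state) may start to pay off.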
- Hot-spot in wireless mobile networks
There are two types of wireless and mobile infrastructure: cell-based and ad
hoc based. In the cell-based infrastructure, everything needs to go through
the base station: communications between devices in the same cell as
well as communications between a device in the cell and the outside world.
In an ad hoc network, devices communicate with each other without going
through the base station. Recent studies show that the majority of the
traffic is between a device and the Internet. That is, even with an ad
hoc network, we still need to go through the base station to reach the other
network. Can we gain performance if we have a hybrid architecture?
Starting point: [CW01]
Chunming Qiao, Hongyi Wu, "iCAR: an Integrated Cellular and Ad-hoc
Relay System". In Proceedings of International Conference on Computer
Communications and Networks (IC3N), Las Vegas, NV, Oct. 2001.
- Reliable multicast transport protocols in wireless
Ad Hoc networks
Ad-hoc networks are collections of mobile nodes communicating using wireless
media, without any fixed infrastructure. Conventional reliable multicast
schemes are inadequate in these scenarios, as the mobility aspect can cause
rapid and frequent changes in network topology. How do you implement
reliable multicast in this environment? Are there tradeoffs in terms of
power usage and wireless bandwidth consumption? To visualize the importance
of this problem, you can think of the ad hoc nodes as agents of a US
special forces unit carrying out operations in Afghanistan.
Starting point: http://www.isi.edu/imahn/
- RED inference
Random Early Detection (RED) is a policy implemented in
Internet routers whereby packets are dropped probabilistically: the
probability that a packet is dropped is an increasing function of an
exponentially weighted average of the router's queue occupancy. Is
RED working in practice? Before this can be answered, one first needs to
know to what extent RED is being deployed. However, it is often not easy to
obtain this information from ISPs. Can end hosts infer whether
a router, or a set of routers along a given network path,
is using RED? Can you detect their configurations?
Starting point: [FJ93]
- Bandwidth allocation for multi-party applications
Multi-party applications, such as Internet video/audio broadcasts, or
videoconferencing, have the potential to usurp large shares of network
bandwidth. How should these protocols be designed so that they share
bandwidth ``fairly'' with other applications using the Internet? This
problem has two interesting parts: what is a ``fair'' share for such an
application? Once it is determined what is meant by a fair share, how can
this fair share be realized?
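One common candidate answer to the first part is max-min fairness, computable by "progressive filling". The sketch below handles the single-bottleneck case only, which is a big simplification of the multi-party, multi-link problem posed above:

```python
# Max-min fair shares on a single link via progressive filling (sketch).
# Satisfy the smallest demands first; split the leftover capacity equally
# among flows whose demand exceeds their equal share.

def max_min_share(capacity, demands):
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        i = remaining[0]
        if demands[i] <= share:
            alloc[i] = demands[i]   # fully satisfiable: give it its demand
            cap -= demands[i]
            remaining.pop(0)
        else:
            for j in remaining:     # everyone left gets an equal share
                alloc[j] = share
            break
    return alloc

print(max_min_share(10, [2, 3, 8]))   # [2, 3, 5.0]
```

The second part of the project, realizing such shares without central knowledge of all demands, is where the real protocol-design difficulty lies.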
- Soft state tradeoffs
Soft state is a mechanism used by communication protocols in which a
client node reverts to some default state if it is not
periodically contacted by its parent. The mechanism increases the fault
tolerance of the network because when network faults partition
communication, the system reverts to a default state. However, soft-state
has a cost in terms of the bandwidth required to simply keep the connection
alive, as well as the automatic termination of connections during a
partition. Is there a good model that can be used to examine these
tradeoffs?
Starting point: [RC99]
Suchitra Raman and Steven McCanne, "A Model, Analysis, and Protocol
Framework for Soft State-based Communication". In Proceedings of ACM
SIGCOMM 1999.
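A first-cut model of the tradeoff can be written in two lines (a back-of-the-envelope sketch in the spirit of [RC99], with made-up parameter values, far simpler than their framework): refreshing more often costs keep-alive bandwidth, but shortens the window during which stale state can survive a silent failure.

```python
# Back-of-the-envelope soft-state tradeoff model (illustrative numbers).

def refresh_bandwidth(state_bytes, refresh_interval_s):
    """Keep-alive cost in bytes/second for one piece of state."""
    return state_bytes / refresh_interval_s

def max_staleness(refresh_interval_s, missed_refresh_limit):
    """State is discarded after this many intervals pass with no refresh."""
    return refresh_interval_s * missed_refresh_limit

# Sweep the refresh interval: bandwidth falls as staleness grows.
for interval in (1, 5, 30):
    print(interval, refresh_bandwidth(200, interval),
          max_staleness(interval, 3))
```

A good project model would refine both axes, e.g. adding loss of refresh messages, the cost of acting on stale state, and the cost of wrongly tearing state down during a partition.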
- Network infrastructure for ubiquitous devices
It is commonly stated that soon, every household device will have its own IP
address and will become a component of the Internet. What sorts of changes
need to be made to the current IP infrastructure to support the needs, as
well as the numbering, of these devices?
- New name service
DNS was designed under the assumption that hosts would remain fixed.
However, it turns out that people are not interested in reaching a given
host - they are more interested in reaching the information that might exist
on the host. DNS is not designed to handle information that might reside on
a set of hosts for a short period of time before being moved to other sites.
How can DNS be modified to support these migration services, or should a new
protocol be designed to replace current DNS?
Starting point: The
Active Names project.
- Aggregated multicast support in NS2