We submitted a paper to a conference (ICNP) which is quite selective (30 papers accepted for 170 submissions). Grade-wise, the reviews of our paper are decent (three 'accept with room' and one 'accept'). The conference being very competitive, our paper did not get accepted, but I admit to being irritated enough by the strange reasoning of some of the reviews to pen this post.
One reviewer states: "The evaluation considers only two network graph types... In both cases, they only consider relatively small networks, up to 10^5 nodes. Hence, while the results obtained are indicative, they don't really support the claims of 'extensive simulations' and 'close to optimal on a wide variety of topologies'."
This floored me: 10^5 nodes for a routing simulation is actually A LOT. If you want to evaluate your routing on PlanetLab, you get 800 nodes. If you use NS, you can't go over 1,000 nodes. The other papers we looked at get to 1,000 nodes, maybe 10,000 at the most. We had to implement our own emulator from scratch to get to 300,000 nodes. And it's hardware limitations that stopped us at that scale: with 4 GB of RAM, the matrix describing the network connectivity, if not optimized, is already 10^10 bits for a 10^5-node network.
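To put that hardware constraint in numbers, here is a rough back-of-the-envelope sketch (assuming a dense adjacency matrix at one bit per entry, i.e. the "not optimized" case mentioned above):

```python
# Rough memory footprint of a dense, one-bit-per-entry adjacency matrix.
# Illustrative only; a real simulator would use sparse structures, but
# the quadratic growth is the point.
def dense_matrix_bits(n):
    return n * n  # one bit for each (i, j) pair

for n in (10**5, 3 * 10**5):
    bits = dense_matrix_bits(n)
    gib = bits / 8 / 2**30
    print(f"n = {n:>7,}: {bits:.1e} bits (~{gib:.1f} GiB)")
```

At 3 x 10^5 nodes, the unoptimized matrix alone is already well past 4 GB, which is why the scale tops out where it does.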
The reviewer keeps going: "instead of using home-brewn [sic] scale free graphs, it would be better to use graph generators that are known to produce topologically similar graphs to the Internet; consider e.g. BRITE or INET." This refers to the fact that we use, to quote the reviewer again, "Barabasi-Albert scale free networks, which are known to be somewhat different from the real graphs met in the internet."
So this reviewer wants us to use LARGE graphs, generated with BRITE, but NOT Barabasi-Albert. Looking at the BRITE documentation, however, one can see that BRITE defines LARGE as 10^5 (see Item 4 on the wish list (link to pdf)); and that BRITE uses...
... Barabasi-Albert to create Internet-like topologies.
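For what it's worth, a Barabasi-Albert graph at the scale we used is straightforward to generate; a minimal sketch with networkx (the attachment parameter m = 2 and the node count are my illustrative choices, not the paper's exact settings):

```python
import networkx as nx

# Barabasi-Albert preferential attachment: the same family of scale-free
# topologies that BRITE's Internet-like model is built on.
n, m = 10**5, 2
G = nx.barabasi_albert_graph(n, m, seed=42)
print(G.number_of_nodes(), G.number_of_edges())
```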
Another reviewer mentions: The main problem with the paper is that there is no comparison with recent research in the area of designing routing protocols with small stretch and small per-node state. Specifically, there are at least two lines of recent research that are worth comparing against:
- Practical implementation of Compact Routing [the reviewer mentions S4]
- Routing based on distributed Delaunay Triangulation (which guarantees that greedy routing always works) [and the reviewer cites two more papers on Delaunay triangulation].
But we can't compare with compact routing because, as our paper states: "The idea of trading-off path stretch for routing table size is the core component of the work on compact routing. Thorup et al. show that it is possible to guarantee a path stretch of three with a route table size growing as O(\sqrt{n \log(n)})." That is much higher than the polylogarithmic state of our scheme. S4 selects K beacons which have to maintain paths between them, and each node has an associated cluster, whose size is about sqrt(n) when the number of beacons is sqrt(n). We keep state on the order of log(n); how can we compare?
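To see why the two regimes don't sit on the same plot, here is a purely illustrative order-of-magnitude comparison of per-node state, ignoring constants and hidden factors (the numbers are mine, not from either paper):

```python
import math

# Per-node routing state, up to constants: sqrt(n log n) for the
# stretch-3 compact routing bound quoted above, versus log2(n) for a
# polylogarithmic scheme. Orders of magnitude only.
for n in (10**4, 10**5, 3 * 10**5):
    compact = math.sqrt(n * math.log2(n))
    polylog = math.log2(n)
    print(f"n = {n:>7,}: sqrt(n log n) ~ {compact:7.0f}   log2(n) ~ {polylog:4.1f}")
```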
We can't compare with routing on Delaunay triangulations (DT) either, because we consider graphs, not Euclidean spaces. Plus, DT requires planar connectivity, which we don't assume. We start from a graph G=(V,E) and try to find a representation of it in a Euclidean space. To get a DT, you need to be in the Euclidean space first!
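To illustrate the input mismatch: a Delaunay triangulation is computed from point coordinates, not from an adjacency structure. A minimal sketch with scipy (the random points are just a stand-in):

```python
import numpy as np
from scipy.spatial import Delaunay

# Delaunay wants coordinates in a Euclidean space as input -- exactly
# what an abstract graph G = (V, E) does not come with.
points = np.random.rand(100, 2)  # 100 random points in the plane
tri = Delaunay(points)
print(tri.simplices.shape)       # triangles over the point set
```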
So the "main problem" with our paper is that we don't consider work that, while excellent, is not relevant to our results (granted, the reviewer gave us 'accept if room' so the main problem is not a deal breaker for him either). Still, this drives home the point of hypercriticality.
Wednesday, July 21, 2010