Tuesday, May 22, 2007

Iterative Scheduling Algorithms

Iterative Scheduling Algorithms, Mohsen Bayati, Balaji Prabhakar, Devavrat Shah, Mayank Sharma, in Proc. of Infocom'07, Anchorage, May 2007.

I attended the session for this paper at Infocom and made a note to read it. It presents a distributed iterative algorithm to schedule packet transfers between the inputs and outputs of a switch with no speedup.

The algorithm is an auction algorithm of the kind proposed by Bertsekas, where the bids are based on the queue length at each input buffer (more precisely, at each virtual output queue in the input buffer, namely the virtual buffer corresponding to the packets requesting a specific output). I did not find the details of the algorithm particularly intuitive, a reaction I had both at the presentation itself and while reading the paper.
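To fix the idea for myself, here is a toy version of a Bertsekas-style auction for the assignment problem, with made-up virtual-output-queue lengths as the weights. This is my own sketch of the generic auction mechanism, not the paper's distributed variant:

```python
# Toy Bertsekas-style auction: inputs bid for outputs, bids derived from
# (hypothetical) virtual-output-queue lengths. Not the paper's algorithm.

def auction_assignment(weights, eps=0.1):
    """Assign each input to a distinct output, maximizing total weight."""
    n = len(weights)
    prices = [0.0] * n           # current price of each output
    owner = [None] * n           # owner[j] = input currently holding output j
    assigned = [None] * n        # assigned[i] = output held by input i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # value of each output to input i, net of its current price
        values = [weights[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=lambda j: values[j])
        second = max(values[j] for j in range(n) if j != best)
        # raise the price of the best output by the bidding increment
        prices[best] += values[best] - second + eps
        prev = owner[best]
        owner[best] = i
        assigned[i] = best
        if prev is not None:     # previous owner is outbid and re-enters
            assigned[prev] = None
            unassigned.append(prev)
    return assigned

# virtual output queue lengths: rows = inputs, columns = outputs
Q = [[4, 1, 0],
     [2, 3, 1],
     [0, 2, 5]]
print(auction_assignment(Q))  # -> [0, 1, 2]
```

With eps small enough relative to the weight gaps, the auction terminates with the maximum weight matching; shrinking eps trades more iterations for a better approximation, which is presumably where the impractical iteration count for wireless comes from.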

The algorithm enacts an approximation of maximum weight matching, which can be made arbitrarily good, so the throughput is nearly optimal. This improves over previous switch scheduling algorithms (iSLIP in particular).

Anyhow, the reason I was interested is that the algorithm could supposedly be applied to wireless networks. While the result in the paper is extremely valuable, I am disappointed on this count. First, the number of required iterations seems impractical for an actual wireless network. Second, it is not clear to me who should participate in the auction, or how; I guess I would need to see a protocol. The switch allows all inputs and outputs to exchange bids and assignments easily, something I do not see happening in the wireless mesh context.

Saturday, May 19, 2007

Starvation Mitigation Through Multi-Channel Coordination in CSMA Multi-hop Wireless Networks

Starvation Mitigation Through Multi-Channel Coordination in CSMA Multi-hop Wireless Networks, Jingpu Shi, Theodoros Salonidis, Edward Knightly, in Proc. of MobiHoc'06, May 22-27, Florence, Italy.

This paper follows up on the Infocom paper by the same team (replace the first author with M. Garetto), which identified some starvation conditions based on the mesh topology in networks using a CSMA/CA scheme. Basically (I don't think I wrote up that Infocom paper here, did I?) the 802.11 protocol fails in some scenarios because of the asymmetric distribution of protocol messages. For instance, if A initiates a communication with a, and B with b, and if B is in range of a but not of A, then B hears the protocol messages of the A-a pair (at least a's CTS) while A hears none of the messages from the B-b pair. This, of course, gives B the advantage.

This paper suggests a multi-channel protocol with a simple contention-based control channel; data transmissions are allocated on the other channels based on the information available at the node trying to initiate a connection. Only the channels for which the node knows the transmission information are considered for transmission; the other (unknown) channels are treated as unavailable. The node can thus contend fairly for the channels it has information about, since it knows when they become available.

This is not dissimilar to a multi-channel proposal I saw where nodes announce that they have freed a channel when they come back to the control channel. The trade-off: more messages on the control channel in one case (and thus more collisions) vs. better knowledge of the NAV for each channel, i.e., more channels usable for transmissions. I guess the parameter that controls the trade-off is the amount of data transferred per channel negotiation: the larger the data, the more one benefits from knowing which channels are available, and the fewer messages end up on the control channel per byte transferred.
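The trade-off can be made concrete with a back-of-envelope model. All the numbers below (message counts and sizes) are made-up illustrations, not values from either paper:

```python
# Toy model: fraction of control-channel traffic per data transfer, with
# and without an extra "channel freed" announcement. Parameters invented.

def control_overhead(data_bytes, msgs_per_transfer, msg_bytes=40):
    """Fraction of total bytes spent on control messages for one transfer."""
    ctrl = msgs_per_transfer * msg_bytes
    return ctrl / (ctrl + data_bytes)

for data in (500, 5000, 50000):
    with_release = control_overhead(data, 3)  # negotiation + release message
    without = control_overhead(data, 2)       # negotiation messages only
    print(data, round(with_release, 3), round(without, 3))
```

As the data per negotiation grows, the relative cost of the extra announcement shrinks, while the benefit (knowing every channel's availability) stays, which matches the intuition above.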

The scheme here is quite simple and elegant, however, since it requires only the same number of messages as those defined in the 802.11s multi-channel mode, I believe.

Thursday, May 17, 2007

Gateway Placement Optimization in Wireless Mesh Networks with QoS Constraints

Gateway Placement Optimization in Wireless Mesh Networks with QoS Constraints, Bassam Aoun, Raouf Boutaba, Youssef Iraqi, Gary Kenward, in IEEE Journal on Selected Areas in Communications, Nov. 2006, Vol. 24, No. 11, pp. 2127-2136.

This paper computes the grouping of wireless mesh nodes around cluster heads in order to satisfy some constraints (mostly a maximum number of hops for any node to reach the gateway). The algorithm is a recursive use of a greedy Dominating Set heuristic (find the node with the highest degree, remove this node and its neighbors, then repeat) to construct clusters of the desired size.
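The greedy step is simple enough to sketch. This is my own reconstruction of the generic heuristic on a made-up adjacency dict, not the paper's code:

```python
# Greedy dominating set: repeatedly pick the node covering the most
# not-yet-covered nodes, then remove it and its neighbors from play.

def greedy_dominating_set(adj):
    adj = {u: set(vs) for u, vs in adj.items()}
    heads = []                    # chosen cluster heads
    remaining = set(adj)          # nodes not yet dominated
    while remaining:
        # node whose neighborhood covers the most remaining nodes
        u = max(remaining, key=lambda v: len(adj[v] & remaining))
        heads.append(u)
        remaining -= {u} | adj[u]
    return heads

# small mesh: a line 0-1-2-3-4 with a 0-1-2 triangle; node 2 has degree 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(adj))  # -> [2, 4]
```

The recursion in the paper then reapplies this on the graph of cluster heads, which is where the geometric growth of cluster sizes noted below comes from.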

Actually, it seems that the iteration would yield clusters whose size increases geometrically (on the order of 2^n after n iterations), so the maximum diameter had better be a power of two for the algorithm to work optimally.

Nonetheless, the idea is interesting, the algorithm relatively simple, and the paper very readable (despite using Word instead of LaTeX in the print-out I have; maybe the actual journal version looks better after the journal's editing).

Revising buffering in multihop CSMA/CA wireless networks

Revising buffering in multihop CSMA/CA wireless networks, Olivier Dousse, in Proc. of IEEE SECON'07.

This is more of a position paper than a technical paper: it presents some new directions to investigate and some ideas to get thoughts rolling.

It considers a wireless multi-hop tandem network and the throughput achieved when using CSMA/CA. The paper observes that, over a long line, packets accumulate at the beginning and are then sparsely spaced over the rest of the line. The author then suggests imposing some rules, which are in essence a rate-limiting mechanism.

The imposed rules force a structure on the problem that makes it tractable as a Markov chain. Some of the additional assumptions made to optimize the problem are a bit unrealistic (all packets of equal size, all buffers of size one, and, if a flow is bi-directional, exactly the same number of packets in each direction), but just bringing up these ideas and deriving their consequences is refreshing.

I have some doubts about the derivations (actually, the math is only sketched). The first proposition states that, for N links in series, the average occupancy of the i-th buffer is equal to 1 minus the average occupancy of the (N-i)-th buffer. That would force the average occupancy to 1/2 for a buffer where i = N-i. However, if l consecutive links interfere, then only one buffer in l can be occupied while the next l-1 are empty, so the average occupancy should be 1/l. Anyhow, this is a paper that thinks outside the box, so this is not very consequential.
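Out of curiosity, the symmetry claim is easy to eyeball on a toy stand-in model: a tandem of single-packet buffers where a packet hops forward only if the next buffer is empty (an exclusion process with only nearest-neighbor blocking, so l = 2 in the terms above). This is my own simulation, not the paper's Markov chain:

```python
import random

# Toy exclusion-process simulation of a tandem of size-one buffers.
# Arrivals at buffer 0, departures from the last buffer, hops forward
# only into an empty buffer. Stand-in model, not the paper's.

def tasep_occupancy(n=20, steps=100000, seed=1):
    random.seed(seed)
    occ = [0] * n                 # buffer i holds 0 or 1 packet
    totals = [0] * n
    for _ in range(steps):
        b = random.randrange(n + 1)      # pick a random bond
        if b == 0:                       # arrival at the first buffer
            occ[0] = 1
        elif b == n:                     # departure from the last buffer
            occ[-1] = 0
        elif occ[b - 1] and not occ[b]:  # hop forward if next buffer empty
            occ[b - 1], occ[b] = 0, 1
        for i in range(n):
            totals[i] += occ[i]
    return [t / steps for t in totals]

rho = tasep_occupancy()
print(round(rho[0], 2), round(rho[10], 2), round(rho[-1], 2))
```

In this toy model the profile does come out symmetric (first and last occupancies summing to about 1, middle near 1/2), so the proposition is at least plausible for nearest-neighbor interference; my 1/l objection concerns larger interference ranges, which this sketch does not capture.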

Wednesday, May 16, 2007

Robust Geo-Routing on Embeddings of Dynamic Wireless Networks

Robust Geo-Routing on Embeddings of Dynamic Wireless Networks, Dominique Tschopp, Suhas Diggavi, Matthias Grossglauser, Jörg Widmer, in Proc. of IEEE Infocom 2007, Anchorage, May 2007.

This (and, if I get to it, the next few posts) is a paper whose Infocom talk I attended, and I made a note to get to the paper and read it. I have to say that the subject of embedding virtual coordinates to allow routing in ad hoc networks is a much more traveled area than I would have suspected.

The idea of the paper is to build an embedding based on the distances to a few beacons, namely the graph distance between a node and each beacon. Basically, if I get it right, a node receives beacon messages carrying a hop count h and a position p for the beacon's sender, and infers a virtual position from h and p. The embedding has a higher dimensionality (even though the beacons are in the plane). The beaconing is probabilistic: since the system is dynamic, getting random beacons from the set of beacon nodes evens out the chance that some beacon ends up in a degenerate situation. The virtual coordinates are averaged over a local neighborhood to ensure consistency.
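The flavor of hop-count-based virtual coordinates is easy to sketch in the style of Beacon Vector Routing (which the paper compares against): each node's coordinate is its vector of graph distances to the beacons, and routing greedily minimizes distance to the destination's vector. The probabilistic beaconing and local averaging of the paper are omitted; this is just my own illustration:

```python
from collections import deque

# BVR-flavored virtual coordinates: coordinate of a node = tuple of its
# hop counts to each beacon; greedy routing on those coordinates.

def hop_counts(adj, src):
    """BFS hop distances from src to every node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def embed(adj, beacons):
    tables = [hop_counts(adj, b) for b in beacons]
    return {u: tuple(t[u] for t in tables) for u in adj}

def greedy_route(adj, coords, src, dst):
    """Step to the neighbor whose coordinate is L1-closest to dst's."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(adj[here],
                  key=lambda v: sum(abs(a - b)
                                    for a, b in zip(coords[v], coords[dst])))
        if nxt in path:           # stuck in a local minimum; give up
            return None
        path.append(nxt)
    return path

# a 5-node line 0-1-2-3-4 with beacons at both ends
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
coords = embed(adj, beacons=[0, 4])
print(coords[2], greedy_route(adj, coords, 0, 4))
```

The local-minimum bailout is exactly where such schemes need a recovery mechanism, and presumably where the paper's averaging and higher-dimensional embedding earn their keep.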

Based on the embedding, one can perform geographic routing on the virtual coordinates. The proposed algorithm is described and evaluated through simulation (no analysis, which looks like it would be extremely complex). The performance of the proposed Probabilistic Beaconing (PB) is much better than everything they compare it with (mostly Beacon Vector Routing, NoGeo, and greedy routing on the real coordinates).

The paper is not that easy to read, I have to admit. I blame the 9-page limit, which forces authors to condense their material beyond readability.

Tuesday, May 15, 2007

The Pareto or Truncated Pareto Distribution? Measurement-Based Modeling of Session Traffic for Wi-Fi Wireless Internet Access

The Pareto or Truncated Pareto Distribution? Measurement-Based Modeling of Session Traffic for Wi-Fi Wireless Internet Access, Edward Chlebus, Gautam Divgi, in Proc. of WCNC'07, Hong Kong, March 2007.

This paper was pointed out to me by its first author at the conference, and I eventually got around to reading it. It is a pretty simple analysis of data gathered from an Australian hot-spot operator, fitting the session length distribution of wifi hotspot connections, where the operator terminates any session longer than 5 hours, hence the truncated Pareto distribution. It is pretty basic statistical analysis: identifying a distribution that fits by inspection of the data, then fitting the model parameters to get a good match.
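For simulation purposes, sampling from a truncated Pareto on [L, H] is a one-liner via the inverse CDF. The parameter values below are made up for illustration, not fitted from the paper's hotspot data:

```python
import random

# Truncated Pareto on [lo, hi]: CDF F(x) = (1 - (lo/x)**a) / (1 - (lo/hi)**a),
# inverted to map a uniform(0,1) draw to a session length. Parameters invented.

def trunc_pareto_sample(alpha, lo, hi, u):
    """Map a uniform(0,1) draw u to a truncated Pareto sample on [lo, hi]."""
    c = 1 - (lo / hi) ** alpha          # normalization of the truncated CDF
    return lo / (1 - c * u) ** (1 / alpha)

random.seed(7)
# session lengths in minutes: heavy tail, hard cut at the 5-hour limit
samples = [trunc_pareto_sample(1.2, 1.0, 300.0, random.random())
           for _ in range(10000)]
print(max(samples) <= 300.0, round(sum(samples) / len(samples), 1))
```

Unlike the plain Pareto, the truncated version has all moments finite, which is presumably what makes it the better fit once the operator cuts sessions at 5 hours.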

This could be useful when actually simulating wifi sessions. I am not sure there is a significant difference between the behavior of wifi users on an unlimited plan and that of other internet users (say, on DSL), so the models should be similar. I guess the truncation is the novelty introduced by this data, as I recall that internet traffic follows a Pareto distribution or some power law. I have to get back to On the self-similar nature of Ethernet traffic. Interestingly enough, there is no mention here of the difference between wired and wifi users.