Tuesday, May 22, 2007

Iterative Scheduling Algorithms

Iterative Scheduling Algorithms, Mohsen Bayati, Balaji Prabhakar, Devavrat Shah, Mayank Sharma, in Proc. of Infocom'07, Anchorage, May 2007.

A paper whose session I attended at Infocom, with a note to read the document later. This is basically a distributed iterative algorithm to schedule the packet transfers between the inputs and the outputs of a switch with no speedup.

The algorithm is an auction algorithm, as proposed by Bertsekas, where the bids are based on the queue length at each input buffer (actually, at each virtual output queue in the input buffer, namely the virtual buffer corresponding to the packets requesting a specific output buffer). The details of the algorithm are not particularly intuitive, a reaction I had both at the presentation itself and while reading the paper.

The algorithm enacts a maximum weight matching approximation, which can be made arbitrarily good, and the throughput is thus almost optimal. This improves over previous switch scheduling algorithms (iSLIP in particular).
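To make the mechanics concrete, here is a minimal sketch of a Bertsekas-style auction for maximum weight matching, with VOQ queue lengths as the weights. This is my own toy rendering of the general auction idea, not the paper's actual distributed update rules:

```python
# A toy Bertsekas-style auction for max-weight matching; weights[i][j] is
# the queue length of the VOQ at input i for output j. My sketch, not the
# paper's algorithm: real iterative schedulers bound the iteration count.
def auction_matching(weights, eps=0.1):
    n = len(weights)
    prices = [0.0] * n           # current price of each output
    owner = [None] * n           # output -> input currently holding it
    assigned = [None] * n        # input -> output
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        net = [weights[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=lambda j: net[j])
        second = max((net[j] for j in range(n) if j != best), default=0.0)
        prices[best] += net[best] - second + eps   # bid increment
        if owner[best] is not None:                # evict previous winner
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assigned[i] = best
    return assigned

print(auction_matching([[3, 1], [2, 4]]))  # -> [0, 1], total weight 7
```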

Anyhow, the reason I was interested was that the algorithm could supposedly be applied to wireless networks. While the result in the paper is extremely valuable, I am disappointed on this count. First, the number of required iterations seems impractical for an actual wireless network. Second, it is not clear to me who should be participating in the auction algorithm, or how. I guess I would need to see a protocol. But the switch allows all inputs and outputs to exchange bids and assignments easily, something that I do not see in the wireless mesh context.

Saturday, May 19, 2007

Starvation Mitigation Through Multi-Channel Coordination in CSMA Multi-hop Wireless Networks

Starvation Mitigation Through Multi-Channel Coordination in CSMA Multi-hop Wireless Networks, Jingpu Shi, Theodoros Salonidis, Edward Knightly, in Proc. of MobiHoc'06, May 22-27, Florence, Italy.

This paper follows up on the Infocom paper by the same team (replace the first author with M. Garetto) which identified some starvation conditions based on the mesh topology in networks using a CSMA/CA scheme. Basically (I don't think I wrote up that Infocom paper here, did I?) the 802.11 protocol fails in some scenarios because of the asymmetric distribution of protocol messages. For instance, if A initiates a communication with a, and B with b, and if B is in range of a but not of A, then B hears the protocol messages of the A-a pair (at least a's CTS) while A hears nothing of the messages from the B-b pair. This of course gives B an advantage.

This paper suggests a multi-channel protocol with a simple contention-based control channel; data transmissions are allocated on the other channels based on the information available at the node trying to initiate a connection. Only the channels for which the node knows the transmission information are considered for transmission; the other (unknown) channels are treated as unavailable. The node can thus contend fairly for the channels it has information about, since it knows when they become available.

This is not dissimilar to some multi-channel proposal I saw where nodes announce they have freed the channel when they come back to the control channel. The trade-off: more messages on the control channel in one case (thus more collisions) vs. better knowledge of the NAV for each channel, i.e., more channels to use for transmissions. I guess the parameter that controls the trade-off would be the amount of data transferred per channel negotiation. The larger the data, the more one benefits from knowing which channels are available, and the fewer messages on the control channel.

This paper's scheme is quite simple and elegant, however, since it requires only the same number of messages as those defined in the 802.11s multi-channel mode, I believe.

Thursday, May 17, 2007

Gateway Placement Optimization in Wireless Mesh Networks with QoS Constraints

Gateway Placement Optimization in Wireless Mesh Networks with QoS Constraints, Bassam Aoun, Raouf Boutaba, Youssef Iraqi, Gary Kenward, in IEEE Journal on Selected Areas in Communications, Nov. 2006, Vol. 24, No. 11, pp. 2127-2136.

This paper computes the grouping of wireless mesh nodes around a cluster head in order to satisfy some constraints (mostly a maximum number of hops for any node to reach the gateway). The algorithm is a recursive use of a greedy Dominating Set algorithm (find the node with the highest degree, remove this node and its neighbors, then repeat) to construct clusters of the desired size.
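For reference, a minimal sketch of the greedy Dominating Set step as I describe it above (my own paraphrase; the paper wraps this in a recursion to grow clusters):

```python
# Greedy dominating set: repeatedly take the node covering the most
# still-uncovered nodes, then remove it and its neighbors.
def greedy_dominating_set(graph):
    """graph: dict mapping node -> set of neighbors."""
    remaining = set(graph)
    dominators = []
    while remaining:
        u = max(remaining, key=lambda v: len((graph[v] | {v}) & remaining))
        dominators.append(u)
        remaining -= graph[u] | {u}
    return dominators

g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_dominating_set(g))  # -> [3]
```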

Actually, it seems that the iteration would yield clusters whose size increases geometrically (on the order of 2^n after n iterations), so the maximum diameter had better be a power of two for the algorithm to work optimally.

Nonetheless, the idea is interesting, the algorithm relatively simple, and the paper very readable (despite using Word instead of LaTeX in the print-out I have; maybe the actual journal version looks better after the journal's editing).

Revising buffering in multihop CSMA/CA wireless networks

Revising buffering in multihop CSMA/CA wireless networks, Olivier Dousse, in Proc. of IEEE SECON'07.

This paper is more a position paper than a technical paper: it presents some new directions to investigate, and some ideas to get thoughts rolling.

It considers a wireless multi-hop tandem network and the throughput achieved when using CSMA/CA. The paper observes that for a long line, packets accumulate at the beginning, then are sparsely spaced over the rest of the line. The author then suggests imposing some rules, which are in essence a rate limiting mechanism.

The imposed rules force a structure on the problem which makes it tractable as a Markov chain. Some of the additional assumptions made to optimize the problem are a bit unrealistic (all packets of equal size, all buffers of size one, and, if a flow is bi-directional, exactly the same number of packets in each direction) but just bringing up these ideas and deriving the consequences is refreshing.

I have some doubts about the derivations (actually, the maths are only sketched), but the first proposition states that, for N links in series, the average occupancy at the i-th buffer is equal to 1 minus the average occupancy at the (N-i)-th buffer. That would force the average occupancy to 1/2 for a buffer where i = N-i. However, if l links interfere, then only 1 buffer is occupied and the next l-1 are empty, so the average occupancy is 1/l. Anyhow, it is a paper which thinks outside the box, so this is not very consequential.

Wednesday, May 16, 2007

Robust Geo-Routing on Embeddings of Dynamic Wireless Networks

Robust Geo-Routing on Embeddings of Dynamic Wireless Networks, Dominique Tschopp, Suhas Diggavi, Matthias Grossglauser, Jörg Widmer, in Proc. of IEEE Infocom 2007, Anchorage, May 2007.

This (and, if I get to it, the next few posts) is a paper whose Infocom talk I attended, with a note to get to the paper and read it. I have to say that the subject of embedding virtual coordinates to allow routing in ad hoc networks is a much more traveled area than I would have suspected.

The idea of the paper is to build an embedding based on the distance to a few beacons, namely the graph distance between a node and each beacon. Basically, if I get it right, the nodes receive beacons with a hop count h and a position p for the sender of the beacon, and infer a virtual position based on h and p. The embedding has a higher dimensionality (even though the beacons are in the plane). The beaconing is probabilistic: the system is dynamic, and getting a random beacon from the set of beacon nodes evens out the chance that some beacons end up in a degenerate situation. The virtual coordinates are averaged over a local neighborhood to ensure consistency.
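A minimal sketch of the flavor of such an embedding, assuming (my simplification) that a node's virtual coordinate is just its vector of hop counts to the beacons, without the probabilistic beaconing or the local averaging:

```python
from collections import deque

def hop_counts(graph, beacon):
    """BFS hop counts from one beacon to every node."""
    dist, q = {beacon: 0}, deque([beacon])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def embed(graph, beacons):
    """Virtual coordinate of a node = its hop counts to each beacon."""
    tables = [hop_counts(graph, b) for b in beacons]
    return {v: tuple(t[v] for t in tables) for v in graph}

def greedy_next_hop(graph, coords, u, dest):
    """Geographic routing on the virtual coordinates: forward to the
    neighbor closest to the destination, or None if stuck."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(graph[u], key=lambda v: d(coords[v], coords[dest]))
    return best if d(coords[best], coords[dest]) < d(coords[u], coords[dest]) else None
```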

Based on the embedding, one can perform geographic routing on the virtual coordinates. The proposed algorithm is described and evaluated through simulation (no analysis, which looks like it would be extremely complex). The performance of the proposed Probabilistic Beaconing (PB) is much better than all the stuff they compare it with (mostly Beacon Vector Routing, NoGeo and greedy routing on the real coordinates).

The paper is not that easy to read, I have to admit. I blame the 9-page limit, which forces authors to condense their stuff beyond readability.

Tuesday, May 15, 2007

The Pareto or Truncated Pareto Distribution? Measurement-Based Modeling of Session Traffic for Wi-Fi Wireless Internet Access

The Pareto or Truncated Pareto Distribution? Measurement-Based Modeling of Session Traffic for Wi-Fi Wireless Internet Access, Edward Chlebus, Gautam Divgi, in Proc. of WCNC'07, Hong Kong, March 2007.

This paper was pointed out to me by its first author at the conference, and I eventually got to read it. It is a pretty simple analysis of data gathered from an Australian hot-spot operator, and it finds the session length distribution for connections at a wifi hotspot whose operator terminates connections over 5 hours long; thus the truncated Pareto distribution. It is pretty basic statistical analysis: discovering a distribution that fits by observation of the data, then fitting the model parameters to get a good match.
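For the record, a sketch of the truncated Pareto CDF I believe is at play (my formula from the standard definition, not copied from the paper), with L the minimum session length and H the operator's cut-off:

```python
# CDF of a Pareto(alpha, scale L) truncated at H: F(x)/F(H) on [L, H].
def truncated_pareto_cdf(x, alpha, L, H):
    if x < L:
        return 0.0
    if x >= H:
        return 1.0
    return (1 - (L / x) ** alpha) / (1 - (L / H) ** alpha)

print(truncated_pareto_cdf(3600.0, alpha=1.1, L=60.0, H=5 * 3600.0))
```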

This could be useful when actually simulating wifi sessions. I am not sure there is a significant difference in behavior between wifi users on an unlimited plan and other internet users (say, DSL), so those models should be similar. I guess the truncation is the novelty introduced by the data here, as I recall that internet traffic follows a Pareto distribution or some power law. I have to get back to On the self-similar nature of Ethernet traffic. Interestingly enough, there is no mention of the difference between wired and wifi users here.

Monday, April 30, 2007

Bandwidth Balancing in Multi-Channel IEEE 802.16 Wireless Mesh Networks.

Bandwidth Balancing in Multi-Channel IEEE 802.16 Wireless Mesh Networks, Claudio Cicconetti, Ian Akyildiz, Luciano Lenzini, to appear in Proc. of INFOCOM'07, Anchorage, AK, May 2007.

This paper presents a way to improve on the fairness of the 802.16 distributed scheduler for mesh networks. It is interesting for me, since the standard for 802.16 distributed scheduling came from the work of a group located in the same office as mine, working on the Rooftop wireless router.

Anyhow, the paper is pretty well made and presents the key idea nicely. It is to my knowledge the first paper considering fairness in the mesh 802.16 standard. The answer to the fairness issue exhibited by 802.16 is pretty straightforward, but you had to identify the problem in the first place.

The solution is distributed as well, of course, and maintains limited state: a few buffers per neighbor. There is no complicated optimization, just a simple weighting mechanism to split the bandwidth fairly among the active flows. Nice performance evaluation using an NS-2 implementation of the 802.16 distributed mesh function that I'd like to get my hands on.

Saturday, March 17, 2007

On mitigating the broadcast storm problem with directional antennas

On mitigating the broadcast storm problem with directional antennas, Chunyu Hu, Yifei Hong, Jennifer Hou, in Proc. of IEEE International Conference on Communications (ICC'03), Vol. 1, 11-15 May 2003, pp. 104-110.

This is a short, breezy read. It considers three schemes to reduce the broadcast storm problem: the first is to not rebroadcast in the sector which received the packet to be broadcast; the second is to identify some relay nodes in each sector and have these rebroadcast, hoping that these relays are enough to cover all the nodes; the third is to pick the relay which maximizes the new coverage. This means that the scheme retransmits first in the sector containing the node which covers the most new ground (i.e., its own coverage area minus the area already covered by the node forwarding the broadcast to it).
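The third scheme's "new coverage" criterion is easy to approximate numerically; here is a toy Monte Carlo version (entirely my illustration, with unit-disk coverage assumed):

```python
import math, random

def new_coverage(sender, relay, r, samples=10000):
    """Estimate the area of relay's disk of radius r that the sender's
    own disk does not already cover."""
    hits = 0
    for _ in range(samples):
        a = random.uniform(0, 2 * math.pi)
        d = r * math.sqrt(random.random())      # uniform point in the disk
        p = (relay[0] + d * math.cos(a), relay[1] + d * math.sin(a))
        if math.dist(p, sender) > r:            # outside the sender's disk
            hits += 1
    return hits / samples * math.pi * r * r

# The farther the relay, the more new ground it covers:
print(new_coverage((0, 0), (0.5, 0), 1.0), new_coverage((0, 0), (1.5, 0), 1.0))
```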

The three schemes are intuitively sound, and bring some performance benefits when evaluated through QualNet simulations. J. Hou is principal investigator for J-Sim, a simulation environment to perform this kind of evaluation; I guess QualNet was not the perfect tool then (for one thing, it is a commercial tool, whereas J-Sim is freely available).

Inconsequential details: this paper is written using Word, which makes the equations look funky. And there is a little mix-up: "This is termed as the broadcast storm in [20]" in the abstract, and "This is termed as the broadcast storm in [7]" in the intro. The abstract is correct.

Opportunities and Challenges in Mesh Networks Using Directional Antennas

Opportunities and Challenges in Mesh Networks Using Directional Antennas, Guoqing Li, Lily Yang, W. Steven Conner, Bahar Sadeghi, in Proc. of WiMesh 2005, Santa Clara, Sept. 2005.

This is an interesting survey. I am now debating whether or not I should attend/participate in conferences, since I attended this workshop. I even chaired a session. Yet, I have no recollection of this paper, despite the fact that it is actually a good and interesting survey of the use of directional antennas in mesh networks. [In my defense, I was tutorial chair of the conference associated with the workshop, IEEE SECON, and Kinko's was supposed to have made the tutorial hand-outs the Saturday before, which they failed to do. They said it would be ready at 6am on the Monday morning of the tutorials/WiMesh workshop, but it was not ready when I showed up: the binding machine had broken down. I had to pick up whatever they had printed out --which they wanted me to pay full price for, despite it being useless!!!--, go down to Santa Clara to another Kinko's where they did the binding just in time for the 9am tutorial start. At that point, I was rather frazzled. I hate Kinko's.]

It does a good job of presenting the most salient issues, providing the key references, and giving accessible definitions and insights into the concepts. This is a good place to start if interested in directional antennas in mesh networks, a fact that I realized only after getting deep into the literature. My fault: with such a title, I should have put it on top of the pile.

Friday, March 16, 2007

A Topology Control Approach to Using Directional Antennas in Wireless Mesh Networks

A Topology Control Approach to Using Directional Antennas in Wireless Mesh Networks, Umesh Kumar, Himanshu Gupta, Samir Das, in Proc. of IEEE International Conference on Communications (ICC), 2006.

I read the paper as it was referenced in some other paper, and I have to say: I am shocked to discover it is an ICC paper. I attended ICC'06, I even presented a paper there, and this paper is much better than anything I saw there at the time (including my own stuff, to which I am very partial). This paper is very cute.

The idea is simple: what if you want to build a network using directional antennas but don't want to get into writing new MACs (which is a hard thing to do), and just want to use legacy off-the-shelf hardware? This paper presents a way to orient the antennas so as to cover not the whole area around each node, but only a fraction, so as to minimize interference.

The key is to pick the number of antennas and their orientations so as to keep the network connected (as a last resort, use a few omni-directional antennas). The selection algorithm is a spanning tree building algorithm, but its application in this context is really cute and smart. The paper gives up on total connectivity (some links that would exist in an omni-directional context might be unavailable here) but the reduction in interference more than makes up for it. Namely, pick k antennas with total covered angle k·θ, where k·θ is strictly less than 360 degrees.
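A minimal sketch of the flavor of this, under my own simplifying assumptions (beams greedily merged over the tree neighbors' bearings, ignoring angle wraparound at 0/360); the paper's actual algorithm and antenna model are richer:

```python
import math

def beam_centers(positions, tree_edges, theta=60.0):
    """Cover each node's spanning-tree neighbors with beams of width theta;
    connectivity holds because every tree edge gets a beam at both ends."""
    nbrs = {v: set() for v in positions}
    for a, b in tree_edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    beams = {}
    for v, (x, y) in positions.items():
        angles = sorted(math.degrees(math.atan2(positions[u][1] - y,
                                                positions[u][0] - x)) % 360
                        for u in nbrs[v])
        centers = []
        while angles:
            first = angles[0]
            in_beam = [a for a in angles if a - first <= theta]
            centers.append((first + theta / 2) % 360)  # beam center angle
            angles = angles[len(in_beam):]
        beams[v] = centers
    return beams

pos = {1: (0, 0), 2: (1, 0), 3: (0, 1)}
print(beam_centers(pos, [(1, 2), (1, 3)]))  # node 1 needs two 60-degree beams
```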

This is very practical too, as most mesh network deployments don't really care for maximal connectivity, but only care to find a good path to the internet gateway. So this is an elegant and economical solution.

The funny thing is that, despite reducing the number of paths in the network (more exactly, by putting a ceiling on the degree of the nodes), the shortest path is shorter here than the omni-directional shortest path, due to the longer transmission range of the narrow-beam antennas. Also, the PDR really kicks ass over the omni-directional set-up. The results are so impressive that really, this should have been an Infocom paper, not an ICC one. I even checked for Turkish names in the author list as an explanation for why they submitted to a conference in Istanbul.

Ad Hoc Routing Using Directional Antennas

Ad Hoc Routing Using Directional Antennas, Romit Roy Choudhury, Nitin Vaidya. Technical Report.

I believe this technical report is an early version of Performance of Ad Hoc Routing using Directional Antennas, Romit Roy Choudhury, Nitin Vaidya, Journal of Ad Hoc Networks - Elsevier Publishers, November 2004. At least the ideas and most of the content seem similar enough that I will only read the tech report, not the journal paper.

Like the previous paper I wrote about (just below), this one modifies the MAC protocol to support directional antennas, with omni-directional listening and directional transmission of RTS and CTS. It also considers the impact of the directional antenna on routing, namely on the DSR protocol, and how to improve DSR to support directional antennas.

The key insight is that directional antennas improve the performance a lot when the network is random and sparse, and when the directional transmission range is significantly longer than the omni transmission range. Basically, there is a great gain when the transmission beam is narrow and focused. In other setups, the paper does not find a great gain from directional antennas, a rather counter-intuitive result.

The paper is mostly a performance evaluation using QualNet simulation. Also, the paper takes into account the sweeping time, i.e., the extra broadcast delay (for DSR) incurred by transmitting the broadcast packet sequentially over the different sectors (i.e., sweeping), compared to an omni-directional broadcast to all neighbors at the same time.

An interesting idea to reduce the sweeping time is to broadcast not to all neighbors, but only to those in the few sectors opposite the sector on which the broadcast was received. I'm curious to see whether this would not yield route requests which miss the destination (however, it is hard to miss it, since the neighbors of the destination are as good as the destination, and since many RREQs are redundant anyway).

Medium Access Control Protocols Using Directional Antennas in Ad Hoc Networks

Medium Access Control Protocols Using Directional Antennas in Ad Hoc Networks, Young-Bae Ko, Vinaychandra Shankarkumar, and Nitin Vaidya, in Proc. of IEEE Infocom 2000.

The second author's name had me wondering whether I usually write the full name or only the first initial. If you read the paper, see how his (her?) email handle is much more convenient.

The paper is a look into how to modify the 802.11 protocol to support directional antennas. Basically, due to deafness and blind terminal issues, you still need to send omni-directional control packets, but in some situations you should send directional ones, using only one antenna sector to transmit. (Data transmission, on the other hand, should always be directional.) The paper details two schemes using directional control messages.

The paper is relatively simple in its insights, but that's what you get for blazing a new trail: you get the low-hanging fruit first. The directional MACs proposed in the paper (reasonably) assume that the location of the neighbors (in terms of which sector is best to use to transmit to or receive from them) is known.

The two schemes are: 1- using directional RTS (so that you don't block nodes away from where you are transmitting); 2- using both ORTS (omni) and DRTS (directional) depending on the current status of the corresponding sector.
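My reading of the second scheme's decision, as a toy sketch (the names and exact conditions are my paraphrase of the paper's description):

```python
def rts_mode(sector_busy, target_sector):
    """sector_busy: one boolean per antenna sector; pick the RTS type."""
    if not any(sector_busy):
        return "ORTS"     # all sectors idle: an omni RTS is safe
    if not sector_busy[target_sector]:
        return "DRTS"     # only the target direction is free: go directional
    return "defer"        # the needed sector is busy

print(rts_mode([False, True, False, False], target_sector=0))  # -> DRTS
```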

The performance evaluation is rather basic, using a 5x5 grid with some simple scenarios. I would have enjoyed a more realistic set-up. For instance, directional antenna performance is different on a grid, because nodes line up within a sector, which artificially increases the interference. Also, I don't know if the baseline comparison is directional 802.11 or not. It looks like what is being compared is:
  • basic 802.11 with omni-directional antennas; and
  • the proposed MAC with directional antennas.
That is 2 parameters (basic 802.11 vs. proposed MAC, and omni vs. directional) in the simulation, while the results are used to quantify and discuss the impact of only the second parameter.

The interference level in the network for bare 802.11 would already decrease significantly when using directional antennas without modifying the MAC, thus improving the performance. I don't see this made explicit here. It would have been nice to quantify the impact of pure directional antennas, as there is a region of space where the interference of an on-going connection is strong enough to disturb another connection attempt nearby, but not strong enough for the nodes to successfully decode the MAC packets and update the NAV tables. For those, using basic 802.11 with directional antennas and appropriate power control would go a long way toward improving performance.

Monday, March 5, 2007

Information-theoretic bounds for mobile ad hoc networks routing protocols

Information-theoretic bounds for mobile ad hoc networks routing protocols, Nianjun Zhou, Alhussein Abouzeid, Lecture Notes in Computer Science: Information Networking, vol. 2662, pp. 651-61, October 2003.

This paper is a bit frustrating. I am still looking into how much information to store for routing in ad hoc networks (I guess the trilogy of posts below is not over). Anyhow, I found a version of this paper, which is probably a pre-print or the submitted version, or whatnot, but not the final version (which is behind a paywall at Springer).

So it is not as polished as I'd like in order to fully enjoy the paper. For instance, it computes some lower bounds on the amount of information required to keep a proactive routing protocol updated. That is fine, but it should be explained: why is a lower bound enough? What good does it do us?

First, the argument is combinatorial, and based on dividing the network into relays and regular nodes. The amount of information is computed based on how much is required to divide the nodes into different groups and, for each group, to elect a leader.

But it is a lower bound, so it does not tell you the amount of information that is actually required, and it does not say how tight the bound is. It says: you need at least this much, which only tells you that the actual performance is no better than what the bound implies. So the bound can be used to derive a best-case performance, but the version I have does not make that point. For instance, if the bound already implies bad performance, then the actual system will be worse, which might be useful to know.

It makes, on the other hand, comparisons between two lower bounds: "Clearly, a large amount of saving is achieved with prediction". But that is not true: the difference in the bounds is meaningful only if the actual quantity is close to the bound, and here we don't know. It is most likely true that prediction saves a lot, but you cannot deduce this from the bounds. If I give 0 as a lower bound for the predictive case, then my apparent saving increases even more, but I have not obtained a better result!

Anyhow, the combinatorial method is interesting, and the bound could be useful, if used in the right context.

Predictive Caching Strategy for On-Demand Routing Protocols in Wireless Ad Hoc Networks

Predictive Caching Strategy for On-Demand Routing Protocols in Wireless Ad Hoc Networks, Wenjing Lou and Yuguang Fang, in Wireless Networks 8, pp. 671-79, 2002, Kluwer.

This completes the 3-part series on caching (see two posts below). It is a paper which suggests using an adaptive value for the route time-out based on the observed lifetime of the route. Updating the lifetime of the route allows for a finer granularity of caching, and for better performance. It is light and easy reading, and the technical contribution is pretty simple. There is some performance analysis. I found it cited in some other paper, and its reference list has sent me looking for some other older papers I was not aware of (especially in the proactive vs. reactive discussion).

Caching Strategies in On-Demand Routing Protocols for Wireless Ad Hoc Networks

Caching Strategies in On-Demand Routing Protocols for Wireless Ad Hoc Networks, Yih-Chun Hu, David B. Johnson, in MOBICOM 2000, Boston, MA, pp. 231-42.

This paper proves that analytical modeling in ad-hoc network research is way overrated: there is not a single complicated mathematical equation. Even the study of correlation between different metrics seems to be done by observation rather than with sophisticated statistical tools. Yet, the paper does present a pretty compelling picture of different cache strategies. And rather than struggling to define an analytical model and then show it is valid, it cuts out the middleman and shows right away the impact of different parameters using extensive simulation.

The simulation is of course the main part: it uses the CMU extension of the NS-2 simulator, which was developed by this paper's group precisely to produce results like these. It is still the de facto standard for wireless mesh network simulation (it has been integrated into the NS-2 distribution now), so this is as much a part of the legacy of this group as the actual results presented in the paper.

The cache strategies studied are: unlimited cache size, or finite caches of different sizes, different ways of caching depending on which node supplied the route, and different ways of updating the cache (FIFO or not).

The paper finds a better value for the active time-out of the cached routes (the previous paper below would tell you it is velocity dependent, but here they seem to agree on a 5s value for most mobility models). It also shows that a larger cache is not necessarily good, as it means more broken routes are being carried around, which leads to suboptimal performance.

I enjoyed the "omniscient expiration" concept (the strategy which keeps the route as long as it is valid), and will use it in a future paper.

The link to the Monarch project: here.

Optimizing Route-Cache Lifetime in Ad Hoc Networks.

Optimizing Route-Cache Lifetime in Ad Hoc Networks, Ben Liang, Zygmunt Haas, in Proc. of IEEE Infocom 2003.

This paper builds a mathematical framework to analyze the performance of ad hoc networks and the effectiveness of route caches. The framework is straightforward probabilistic analysis (the same tools as queueing theory, I'd say). It finds the residual lifetime of a link (or, rather, the Laplace transform of its distribution), then computes the probability that the link is still up when the route is next used, in which case the delay cost of route discovery is nil; the probability that the entry has timed out (cost = route discovery overhead); and the probability of using an entry corresponding to a broken link (cost = route repair).
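In symbols, the cost structure I read is something like this (my notation, not the paper's), with T the cache time-out, R the residual link lifetime, and S the time until the cached route is next used:

```latex
\mathbb{E}[\text{cost}] =
\underbrace{\Pr[S < T,\, R > S]}_{\text{cache hit, link still up}} \cdot 0
\;+\; \underbrace{\Pr[S \ge T]}_{\text{entry timed out}} \cdot c_{\text{discovery}}
\;+\; \underbrace{\Pr[S < T,\, R \le S]}_{\text{stale entry used}} \cdot c_{\text{repair}}
```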

The analytical result involves solving some polynomial equations and inverting the Laplace transform. A closed-form expression is given for the case where the arrival process and the link breakage process are memoryless (both reasonable assumptions, actually).

The result shows that the optimal route time-out is independent of the packet arrival process (which is actually quite obvious when you think about it: you should keep the route as long as the network topology is valid, which is independent of the packet arrivals).

The non-technical note: Ben Liang was TPC program co-chair for IEEE MASS'06. I believe that Ben's advisor and co-author here, Zygmunt Haas, was TPC chair, with Jennifer Hou, if my memory is correct. Ben is now on the faculty at UToronto, and as publication chair for the conference, I ended up interacting with him quite a bit. We actually got quite a few new features implemented in EDAS to support our needs (e-copyrights, and a way to export data to fit the proceedings CD processing company's requirements, thanks to Henning's willingness and responsiveness). I feel I should have known about this paper then.

Wednesday, February 28, 2007

Losing Opportunism: Evaluating Service Integration in an Opportunistic Wireless System.

Losing Opportunism: Evaluating Service Integration in an Opportunistic Wireless System. Hongseok Kim, Gustavo de Veciana, to appear in Proc. of IEEE Infocom'07.

The Infocom'07 program has been announced and you can find some of the papers here and there if you google around. I read this one, as I have been interested in opportunism and its benefits. I believe that the main value of opportunism lies in its ability to accommodate mobility: by having a flexible list of potential suitors to relay your packet, you can adapt the list to the topology dynamically. This paper somewhat pokes holes in the other benefit of opportunism: bandwidth gain.

The idea of the paper (and I will not go into the technical details, which are, unlike the intuition behind them, rather complicated) is that there is a benefit to using opportunistic mechanisms, which has been quantified in single-hop, cellular-like wireless systems. Basically, by picking the user with the best instantaneous condition, that user will be out-doing its average rate, and you will increase the capacity of the system. You can quantify how much this is true.
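A toy Monte Carlo illustration of that gain (mine, not the paper's model): serving the instantaneously best of N users beats serving any fixed user with the same average rate.

```python
import random

N, trials = 8, 10000
opportunistic = sum(max(random.expovariate(1.0) for _ in range(N))
                    for _ in range(trials)) / trials
fixed = sum(random.expovariate(1.0) for _ in range(trials)) / trials
# E[max of 8 exp(1) rates] = H_8 ~ 2.72, vs. a fixed user's mean of 1.0
print(opportunistic, fixed)
```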

However, this comes at the cost of fairness: picking the best guy at some point in time also means not picking some other guy, potentially for extended periods of time. This is where Proportional Fairness, for instance, comes in. This is also where this paper comes in, but from a different angle: it asks not how to provide fairness to the users, but what happens if a user demands some type of service, i.e., a certain level of QoS.

Users with QoS require the channel to be allocated to them whether or not a better (best effort) transmission candidate exists. The paper provides an analytical model to describe this, and some simulation results (based on their analysis). Basically, it shows that the stability region is reduced by QoS users (in the extreme, you cannot provide more than your average capacity C to a QoS user, because the opportunistic gain depends on other users being there; so if you have N channels, the capacity is CN with N QoS users, instead of the much greater capacity of an opportunistic system).

It is very intuitive that providing strict QoS requirements will cause a loss of flexibility in the choice of the node to communicate with, and thus a degradation of the performance that this flexibility provides. It is good to have an analytical framework to quantify this. I would say it is best paper award territory (based on my very partial view of the Infocom'07 paper stack).

It is particularly important to have this kind of intuition described clearly: the impact of a single QoS user is a great degradation in system performance. One application of wireless mesh networks is to replace broadband services and to provide VoIP over these networks (say, as a way for a city to repay the wireless network investment by switching its phone bill away from cellular operators to VoIP over the handset). Well, if your scheduling algorithm in the backhaul uses some similar kind of opportunism, the effect this paper describes starts to kick in: either you give higher priority to your VoIP flows, and then you lose the opportunistic gain; or you don't, and then you get unreliable performance for your VoIP calls. It boils down to a network design issue, but it seems to say that you cannot bank the opportunism gain too heavily. Something to keep in mind.

Tuesday, February 27, 2007

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden terminals Using A Single Transceiver.

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden terminals Using A Single Transceiver. J. So, N. Vaidya, in Proc. of ACM MobiHoc 2004, Roppongi, Japan.

I read the paper, but it was a short read: it has been cited, and its principles explained, in a few other articles, so I basically just had to check the paper existed. The read was fast: I had an idea what it was about, and I did not have to be convinced or scratch my head too much. Plus, the idea is simple and elegant, as more often than not with Nitin.

What is not usually explained elsewhere: the actual channel selection, i.e., how two nodes decide during the ATIM window to pick one channel over another (I kinda assumed it would be round-robin or random or something uniform like that; it is actually based on the number of nodes that previously requested the channel for transmission, and on a preference list based on channel quality). And the performance analysis: mostly a comparison with DCA, a multi-channel MAC with multiple interfaces (MMAC uses only one interface). Good stuff all around.
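As a toy sketch of that channel pick (my rendering; the paper's Preferable Channel List logic has more states than this), favor the least-reserved channel and break ties by perceived quality:

```python
def pick_channel(reservation_counts, preference_order):
    """reservation_counts: channel -> pairs already agreed on it this
    ATIM window; preference_order: channels from best to worst quality."""
    rank = {c: i for i, c in enumerate(preference_order)}
    return min(reservation_counts, key=lambda c: (reservation_counts[c], rank[c]))

print(pick_channel({1: 2, 2: 0, 3: 1}, [3, 1, 2]))  # -> 2, the least loaded
```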

The unrelated note: Roppongi means 6 trees.

Optimizing the Placement of Internet TAPs in Wireless Neighborhood Networks

Optimizing the Placement of Internet TAPs in Wireless Neighborhood Networks, R. Chandra, L. Qiu, K. Jain, M. Mahdian. ICNP'04.

This is strange: the version I downloaded has the first two authors inverted. It is titled Optimizing the Placement of Integration Points in Multi-Hop Wireless Networks, but it really looks like the same thing. Maybe a journal version or something. I am putting up the reference to the actual ICNP'04 proceedings, but I read the other version.

Anyhow, the paper is a follow-up to On the Placement of Web Server Replicas, which Lily Qiu also co-authored. The funny thing is that I am basically following the same thought process: a student is looking into designing CDNs for wireless mesh networks, and has me reading this literature so I can be useful. Then I think: hey! CDN servers, or wired gateways, same thing, that's where the data is coming from. So on from CDN servers to "integration points" and "wired access points", which led to a Google search, where we found this. Only 3 years too late, of course.

The paper follows the same architecture as the previous one: convince the reader that you cannot really solve the problem (it's NP-hard), then provide a set of heuristic strategies and test and compare them. The strategies are similar: random, greedy, etc. Of course, the wireless set-up introduces all kinds of kinks which make the paper interesting and the contribution significant over the previous work.

There is a proof of the closeness of the strategy to the optimal solution, and a great match between a computable lower bound and the greedy strategy (which together bookend the optimal strategy), so the solutions proposed here work well.

Having said enough good things about the paper, I can share my little giggle. My version of the paper includes the following sentence:

In comparison [to TDMA scheduling schemes], in this paper we look at more general and efficient MAC schemes such as IEEE 802.11

More general, maybe, but more efficient? The MAC layer ends up being abstracted away soon afterwards, so it does not really matter.

Monday, February 26, 2007

Multichannel MAC Protocols for Wireless Networks.

Multichannel MAC Protocols for Wireless Networks, Ritesh Maheshwari, Himanshu Gupta, Samir Das, in Proc. of IEEE SECON 2006, Reston, VA, Sept. 2006.

This paper I don't think I saw at the conference, even though I attended and presented a paper. I had to leave early to go to another conference; maybe that's how I missed it.

Anyhow, now I have caught up with it, and it is one of those papers that are nice to read in the morning, as you don't need too much caffeine to grasp what it is about: the paper presents two protocols to access multiple channels with a single transmission interface (and, in one case, a busy tone interface). The protocols are intuitive enough, the motivations are clear, and the validation shows the promised improvement. There is no analysis, just a simulation validation, so it is 8 pages of light reading.


The two protocols described are variations of other protocols, which makes the paper an interesting source of context for those other protocols. The two protocols they improve upon are RDT and MMAC. MMAC is a protocol which uses a mechanism similar to the power save mechanism of 802.11 to define a control time slot and a transmission phase. The control slot is used for the nodes to agree on which channel to switch to for transmission. Nodes need to be synchronized to know when the control time slot occurs and when the transmission phase ends. All nodes meet on the same channel during the control slot.

RDT (Receiver Directed Transmission) is a protocol in which each node selects a "quiescent channel", the channel on which it will receive transmissions. A node switches to the quiescent channel of the receiver when it wishes to transmit to that receiver. Allocation of quiescent channels can be done off-line or dynamically.
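The RDT idea fits in a few lines; a minimal sketch under my own naming (the send callback and the off-line channel directory are illustrative placeholders):

```python
class RdtNode:
    def __init__(self, node_id, quiescent, directory):
        self.id = node_id
        self.quiescent = quiescent   # channel where this node listens
        self.directory = directory   # node -> quiescent channel
        self.tuned = quiescent

    def transmit(self, dest, packet, send):
        self.tuned = self.directory[dest]  # hop to the receiver's channel
        send(channel=self.tuned, to=dest, data=packet)
        self.tuned = self.quiescent        # come home to listen again
```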

One more protocol is mentioned in the paper: DCA (Dynamic Channel Assignment), a two-interface protocol. One interface is the control interface, used by the nodes to schedule which channel they will switch their other interface to; that other interface is used for the actual transmissions.

Ok: that's the background. The contribution: the paper introduces xRDT, a version of RDT extended with a busy tone interface (a simpler-to-implement interface) and an explicit notification of the end of a transmission, so as to reduce the deafness of the terminals (a terminal is deaf when not on its quiescent channel; a would-be transmitter needs to explicitly know when its intended receiver returns to its quiescent channel).

The paper also introduces another protocol, Local Coordination-based Multichannel MAC (LCM-MAC), which works on a single interface and is a self-clocking (i.e., nodes synchronize themselves) version of MMAC. Nodes define a transmission schedule which includes a control phase and a data transmission phase. The first node to win the race to set up the schedule wins, and the others act according to that schedule.

Performance analysis is straightforward. The only odd part is that RDT is not included (DCA, MMAC and the two new protocols, xRDT and LCM-MAC, are). I would have assumed that xRDT's claimed improvement over RDT would be subject to validation. Other than that, it is only a simulation, which shows the benefit of multiple interfaces over a single interface, and the benefit of the new protocols over the legacy protocols.

Wednesday, February 21, 2007

MAC-Layer Anycasting in Ad Hoc Networks.

MAC-Layer Anycasting in Ad Hoc Networks, Romit Roy Choudhury and Nitin Vaidya, ACM SIGCOMM Computer Communication Review, Vol. 34, No. 1, Jan. 2004.

This is a paper I should have read a long time ago about multi-hop wireless mesh networks. It is deceptively simple: the idea is to create a set of equivalent paths from the source to the destination using the routing table information (the paper assumes a table-driven routing protocol) and a hop-count metric. If the metric is such that no routes are strictly equivalent, routes could also be ordered and the set created from the list of routes satisfying a given criterion, say within some factor of the best path according to that metric.

Anyhow, once the route set is created at the network layer, the packet is given to the MAC layer with a list of potential candidates. The MAC layer then chooses which candidate to use based on the information available at the MAC layer. So if there are two possible candidates to send the packet to, the MAC chooses the one with the best conditions (for example, availability, if one has sent a CTS and the other has not; or link rate and link conditions).

So the packet's route is not set a priori, but evolves as the packet moves along. The packet is not duplicated; there is still only one copy forwarded at each hop, but it makes use of the diversity available at each hop at that point in time.
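A minimal sketch of that MAC decision (names and the scoring are mine, purely illustrative):

```python
def pick_next_hop(candidates, link_state):
    """candidates: equivalent next hops handed down by the network layer;
    link_state: neighbor -> (available, link_rate). Prefer a ready
    neighbor with the best rate, falling back to the full set."""
    ready = [c for c in candidates if link_state[c][0]]
    pool = ready or candidates
    return max(pool, key=lambda c: link_state[c][1])

print(pick_next_hop(["n1", "n2"], {"n1": (False, 54.0), "n2": (True, 11.0)}))  # n2
```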

A few notes: the benefit is not evaluated at all in this paper, which is a huge surprise to me. It is typically a requirement from any reviewer I have encountered. I am sure there is a subsequent performance evaluation somewhere; I just haven't looked for it.

But it would actually be interesting to see the performance improvement. On a short path, there would not be any. And on a long path, the performance is degraded by the fact that all the alternative paths are long. It might improve on a basic routing protocol, but it would be improving on poor performance in the first place. Anyhow, I should look for that performance evaluation follow-up, which must be somewhere.

The paper discusses using reactive protocols, such as DSR, but the solution is actually rather convoluted: instead of collecting a route, the RREQ-RREP exchange must collect a whole tree rooted at the destination (for the uplink) so that each node can know the place of its neighbors in the relaying structure toward the destination. That means each RREQ-RREP must be broadcast more than strictly necessary, and extra flooding is usually viewed as a bad thing.

Anyhow, this is really light reading. It is only the description of the idea, with no analysis nor performance evaluation, so it is a quick and easy and interesting read.

On the Placement of Web Server Replicas

On the Placement of Web Server Replicas, L. Qiu, V. Padmanabhan, G. Voelker, in Proc. of IEEE Infocom 2001, pp. 1587-96.

This paper is definitely the inspiration for this one or the other way around. One is a journal paper from 2002 and the other a conference paper from 2001, so I would have to see which one came out when.

Anyway, it covers much of the same ground: placing web server replicas to form a Content Distribution Network (CDN) is a hard problem, and the paper presents some heuristic algorithms to solve it over an AS topology (and other topologies).

The goal is to improve on the result of a paper which derives the optimal placement on a tree topology (B. Li, M. Golin, G. Italiano, X. Deng, "On the Optimal Placement of Web Proxies on the Internet", Infocom 1999): the internet is a mesh, not a tree, so using the tree-optimal placement results in a sub-optimal internet placement.

The heuristics are similar to here: random, greedy (pick the best spot for a replica to minimize the cost, then iterate taking into account the already placed replicas), hot spot (put the replicas near the sites which generate the most traffic) and a super-optimal algorithm (super-optimal not because it is really super-duper, but because it might not be feasible, and thus might perform better than the optimal; it is computed using a Lagrangian relaxation of the integer program defined by the problem).
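The greedy heuristic, in particular, is a one-screen affair; a minimal sketch as I understand it (my rendering, with a plain cost matrix):

```python
def greedy_placement(dist, clients, sites, k):
    """dist[c][s]: cost from client c to candidate site s. Repeatedly add
    the site that most reduces the total cost given replicas already placed."""
    placed = []
    for _ in range(k):
        def total_cost(extra):
            return sum(min(dist[c][s] for s in placed + [extra]) for c in clients)
        best = min((s for s in sites if s not in placed), key=total_cost)
        placed.append(best)
    return placed

dist = {"c1": {"s1": 1, "s2": 5}, "c2": {"s1": 4, "s2": 1}}
print(greedy_placement(dist, ["c1", "c2"], ["s1", "s2"], k=1))  # -> ['s1']
```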

There is a neat evaluation section, over a lot of different topologies, which shows that the greedy algorithm behaves pretty well. One really cool link from the paper is the NP problem compendium web page. I mean, once you think to add the tilde to the reference provided in the paper.

Tuesday, February 20, 2007

ROMER: Resilient Opportunistic Mesh Routing for Wireless Mesh Networks.

ROMER: Resilient Opportunistic Mesh Routing for Wireless Mesh Networks. Y. Yuan, H. Yang, S. Wong, S. Lu, W. Arbaugh, WiMesh workshop, IEEE SECON 2005.

This paper I saw presented at the 1st WiMesh workshop in Santa Clara in 2005, but I just re-read it recently. I might even have chaired the session it was in. The workshop consisted of invited papers, I believe, and the chairs selected well.

Anyhow, the crux of the paper is: give a packet an amount of credit equal to the cost C(S) of transmission times a constant. The cost C(S) is the cost to route from the source S to the destination. At each hop, pick K relays to forward the packet, so that each relay satisfies credit ≥ cost, where credit is the updated credit (i.e., the initial credit minus the cost used up to this point) and cost is the remaining cost from that node to the destination.

The K relays ensure the resilience, as you have a lot of packets going forward. As for the opportunism, it comes in the form of picking the K best relays according to current conditions, that is: among the potential candidates, pick the K with the highest instantaneous rate conditions.
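My reading of the forwarding rule, as a sketch (all names mine; the paper's credit accounting may differ in detail):

```python
def eligible_relays(neighbors, credit, spent, cost_to_dest, rate, K):
    """A relay qualifies if the packet's remaining credit covers that
    relay's remaining cost to the destination; keep the K best rates."""
    remaining = credit - spent
    ok = [n for n in neighbors if remaining >= cost_to_dest[n]]
    return sorted(ok, key=lambda n: rate[n], reverse=True)[:K]

print(eligible_relays(["a", "b", "c"], credit=10, spent=4,
                      cost_to_dest={"a": 5, "b": 7, "c": 3},
                      rate={"a": 54, "b": 11, "c": 24}, K=2))  # ['a', 'c']
```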

There are some good ideas, and some stuff that I don't necessarily agree with. The credit-based system is simple, but needs nodes to have a cost to each destination, which seems a bit unscalable. Actually, it depends on how you define the credit and the cost, as a hop-count-based cost is scalable in some situations.

The K-resiliency seems to me a strange way to optimize the use of the air interface. For instance, the paper does not specify how the K candidates receive the packet. Is it K unicast transmissions? In that case it is pretty sub-optimal. If it is one broadcast, how do you indicate the K recipients? How do they acknowledge? It runs into the same issues as ExOR, basically, without providing the answers.

And then, this K-relaying introduces some race conditions in competing for the channel, between the 2nd transmission of the packet to the 2nd-best neighbor, and the 1st neighbor's transmission of the same packet one hop downstream. Again, solutions have been proposed (staggered ACKs and re-transmissions) but this would need to be solved explicitly before an actual implementation.

Anyway, this is an interesting read; it triggers lots of good questions and ideas. I did not expect much from the WiMesh workshop as it was a new meeting, but I have now been to both editions, and it has been pretty good every time.

SenSys'05 conference.

Some notes from the SenSys'05 conference. I attended only tangentially (I sat in on the first keynote, but had to leave right afterwards).


  • Radio Interferometric Geolocation. Miklos Maroti, Branislav Kusy, Gyorgy Balogh, Peter Volgyesi, Andras Nadas, Karoly Molnar, Sebestyen Dora, Akos Ledeczi (Vanderbilt University)
    This paper might be very interesting for multiple applications outside of the sensor network world, despite sensors being its primary focus. The idea is to send two signals with slightly different frequencies from two sources, and to observe the phase of the combined signal at the receiver. Based on the phase, the receiver can obtain its distance from the sources. This allows performing geolocation (localization, positioning) at little cost, since there is no need for GPS or dedicated (Cricket-like) hardware.
    Application is obvious for positioning of a handset (caveat: I am not aware of the techniques currently used; it might already be something similar). This could be used for indoor localization, or for emergency services localization. The technique is simple enough that it could be easily implemented on a handheld device.


  • High-Accuracy, Low-Cost Localization System for Wireless Sensor Network. Radu Stoleru, Tian He, John A. Stankovic, David Luebke (University of Virginia)
    This is a paper on localization, which, unlike the previous one, has no applicability to cellular networks. It uses dedicated hardware to measure the distance of the distributed sensors. Localization cannot be performed in real-time, and requires the sensor nodes to be static.

  • A New Approach for Establishing Pairwise Keys for Securing Wireless Sensor Networks. Arno Wacker, Mirko Knoll, Timo Heiber, Kurt Rothermel (Universität Stuttgart)
    I have not read this paper; its applicability to my work seems limited.

  • TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs. Peter Desnoyers, Deepak Ganesan, Prashant Shenoy (University of Massachusetts Amherst)
    I have not read this paper in detail; it is about distributed storage and a hierarchical architecture to perform storage, not unlike a distributed hash table.

  • A Macroscope in the Redwoods. Gilman Tolle, Joseph Polastre, Robert Szewczyk, Neil Turner, Kevin Tu, Stephen Burgess (UCB), David Gay, Phil Buonadonna, Wei Hong (Arch Rock Corporation), Todd Dawson, David Culler (UCB)
    This paper describes an experimental approach for observing an ecosystem using a sensor network. I saw most of the material presented in this paper during a keynote talk by David Culler at the MobiHoc 2005 conference. The sensor network is used to extract data streams, such as the temperature and humidity in a redwood grove as a function of the elevation in the tree. Very interesting as an environment monitoring tool.

  • Design and Deployment of Industrial Sensor Networks: Experiences from the North Sea and a Semiconductor Plant. Robert Adler, Phil Buonadonna, Jasmeet Chhabra, Mick Flanigan, Lakshman Krishnamurthy, Nandakishore Kushalnagar, Lama Nachman, Mark Yarvis (Intel)
    This is the poster child for sensor networks in industrial monitoring applications. It is the example given by Intel people as to what the application for sensor networking is. I was surprised to see this presented here, as I was convinced it had been presented already. Basically, it describes the deployment of a sensor network for monitoring instrumentation on an oil tanker, and for monitoring vibrations in a silicon wafer manufacturing plant. In both cases (where the cost of wiring is extremely high), the wireless sensor network proves superior.

  • A Unifying Link Abstraction for Wireless Sensor Networks. Joseph Polastre, Jonathan Hui, Philip Levis (University of California, Berkeley), Jerry Zhao (ICSI Berkeley), David Culler (University of California, Berkeley), Scott Shenker (ICSI Berkeley and University of California, Berkeley), Ion Stoica (University of California, Berkeley)
    This paper argues that the "narrow waist" of the protocol stack for sensor networks should be at the link layer, and not at the network layer, as it is in IP.

  • Z-MAC: A hybrid MAC for wireless sensor networks. Injong Rhee, Ajit C. Warrier, Mahesh Aia, Jeongki Min (NCSU), Prashant Patel (Progress Energy)
    I did not look into the details of the protocols, but it looks like another TDMA-CSMA hybrid. It might be interesting, but it is definitely not exciting.

  • Packet Combining in Sensor Networks. Henri Dubois-Ferriere (EPFL), Deborah Estrin (UCLA), Martin Vetterli (EPFL)
    The very good idea of the paper is relatively simple: when transmitting a packet over multiple hops, say from A to B to C, C might overhear some of the transmission from A to B. Thus it does not require a perfect transmission from B to C, but just enough so that it can reconstruct the packet. Of course, this uses coding, so that the packet from A to B is the original data, and the packet from B to C is a packet of the same length made of parity bits, which allows more efficient decoding at C than just transmitting the original packet twice, from A to B and from B to C.

    [update: these notes were written in 2005. But as I put them here, I see that I just read a paper by Ed Knightly's group to appear in Infocom'07 which puts out a very similar idea.]

  • Siphon: Overload Traffic Management using Multi-Radio Virtual Sinks. Chieh-Yih Wan (Intel Research), Shane B. Eisenman (Columbia University), Andrew T. Campbell (Dartmouth College), Jon Crowcroft (Cambridge University)
    The idea is to detect congestion in the network, which is typically due to a funneling effect: all the traffic fans in towards the data sink. To combat this fanning-in issue, another sink, using a different radio connection towards the initial sink, can be placed in the network to redirect traffic.
    This is a relatively obvious result: you add capacity where you need it. I fear it might be a technique with limited applicability: coming up with new hardware to reduce traffic in a hot spot might make sense in a static sensor network, but not in a dynamic environment. I guess it all depends on the granularity of the time scales.

  • Estimating Clock Uncertainty for Efficient Duty-Cycling in Sensor Networks. Saurabh Ganeriwal (University of California Los Angeles), Deepak Ganesan (University of Massachusetts), Hohyun Shim, Vlassios Tsiatsis, Mani B. Srivastava (University of California Los Angeles)
    I did not go into the details of this one. Power saving techniques might have applications outside of the sensor network universe, but it seems cellular clock sync is working. It also seems that the dynamic aspect of the cellular environment might prove too challenging for these techniques.

  • Firefly-Inspired Sensor Network Synchronicity with Realistic Radio Effects. Geoffrey Werner-Allen, Geetika Tewari, Ankit Patel, Matt Welsh, Radhika Nagpal (Harvard University)
    This paper presents a node synchronization protocol based on locking in phase with one's neighbors, a process similar to the dance of fireflies. It is cute, but the real-world applicability seems far away.

  • Data Collection, Storage and Retrieval with an Underwater Optical and Acoustical Sensor Network. Iuliu Vasilescu, Keith Kotay, Daniela Rus (Massachusetts Institute of Technology), Peter Corke, Matthew Dunbabin (CSIRO Australia)
    Under-water sensor networks are a challenging application, but not of much interest to me.

  • MAX: Human-Centric Search of the Physical World. Kok Kiong Yap, Vikram Srinivasan, Mehul Motani (National University of Singapore)
    This is a paper about allowing localization of objects using a taxonomy which is easily understandable by a human user. This might be of interest to the UI community.

  • CenWits: A Sensor-Based Loosely-Coupled Search and Rescue System using Witnesses. Jyh-How Huang, Saqib Amjad, Shivakant Mishra (University of Colorado, Boulder)
    The paper describes a search and rescue system which functions as follows: people wear tags (802.11-based, but one assumes RFID could be an alternative technology); some readers are distributed in the environment and take readings of the tags. When a catastrophe occurs (say, an avalanche), the tag readings are used to locate the users.
    There is a huge privacy issue here. On the other hand, the system could be integrated into a cell phone, so that the 'tag' becomes the handheld device. I guess it is practical for people at the University of Colorado to come up with avalanche rescue systems. I wish I could work there, the experimentation must be fun!

  • Cyclops: In Situ Image Sensing and Interpretation in Wireless Sensor Networks. Mohammad Rahimi (Center for Embedded Networked Sensing - UCLA), Rick Baer (Agilent Technology), Obimdinachi I. Iroezi, Juan C. Garcia (Center for Embedded Networked Sensing - UCLA), Jay Warrior (Agilent Technology), Deborah Estrin, Mani Srivastava (Center for Embedded Networked Sensing - UCLA)
    This is a sensor network platform which carries a low resolution digital camera, i.e., a so-called image sensor. It was made in partnership with Agilent Technology.

  • Lightweight Detection and Classification for Wireless Sensor Networks in Realistic Environment. Lin Gu (University of Virginia), Dong Jia (Carnegie Mellon University), Pascal Vicaire, Ting Yan, Liqian Luo, Tian He (University of Virginia), Ajay Tirumala (University of Illinois at Urbana-Champaign), Qing Cao, John. A. Stankovic, Tarek Abdelzaher (University of Virginia), B.H. Krogh (Carnegie Mellon University)
    This paper I did not read in detail.

  • Intelligent Light Control using Sensor Networks. Vipul Singhvi, Andreas Krause, Carlos Guestrin, Jim Garrett, H. Scott Matthews (Carnegie Mellon University)
    I think the title says it all: it is a sensor network which controls a lighting system. It seems very application specific.

  • Algorithms for Generic Role Assignment in Wireless Sensor Networks. Christian Frank, Kay Roemer (ETH Zurich)
    This paper (and the next two as well) I did not go into in detail, as it deals with software support for sensor networks, a topic I am not familiar with.

  • VM*: A Scalable Runtime Environment for Sensor Networks. Joel Koshy, Raju Pandey (University of California, Davis)
    I did not go into the details of this paper, as it deals with software support for sensor networks, a topic I am not familiar with.

  • Sympathy for the Sensor Network Debugger. Nithya Ramanathan, Kevin Chang, Lewis Girod, Rahul Kapur, Eddie Kohler, Deborah Estrin (UCLA)
    I did not go into the details of this paper, as it deals with software support for sensor networks, a topic I am not familiar with.

Anti-Packets in Wireless Multi-Hop Network

The Anti-Packets Can Increase the Achievable Throughput of a Wireless Multi-Hop Network, by Petar Popovski and Hiroyuki Yomo, of Aalborg University, ICC'06.

The idea is very simple, and it is one of the few applications of network coding that I have seen which may actually work. Surprisingly enough, the authors don't seem to know what network coding is; at least, they don't mention it.

This paper came out at about the same time as the COPE paper from Dina Katabi's group at MIT, which presents the same idea with a slightly different approach (COPE is more general). I would guess that, due to the MIT reputation and the publication forum, the citation count for COPE will outnumber that of this paper by a factor of 100, even though both had the same idea at about the same time.

Assume a multi-hop wireless network with 3 nodes, A, B and C. A needs to send a packet AC to C, and C a packet CA to A (both packets of the same length, and transmissions are errorless). The current way of doing it takes 4 time slots: 1-AC from A to B; 2-CA from C to B; 3-AC from B to C; 4-CA from B to A.

The idea of the paper is to combine slots 3 and 4, using the fact that A and C both hear B: the last step becomes 3-B broadcasts ACA = XOR(AC,CA) to both A and C.

A, using ACA and AC, can recover CA, and similarly C can recover AC. The gain in bandwidth is 33%.
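
As a toy illustration of the exchange (my own sketch, with made-up packet contents; not code from the paper):

    # Toy illustration of the XOR anti-packet exchange.
    ac = bytes([0x12, 0x34, 0x56])  # packet from A destined to C (made up)
    ca = bytes([0xAB, 0xCD, 0xEF])  # packet from C destined to A (same length)

    # Slot 1: A -> B; slot 2: C -> B. Slot 3: B broadcasts the XOR of both.
    aca = bytes(p ^ q for p, q in zip(ac, ca))

    # A knows its own packet ac, so it recovers ca; C symmetrically recovers ac.
    assert bytes(p ^ q for p, q in zip(aca, ac)) == ca
    assert bytes(p ^ q for p, q in zip(aca, ca)) == ac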

The throughput can be doubled in an analog manner: if A and C transmit at the same time (slot 1), then B hears a signal composed of both A's and C's components, A+C. Assuming B hears them both with the same power, it can just replay in slot 2 the signal it heard in slot 1; it acts only as an amplifier. A hears A+C and can subtract its own component to recover the packet from C, and C similarly. The bandwidth gain is 100%.
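
And the analog variant, again as a toy of my own (ideal noiseless channels and equal receive powers assumed):

    import numpy as np

    # Slot 1: A and C transmit simultaneously; B hears the superposition.
    sig_a = np.array([1.0, -1.0, 1.0, 1.0])    # BPSK symbols from A (made up)
    sig_c = np.array([-1.0, -1.0, 1.0, -1.0])  # BPSK symbols from C (made up)
    heard_at_b = sig_a + sig_c

    # Slot 2: B replays what it heard; each side subtracts its own part.
    assert np.allclose(heard_at_b - sig_a, sig_c)  # A recovers C's signal
    assert np.allclose(heard_at_b - sig_c, sig_a)  # C recovers A's signal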

There are a lot of implementation issues and assumptions that should be debated (it requires symmetric traffic from A to C and from C to A, at least bit-wise; synchronization; error recovery mechanisms look more complicated; etc.), but the main idea is quite sexy.

Cooperative Communication in Wireless Networks

Cooperative Communication in Wireless Networks, Aria Nosratinia, Todd E. Hunter, Ahmadreza Hedayat, IEEE Communications Magazine, October 2004. Authors with UT Dallas, and Nortel Networks.

The paper is a survey of the field of cooperative networks. Surprisingly enough, it is a short survey, meaning the field is still very new: it has only 11 references. The survey traces the origin of cooperative networking back to the paper of Cover and El-Gamal (Capacity Theorems for the Relay Channel, IEEE Trans. Info. Theory, 1979).

The survey categorizes four different types of cooperation: detect and forward; amplify and forward; decode and forward; coded cooperation.
  • The first one pairs up nodes; each node attempts to detect its partner's bits and then retransmits the detected bits.
  • Amplify and forward is more of an analog version of the first one: the partner receives a signal and amplifies it. The receiver then decodes based on the signals received from the source and from the source's partner (see the sketch after this list).
  • Decode and forward is a variation of the first one, in which the partner encodes the packet from the source differently, so that the receiver has two different encodings to reconstruct the packet from (similar to the SenSys paper by Dubois-Ferriere here, where I found this reference).
  • Coded cooperation: each user sends a different portion of each user's code word via two independent fading paths. I guess, at its simplest, it reduces to the decode-and-forward above.
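
To make amplify-and-forward concrete, here is a minimal numpy sketch under idealized assumptions of my own (BPSK, AWGN only, a fixed noise level, and crude equal-gain combining rather than true maximal-ratio combining); with no fading it only shows the noise-averaging benefit, not the diversity gain the survey emphasizes:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    bits = rng.integers(0, 2, n)
    x = 2.0 * bits - 1.0                    # BPSK symbols, +/-1
    sigma = 0.8                             # noise level (assumed, not from the survey)

    # Slot 1: source transmits; destination and relay each hear a noisy copy.
    y_direct = x + sigma * rng.normal(size=n)
    y_at_relay = x + sigma * rng.normal(size=n)

    # Slot 2: amplify-and-forward -- the relay scales its observation back
    # to unit power and retransmits; more noise is added on the second hop.
    g = 1.0 / np.sqrt(1.0 + sigma**2)
    y_relayed = g * y_at_relay + sigma * rng.normal(size=n)

    # Destination combines both observations (crude equal-gain combining).
    y_combined = y_direct + y_relayed

    ber_direct = np.mean((y_direct > 0) != bits)
    ber_coop = np.mean((y_combined > 0) != bits)
    print(f"BER direct: {ber_direct:.4f}  BER cooperative: {ber_coop:.4f}")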


Performance is strongly improved by cooperation in terms of block error rate.

The paper is a low-level technical read (no complex analysis or concepts) and clearly presented. It'd be interesting to know what applications --if any-- Nortel has in mind for this work.

Characterisation of the Performance of Cooperative Networks in Ricean Fading Channels

J. Adeane, M. R. D. Rodrigues and I. J. Wassell, Characterisation of the performance of cooperative networks in Ricean fading channels, Proceedings of the International Conference on Telecommunications, Cape Town, South Africa, May 2005.

Another paper on my list of cooperative networking papers. This one looks at the performance in Ricean fading channels, and considers a simple network topology with two cooperating nodes trying to send data to a single destination, ie. the source-relay-destination triad.

The benefits of cooperation depend on the channel quality between the source and destination and between the source and relay. The paper actually shows a regime in which cooperation degrades performance (ie. the information provided by the relay to the destination does not improve on that provided by the source, but interferes with it or contradicts it).

The paper is relatively simple and the results are relatively straightforward, but a good read nonetheless.

Two papers on ad hoc routing

These two papers were suggested by a reviewer of a paper of mine regarding opportunistic routing. The reviewer associated these papers with my results, which means I should have differentiated my results better, as it is pretty obvious to me they have little in common.

Anyhow, I have now read the papers:
AODV-BR: Backup Routing in Ad Hoc Networks, Sung-Ju Lee and Mario Gerla, in Proceedings of IEEE WCNC 2000.

On-Demand Multipath Distance Vector Routing in Ad Hoc Networks, Mahesh Marina, Samir Das, in Proceedings of IEEE ICNP, November 2001.

AODV-BR: Backup Routing in Ad Hoc Networks, Sung-Ju Lee and Mario Gerla, in Proceedings of IEEE WCNC 2000.

This paper introduces an extension of AODV which allows for graceful recovery when a link breaks. It uses the query-reply mechanism of the route discovery to build additional backup routes at nodes adjacent to the primary path.

Once a link on the route breaks, it can be temporarily recovered using the alternate path, while a route error is sent to the source and a new route discovery is triggered.

The idea is simple and practical, and works fine. It does bring an improvement over AODV. The key difference with opportunistic routing is that the alternate path is used only after a link breaks, and not otherwise. This means that if the route is reliable but better links appear later on, these are ignored.

On-Demand Multipath Distance Vector Routing in Ad Hoc Networks, Mahesh Marina, Samir Das, in Proceedings of IEEE ICNP, November 2001.

This is similar to the previous paper, except that it computes link-disjoint paths between source and destination. As above, the query-reply mechanism of the route discovery is used to set up the multiple link-disjoint paths (as opposed to just associating nodes along the path, as in the previous paper). In case of link failure, the traffic fails over to another path.

Again, it obviously outperforms AODV. Again, it does not make use of the promiscuous nature of the air interface to forward packets further.
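
A minimal sketch of the failover idea common to both papers (toy data structures of my own, not the actual protocol state):

    # Keep alternate next hops and fall back when the primary link breaks.
    routes = {
        "D": ["B", "E", "F"],  # next hops toward destination D, primary first
    }
    broken_links = {("S", "B")}  # links currently known to be down at node S

    def next_hop(me: str, dest: str) -> str | None:
        """Return the first alternate whose link is still up, else None
        (which would trigger a route error and a fresh route discovery)."""
        for hop in routes.get(dest, []):
            if (me, hop) not in broken_links:
                return hop
        return None

    print(next_hop("S", "D"))  # -> "E": traffic fails over past the broken link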

Opportunistic Multi-Hop Routing for Wireless Networks

These are some old notes, but I am consolidating them here.

Opportunistic Multi-Hop Routing for Wireless Networks, Sanjit Biswas, Robert Morris, SIGCOMM 2005. Best Paper Award.

This is a continuation of the work of Biswas and Morris, Opportunistic Routing in Multi-Hop Wireless Networks, HotNets-II, 2003. The previous work is the one which triggered my own work on opportunistic routing, which is basically to adapt Biswas and Morris's scheme to dynamic networks using an adapted version of AODV.

The paper presents an extension of the opportunistic routing from the previous paper. However, it seems that there were issues in applying the protocol to the RoofNet network, a mesh network based on 802.11 APs deployed by Morris's group in Cambridge, MA, as the protocol presented here is vastly different from the previous one (given also that this came out in 2005, while the previous paper was presented in 12/2003).

One issue is that not all packets end up being accounted for, so the new paper proposes to run opportunistic routing on a batch of packets, defaulting back to traditional routing once a 90% delivery threshold has been attained. This is a bit cumbersome, and especially means that no real-life interactive traffic can use this; heavy file transfers only.

Also, the MAC protocol from the earlier paper (with an RTS/CTS mechanism designed for opportunistic routing) is replaced by a different, scheduled MAC with no specific RTS/CTS. An ExOR header is added which contains information pertaining to the potential relays; this information is based on measurements and probes performed beforehand.

The node which forwards the packet is the one closest to the destination according to the ETX (expected transmission count) metric, ie. a path is chosen not on the number of hops but on how many times the packet would be re-transmitted to get to the destination. The fewer the retransmissions, the closer.
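
In sketch form (my own illustration, with made-up ETX values):

    # Among the nodes that actually received the packet, the one with the
    # lowest ETX to the destination forwards it (values here are made up,
    # obtained in practice from the prior probing).
    etx_to_dest = {"n1": 3.4, "n2": 1.8, "n3": 2.5}
    received_by = ["n1", "n3"]  # n2 missed this packet

    forwarder = min(received_by, key=etx_to_dest.__getitem__)
    print(forwarder)  # -> "n3": closest receiver in expected retransmissions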

Main issues/challenges regarding my own work: this is highly static. ETX requires a good a priori knowledge of the routes, which works if stuff does not move. Then, sending a large batch and waiting until 90% is delivered implies massive re-ordering, and some time on your hands, which again means a static network.

Concern: were the changes to the original protocol (HotNets-II 2003) made because the MAC layer was not accessible in their off-the-shelf 802.11 testbed, or were they made for performance? The changes move everything up to the network level, so implementation could have driven them.

Performance evaluation of my own protocols will tell...

Opportunistic Packet Scheduling and Media Access Control for Wireless LANs and Multi-Hop Ad Hoc Networks.

Opportunistic Packet Scheduling and Media Access Control for Wireless LANs and Multi-Hop Ad Hoc Networks. J. Wang, H. Zhai and Y. Fang, WCNC 2004.

This paper proposes to perform scheduling similar to proportional fairness in multi-hop networks: a node sends a CTS to a list of potential relays (those whose packets are waiting for transmission in the node's output buffer), receives RTSs back with an indication of the current signal strength, and picks the "best" node according to a metric similar to proportional fairness (based on the throughput achieved to each node up to this time).
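
A toy version of the selection rule as I read it (all numbers made up):

    # Among the neighbors with a packet pending, favor good instantaneous
    # conditions relative to the throughput each has received so far.
    pending = {
        # node: (instantaneous achievable rate, average throughput so far)
        "a": (11.0, 5.0),
        "b": (5.5, 1.0),
        "c": (11.0, 8.0),
    }

    chosen = max(pending, key=lambda n: pending[n][0] / pending[n][1])
    print(chosen)  # -> "b": modest rate, but starved so far, so it wins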

The idea is straightforward, but quite good actually (by which I mean: this should be a better-than-WCNC paper). It should be obvious that, as soon as several packets are awaiting transmission at a node, the node has the choice of which one to transmit based on instantaneous conditions.

Performance gains are shown, obviously. Little analysis; simulation only.

The Critical Transmitting Range for Connectivity in Sparse Wireless Ad Hoc Networks

This paper: The Critical Transmitting Range for Connectivity in Sparse Wireless Ad Hoc Networks, P. Santi, D. Blough, IEEE Trans. on Mobile Computing, Vol. 2, No. 1, Jan-Mar 2003, is excellent.

It computes the asymptotic relation between the connectivity range r, the side length l, and the number n of nodes uniformly (randomly) distributed in a region of side length l. It establishes the values of r for which the graph is connected with high probability, or disconnected with high probability.

The main result is: in one dimension, rn > 2l log(l) ensures connectivity, while rn < (1-epsilon) l log(l) ensures disconnectivity.

In higher dimensions (2 and 3), connectivity is ensured for r^d n > k l^d log(l), with k a constant which depends on the dimension. If r^d n < l^d, then the graph is disconnected.

There is a gap between these asymptotes in dimensions 2 and 3. Also, the paper conjectures that in dimension 1, rn < 2l log(l) should ensure disconnectedness.

Most of the proofs rely on a pigeon-hole argument, which is relatively simple and elegant.
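
As a quick sanity check of the one-dimensional threshold, a small Monte Carlo sketch of my own (finite n and l, so it is only indicative of the asymptotic result):

    import numpy as np

    rng = np.random.default_rng(1)

    def prob_connected_1d(n: int, l: float, r: float, trials: int = 500) -> float:
        """n uniform nodes on [0, l] with range r: the graph is connected
        iff no gap between consecutive nodes exceeds r."""
        hits = 0
        for _ in range(trials):
            xs = np.sort(rng.uniform(0.0, l, n))
            hits += np.all(np.diff(xs) <= r)
        return hits / trials

    l, n = 1000.0, 500
    for c in (0.5, 1.0, 2.0):           # set r so that r*n = c * l * log(l)
        r = c * l * np.log(l) / n
        print(c, prob_connected_1d(n, l, r))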

Monday, February 19, 2007

A High Throughput Path Metric for Multi-Hop Wireless Routing.

Douglas DeCouto, Daniel Aguayo, John Bicket, Robert Morris, Wireless Networks 11, 419-34, 2005.

I had read a version of this paper a little while ago, but some reviewing duty sent me back to it to confirm a thing or two. It introduces the ETX metric, an additive metric to find the path between two nodes of a wireless mesh network which minimizes the number of retransmissions.

That it is additive is a good thing, and it does not have the issues of other metrics: shortest hop count might pick long links with bad quality, and the actual performance of shortest-path routing is demonstrated in the paper to be quite poor. Using the product of delivery ratios (which is additive too if you take logs) or the minimal PDR (again, additive in a (max,+) algebra) would favor multi-hop paths with good PDR over a single link with lesser PDR, but the relaying interference of the multi-hop path will divide the path's bandwidth by two, so neither is a good metric.

ETX measures the delivery ratio of a link in each direction, and the link's metric is the inverse of the product of these ratios. The higher the ETX, the higher the expected number of transmissions, so the routing should choose the path with the lowest cumulative ETX. Performance is actually improved, especially as paths get longer.

What is nice about the paper: it is very nicely explained. It is thinner in technical content than I remembered (basically, it is the idea of the ETX metric which matters), but each choice is very nicely argued and motivated, and each decision is made very clear. So it is a very nice read.

The main thing I am now taking from the paper is that the metric is the inverse of the packet delivery ratios in both directions. However, one direction is the ACK direction, and it can be improved by sending multiple ACKs, or by adding enough redundancy to the ACK to decode it successfully every time. Then your ETX drops. For instance, if you have a 1/2 PDR in each direction, your ETX is 4. If you add enough coding redundancy to decode the ACK successfully every time, then your ETX drops to 2. That's a pretty major gain. It is obvious, but the paper does not suggest doing it.
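
To make the arithmetic explicit, a trivial sketch (the function is mine; the numbers are the example above):

    # ETX of a link: inverse of the product of the forward and reverse
    # delivery ratios (probe-measured in practice; numbers are made up).
    def etx(pdr_forward: float, pdr_reverse: float) -> float:
        return 1.0 / (pdr_forward * pdr_reverse)

    print(etx(0.5, 0.5))  # 4.0: ~4 expected transmissions per delivered packet
    print(etx(0.5, 1.0))  # 2.0: a perfectly reliable ACK direction halves ETX
    # A path metric is just the sum of the link ETX values along the path.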

Object Replication Strategies in Content Distribution Networks

Jussi Kangasharju, James Roberts, Keith Ross, Computer Communication Journal, 25 (2002) 376-83.

This paper looks at the Autonomous System-level distribution of content (such as Akamai's content distribution network, CDN, for instance) and how to replicate the content so that, subject to a limited memory space, the distance from the end user to the looked-for data object is minimized.

The paper shows that this is a variant of the knapsack problem and is thus NP-complete, so it cannot be solved exactly in polynomial time. But they provide some heuristics (random, popularity-based, greedy single and greedy global) and show that the one with network-wide collaboration (that is, a global management of the CDN, namely greedy global) beats all the others. Greedy single is the strategy of replicating the content most favored by the clients of each AS. Greedy global is the strategy of finding, over all ASes, the object whose replication would decrease the cost the most, and iterating until all storage space is filled.
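
A simplified sketch of the greedy-global idea (my own toy: the gain table is fabricated and held static, whereas the actual heuristic re-evaluates the gains after every placement):

    # Repeatedly place the single replica that reduces total cost the most,
    # until the storage at every AS runs out.
    capacity = {"AS1": 2, "AS2": 1}  # replica slots per AS (made up)
    gain = {                         # cost reduction per placement (made up)
        ("o1", "AS1"): 9.0, ("o1", "AS2"): 7.0,
        ("o2", "AS1"): 5.0, ("o2", "AS2"): 8.0,
        ("o3", "AS1"): 3.0, ("o3", "AS2"): 2.0,
    }

    placement = []
    for (obj, asn), _ in sorted(gain.items(), key=lambda kv: -kv[1]):
        if capacity[asn] > 0:
            placement.append((obj, asn))
            capacity[asn] -= 1

    print(placement)  # [('o1', 'AS1'), ('o2', 'AS2'), ('o2', 'AS1')]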

The paper is interesting, especially since the NP-completeness of the CDN problem they consider basically includes the NP-completeness of the problem I am considering, and thus relieves me from the theoretical work: all I have to do is evaluate the proper heuristics. This also makes the paper a very easy read, as there are no equations, only a comparison of the performance of the different strategies.

Sunday, February 18, 2007

Little Tom Thumb Went Straight Home.

C. Westphal, Little Tom Thumb Went Straight Home, in Proc. of Infocom 07, Anchorage, Alaska.

Content and Service Replication Strategies in Multi-hop Wireless Mesh Networks.

by Shudong Jin, Limin Wang. MSWiM'05, Oct. 10-13, Montreal, Quebec.

The paper studies the impact of the replication of content in a wireless mesh network. They show that in a 2-d mesh network, the optimal number of replicas of an object is proportional to $p^{2/3}$, where $p$ is the access probability of the object.

The result is pretty, as it is a bit counter-intuitive (a replication rate proportional to the access probability is what would seem natural), and the derivation is actually rather simple and straightforward.
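
My reconstruction of why the exponent 2/3 appears, via the standard Lagrangian argument (this sketch is mine, not copied from the paper):

    % With $m_i$ replicas of object $i$ in a 2-d mesh, the distance to the
    % nearest replica scales as $m_i^{-1/2}$, so one minimizes the expected
    % access cost under a total storage budget $M$:
    \[
      \min \sum_i \frac{p_i}{\sqrt{m_i}}
      \quad \text{s.t.} \quad \sum_i m_i = M .
    \]
    % The first-order condition of the Lagrangian gives
    \[
      -\tfrac{1}{2}\, p_i\, m_i^{-3/2} + \lambda = 0
      \;\Longrightarrow\;
      m_i \propto p_i^{2/3} .
    \]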

The paper also opens research questions, such as how to account for the wireless physical layer. In the paper, it is abstracted as a way to define the connectivity, and thus the topology, of the network, but the actual impact of the wireless interface on the placement of the replicated content could have other implications. An interesting idea to look into.