Wednesday, February 28, 2007

Losing Opportunism: Evaluating Service Integration in an Opportunistic Wireless System.

Losing Opportunism: Evaluating Service Integration in an Opportunistic Wireless System. Hongseok Kim, Gustavo de Veciana, to appear in Proc. of IEEE Infocom'07.

The Infocom'07 program has been announced and you can find some of the papers here and there if you google around. I read this one, as I have been interested in opportunism and its benefits. I believe that the main value of opportunism lies in its ability to accommodate mobility: by having a flexible list of potential suitors to relay your packet, you can adapt the list to the topology dynamically. This paper somewhat pokes holes in the other benefit of opportunism: bandwidth gain.

The idea of the paper (and I will not go into the technical details, which are, unlike the intuition behind them, rather complicated) is that there is a benefit to using an opportunistic mechanism, and this benefit has been quantified in a single-hop, cellular-like wireless system. Basically, by picking the user with the best instantaneous channel condition, that user will be out-doing its average rate, and you will increase the capacity of the system. You can quantify how much this is true.
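
To make the intuition concrete, here is a minimal simulation sketch of this multiuser diversity gain, assuming i.i.d. Rayleigh fading and a Shannon-rate abstraction (my assumptions, not the paper's model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_slots = 8, 100_000

    # i.i.d. Rayleigh fading: the per-slot SNR of each user is exponential
    snr = rng.exponential(1.0, size=(n_slots, n_users))
    rate = np.log2(1 + snr)                 # achievable rate per user, per slot

    round_robin = rate.mean()               # serve users regardless of conditions
    opportunistic = rate.max(axis=1).mean() # always serve the instantaneously best user

    print(f"round robin:   {round_robin:.2f} bit/s/Hz")
    print(f"opportunistic: {opportunistic:.2f} bit/s/Hz")

With 8 users this roughly doubles the per-slot rate, and the gain grows (slowly) with the number of users to pick from, which is exactly the flexibility a QoS user takes away.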

However, this comes at the cost of fairness: picking the best guy at some time also means not picking some other guy, potentially for extended periods of time. This is where Proportional Fairness, for instance, comes in. This is also where this paper comes in, but from a different angle: it asks not how to provide fairness to the users, but what happens if a user demands some type of service, i.e. a user with a certain level of QoS.

Users with QoS require the channel to be allocated to them whether or not a better (best effort) transmission candidate exists. The paper provides an analytical model to describe this, and some simulation results (based on their analysis). Basically, it shows that the stability region is reduced by QoS users. In the extreme, you cannot provide more than your average capacity C to a QoS user, because the opportunistic gain depends on having other users to choose from; so if you have N channels, the capacity is CN with N QoS users, instead of the much greater capacity of an opportunistic system.

It is very intuitive that providing strict QoS requirements will cause a loss of flexibility in the choice of the node to communicate with, and thus a degradation of the performance that this flexibility provides. It is good to have an analytical framework to quantify this. I would say it is best paper award territory (based on my very partial view of the Infocom'07 paper stack).

It is particularly important to have this kind of intuition described clearly: the impact of a single QoS user is a great degradation in the system performance. One application of wireless mesh networks is to replace broadband services and to provide VoIP over these networks (say, as a way for a city to repay the wireless network investment by switching its phone bill away from the cellular operator to VoIP over the handset). Well, if your scheduling algorithm in the backhaul uses some similar kind of opportunism, the effect this paper describes starts to kick in: either you give higher priority to your VoIP flows, and then you lose the opportunistic effect; or you don't, and then you get unreliable performance for your VoIP call. It boils down to a network design issue, but it seems to say that you cannot bank too much on the opportunism gain. Something to keep in mind.

Tuesday, February 27, 2007

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden Terminals Using a Single Transceiver.

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden Terminals Using a Single Transceiver. J. So, N. Vaidya, in Proc. of ACM MobiHoc 2004, Roppongi, Japan.

I read the paper, but it was a short read: it has been cited and its principles explained in a few other articles, so I basically just had to check that the paper existed. The read was fast: I had an idea what it was about, and I did not have to be convinced or scratch my head too much. Plus, the idea is simple and elegant, as is more often than not the case with Nitin.

What is not usually explained is the actual channel selection (how two nodes decide during the ATIM window to pick one channel over the other). I kind of assumed it would be round-robin or random or something uniform like that; it is actually based on the number of nodes that have already requested each channel for transmission, plus a preference list based on channel quality. Also not usually covered: the performance analysis, mostly a comparison with DCA, a multi-channel MAC with multiple interfaces (MMAC uses only one interface). Good stuff all around.
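
A toy sketch of that selection rule, with claim counts and quality scores made up for illustration (this is my reading of the mechanism, not code from the paper):

    channels = ["ch1", "ch2", "ch3"]
    claims = {"ch1": 2, "ch2": 0, "ch3": 1}        # pairs already scheduled per channel in the ATIM window
    quality = {"ch1": 0.9, "ch2": 0.6, "ch3": 0.8} # local preference, higher is better

    def pick_channel():
        # least-claimed channel first, channel quality breaks ties
        return min(channels, key=lambda ch: (claims[ch], -quality[ch]))

    print(pick_channel())  # -> "ch2"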

The unrelated note: Roppongi means 6 trees.

Optimizing the Placement of Internet TAPs in Wireless Neighborhood Networks

Optimizing the Placement of Internet TAPs in Wireless Neighborhood Networks, R. Chandra, L. Qiu, K. Jain, M. Mahdian. ICNP'04.

This is strange: the version I downloaded has the first two authors inverted. It is titled Optimizing the Placement of Integration Points in Multi-Hop Wireless Networks, but it really looks like the same thing. Maybe a journal version or something. I am putting up the reference to the actual ICNP'04 proceedings, but I read the other version.

Anyhow, the paper is a follow-up to On the Placement of Web Server Replicas, which Lily Qiu also co-authored. The funny thing is that I am basically following the same thought process: a student is looking into designing CDNs for wireless mesh networks, and has me reading this literature so I can be useful. Then I think: hey! CDN servers, or wired gateways, same thing, that's where the data is coming from. So on from CDN servers to "integration points" and "wired access points", which led to a Google search, where we found this. Only 3 years too late, of course.

The paper follows the same architecture as the previous one: convince the reader that you cannot really solve the problem, since it is NP-hard, then provide a set of heuristic strategies and test and compare them. The strategies are similar: random, greedy, etc. Of course, the wireless set-up introduces all kinds of kinks which make the paper interesting and the contribution significant over the previous work.

There is a proof of the closeness of the strategy to the optimal solution, and a great match between a computable lower bound and the greedy strategy (which together book-end the optimal strategy), so the solutions proposed here work well.

Having said enough good about the paper, I can share my little giggle. My version of the paper includes the following sentence:

In comparison [to TDMA scheduling schemes], in this paper we look at more general and efficient MAC schemes such as IEEE 802.11

More general, maybe, but more efficient? The MAC layer ends up being abstracted away soon afterwards, so it does not really matter.

Monday, February 26, 2007

Multichannel MAC Protocols for Wireless Networks.

Multichannel MAC Protocols for Wireless Networks. Ritesh Maheshwari, Himanshu Gupta, Samir Das, in Proc. of IEEE SECON 2006, Reston, VA, Sept. 2006.

This paper I don't think I saw at the conference, even though I attended and presented a paper. I had to leave early to go to another conference; maybe that's how I missed it.

Anyhow, now I have caught up with it, and it is one of those papers that are nice to read in the morning, as you don't need too much caffeine to grasp what they are about: the paper presents two protocols to access multiple channels with a single transmission interface (and, in one case, a busy tone interface). The protocols are intuitive enough, the motivations are clear, and the validation shows the promised improvement. There is no analysis, just a simulation validation, so it is 8 pages of light reading.


The two protocols that are described are variations on other protocols, so the paper also provides some useful context for these other protocols. The two protocols they improve upon are RDT and MMAC. MMAC is a protocol which uses a mechanism similar to the power save mechanism of 802.11 to define a control time slot and a transmission phase. The control slot is used for the nodes to agree on which channel to switch to for transmission. Nodes should be synchronized to know when the control time slot is and when the transmission phase ends. All nodes meet on the same channel during the control slot.

RDT (Receiver Directed Transmission) is a protocol in which each node selects a "quiescent channel", which is the channel on which it will receive transmissions. A node switches to the quiescent channel of the receiver when it wishes to transmit to that receiver. Allocation of quiescent channels can be done off-line or dynamically.
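
A minimal sketch of that forwarding rule, with hypothetical node names and channel assignments (the deafness problem discussed below falls out of it directly):

    quiescent = {"A": 1, "B": 2, "C": 3}    # each node listens on its quiescent channel

    def transmit(sender, receiver, packet):
        channel = quiescent[receiver]       # tune to the receiver's channel...
        # ...which makes the sender deaf on its own quiescent channel meanwhile
        print(f"{sender} -> {receiver} on channel {channel}: {packet}")

    transmit("A", "B", "hello")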

One more protocol is mentioned in the paper: DCA. DCA (Dynamic Channel Assignment) is a two-interface protocol: one interface is the control interface, used by the nodes to negotiate which channel the other interface will switch to for the actual transmission.

Ok, that's the background. The contribution: the paper introduces xRDT, a version of RDT extended with a busy tone interface (a simpler-to-implement interface) and an explicit notification of the termination of the transmission, so as to reduce the deafness of the terminals (a terminal is deaf when not on its quiescent channel; a would-be transmitter needs to explicitly know when its intended receiver returns to its quiescent channel).

The paper also introduces another protocol, Local Coordination-based Multichannel MAC (LCM-MAC), which works on a single interface and is a self-clocking (i.e., nodes synchronize themselves) version of MMAC. Nodes define a transmission schedule which includes a control phase and a data transmission phase. The first node to set up the schedule wins the race, and the others act according to that schedule.

The performance analysis is straightforward. The only odd part is that RDT is not included (DCA, MMAC and the two new protocols, xRDT and LCM-MAC, are). I would have assumed that xRDT's improvement over RDT would be subject to validation. Other than that, it is simulation only, and it shows the benefit of multiple interfaces over a single interface, and the benefit of the new protocols over the legacy protocols.

Wednesday, February 21, 2007

MAC-Layer Anycasting in Ad Hoc Networks.

MAC-Layer Anycasting in Ad Hoc Networks. Romit Roy Choudhury and Nitin Vaidya. ACM SIGCOMM Computer Communications Review, Vol. 34, No. 1, Jan. 2004.

This is a paper I should have read a long time ago about multi-hop wireless mesh networks. It is deceptively simple: the idea is to create a set of equivalent paths from the source to the destination using the route table information (the paper assumes a table-driven routing protocol) and a hop-count metric. If the metric is such that no routes are strictly equivalent, routes could also be ordered and the set created from the list of routes satisfying a given criterion, say within some factor of the best path according to that metric.

Anyhow, once the route set is created at the network layer, the packet is given to the MAC layer with a list of potential candidates. The MAC layer then chooses which candidate to use based on the information available at the MAC layer. So if there are two possible candidates to send the packet to, the MAC chooses the one with the best conditions (for example, availability, if one has sent a CTS and the other has not; or link rate and link conditions).

So the packet's route is not set a priori, but evolves as the packet moves along. The packet is not duplicated; there is still only one copy forwarded at each hop, but the scheme makes use of the diversity available at each hop at this point in time.
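
A hedged sketch of that per-hop decision; the candidate attributes and the scoring rule are my own illustration, not the paper's exact algorithm:

    candidates = [
        {"node": "n1", "cts_received": True,  "link_rate": 11.0},
        {"node": "n2", "cts_received": False, "link_rate": 54.0},
        {"node": "n3", "cts_received": True,  "link_rate": 54.0},
    ]

    def pick_next_hop(candidates):
        ready = [c for c in candidates if c["cts_received"]]  # availability first
        pool = ready or candidates                            # fall back to everyone
        return max(pool, key=lambda c: c["link_rate"])        # then best link rate

    print(pick_next_hop(candidates)["node"])  # -> "n3"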

A few notes: the benefit is not evaluated at all in this paper, which is a huge surprise to me. An evaluation is typically a requirement from any reviewer I have encountered. I am sure there is a subsequent performance evaluation somewhere; I just haven't looked for it.

But it would actually be interesting to see the performance improvement. On a short path, there would not be any. And on long paths, where the diversity could help, the baseline performance is poor precisely because the paths are long. It might improve on a basic routing protocol, but it would be improving on poor performance in the first place. Anyhow, I should look for that performance evaluation follow-up, which must be somewhere.

The paper discusses using reactive protocols, such as DSR, but the solution is actually rather convoluted: instead of collecting a route, the RREQ-RREP exchange must collect a whole tree rooted at the destination (for the uplink) so that each node can know the place of its neighbors in the relaying structure toward the destination. This means that each RREQ-RREP must be broadcast more than strictly necessary, and extra flooding is usually viewed as a bad thing.

Anyhow, this is really light reading. It is only the description of the idea, with no analysis nor performance evaluation, so it is a quick, easy and interesting read.

On the Placement of Web Server Replicas

On the Placement of Web Server Replicas, L. Qiu, V. Padmanabhan, G. Voelker, IEEE Infocom 2001, 1587-96.

This paper is definitely the inspiration for this one, or the other way around. One is a journal paper from 2002 and the other a conference paper from 2001, so I would have to check which one came out first.

Anyway, it covers much of the same ground: placing web server replicas to form a Content Distribution Network (CDN) is a problem which is hard, and the paper presents some heuristic algorithms to solve the problem using an AS topology (and other topologies).

The goal is to improve on the result of a paper which proves the optimal placement on a tree topology (B. Li, M. Golin, G. Italiano, X. Deng, "On the Optimal Placement of Web Proxies in the Internet", Infocom 1999): the internet is a mesh, not a tree, so using the tree-optimal placement results in a sub-optimal internet placement.

The heuristics are similar to the ones here: random; greedy (pick the spot for a replica that minimizes the cost, then iterate taking into account the already placed replicas); hot spot (put the replicas near the sites which generate the most traffic); and a super-optimal algorithm (which is super-optimal not because it is really super-duper, but because its solution might not be feasible, and thus might perform better than the optimal; it is obtained using a Lagrangian relaxation of the integer program defined by the problem).
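
A minimal sketch of the greedy heuristic, under a toy cost model where the cost of a placement is the total distance from each client to its nearest replica (the distance matrix is made up):

    dist = [
        [1, 4, 7],
        [5, 2, 6],
        [8, 3, 1],
    ]  # dist[i][j]: distance from client i to candidate site j
    n_clients, n_sites = 3, 3

    def cost(placed):
        return sum(min(dist[i][j] for j in placed) for i in range(n_clients))

    def greedy(k):
        placed = set()
        for _ in range(k):
            # add the site that lowers the total cost the most, given what is placed
            best = min(set(range(n_sites)) - placed, key=lambda j: cost(placed | {j}))
            placed.add(best)
        return placed

    placement = greedy(2)
    print(placement, cost(placement))  # -> {0, 1} 6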

There is a neat evaluation section, over a lot of different topologies, which shows that the greedy algorithm behaves pretty well. One really cool link from the paper is the NP problem compendium web page. I mean, once you think to add the tilde to the reference provided in the paper.

Tuesday, February 20, 2007

ROMER: Resilient Opportunistic Mesh Routing for Wireless Mesh Networks.

ROMER: Resilient Opportunistic Mesh Routing for Wireless Mesh Networks. Y. Yuan, H. Yang, S. Wong, S. Lu, W. Arbaugh, WiMesh workshop, IEEE SECON 2005.

This paper I saw being presented at the 1st WiMesh workshop in Santa Clara in 2005, but I just re-read it recently. I might even have chaired the session it was in. The workshop consisted of invited papers, I believe, and the chairs selected right.

Anyhow, the crux of the paper is: give a packet an amount of credit which is equal to the cost C(S) of transmission times a constant, where the cost C(S) is the cost to route from the source S to the destination. At each hop, pick K relays to forward the packet, such that each relay satisfies a credit/cost condition, where the credit is the updated credit (i.e., the initial credit minus the cost used up to this point) and the cost is the remaining cost from the relay to the destination.

The K relays ensure the resilience, as you have a lot of packets going forward. As for the opportunism, it comes in the form of picking the K best relays according to the current conditions, that is: among the potential candidates, pick the K with the highest instantaneous rate conditions.
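
A toy sketch of that per-hop relay choice (the costs, rates and credit value are all made up for illustration):

    def pick_relays(credit, neighbors, k):
        # a relay is eligible if the remaining credit covers its remaining cost
        eligible = [n for n in neighbors if credit >= n["cost_to_dst"]]
        # among the eligible ones, keep the k with the best instantaneous rates
        eligible.sort(key=lambda n: n["rate"], reverse=True)
        return eligible[:k]

    neighbors = [
        {"id": "r1", "cost_to_dst": 3, "rate": 54},
        {"id": "r2", "cost_to_dst": 5, "rate": 48},
        {"id": "r3", "cost_to_dst": 2, "rate": 11},
    ]
    print([n["id"] for n in pick_relays(credit=4, neighbors=neighbors, k=2)])
    # -> ['r1', 'r3']: r2 is fast, but its remaining cost exceeds the credit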

There are some good ideas, and some stuff that I don't necessarily agree with. The credit-based system is simple, but it needs nodes to have a cost to each destination, which seems a bit un-scalable. Actually, it depends on how you define the credit and the cost, as a hop-count-based cost is scalable in some situations.

The K-resiliency seems to me a strange way to optimize the use of the air interface. For instance, the paper does not specify how the K candidates receive the packet. Is it K unicast transmissions? In which case it is pretty sub-optimal. If it is one broadcast, how do you indicate the K recipients? How do they acknowledge? It runs into the same issues as ExOR, basically, without providing the answers.

And then, this K-relaying introduces some race conditions to compete for the channel, between the 2nd transmission of the packet to the 2nd best neighbor, and the 1st neighbor's transmission of the same packet one hop downstream. Again, solutions have been proposed (staggered ACKs and re-transmissions) but this would need to be solved explicitly before an actual implementation.

Anyway, this is an interesting read; it triggers lots of good questions and ideas. I did not expect much from the WiMesh workshop as it was a new meeting, but I have now been to both editions, and it is pretty good every time.

SenSys'05 conference.

Some notes from the SenSys'05 conference. I attended only tangentially (I sat in on the first keynote, but had to leave right afterwards).


  • Radio Interferometric Geolocation. Miklos Maroti, Branislav Kusy, Gyorgy Balogh, Peter Volgyesi, Andras Nadas, Karoly Molnar, Sebestyen Dora, Akos Ledeczi (Vanderbilt University)
    This paper might be very interesting for multiple applications outside of the sensor network world, despite sensors being its primary focus. The idea is to send two signals with slightly different frequencies from two sources, and to observe the phase of the cumulative signal at the receiver. Based on the phase, the receiver can obtain its distance from the sources. This allows performing geolocation (localization, positioning) at little cost, since there is no need for GPS or dedicated (Cricket-like) hardware; see the toy sketch after this list.
    Application is obvious for positioning of a handset (caveat: I am not aware of the techniques currently used; it might already be something similar). This could be used for indoor localization, or for emergency services localization. The technique is simple enough that it could easily be implemented on a handheld device.


  • High-Accuracy, Low-Cost Localization System for Wireless Sensor Network. Radu Stoleru, Tian He, John A. Stankovic, David Luebke (University of Virginia)
    This is a paper on localization which, unlike the previous one, has no applicability to cellular networks. It uses dedicated hardware to measure the distance between the distributed sensors. Localization cannot be performed in real-time, and requires the sensor nodes to be static.

  • A New Approach for Establishing Pairwise Keys for Securing Wireless Sensor Networks. Arno Wacker, Mirko Knoll, Timo Heiber, Kurt Rothermel (Universität Stuttgart)
    I have not read this paper; its applicability to my work seems limited.

  • TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs. Peter Desnoyers, Deepak Ganesan, Prashant Shenoy (University of Massachusetts Amherst)
    I have not read this paper in detail; it is about distributed storage and a hierarchical architecture to perform storage, not unlike a distributed hash table.

  • A Macroscope in the Redwoods. Gilman Tolle, Joseph Polastre, Robert Szewczyk, Neil Turner, Kevin Tu, Stephen Burgess (UCB), David Gay, Phil Buonadonna, Wei Hong (Arch Rock Corporation), Todd Dawson, David Culler (UCB)
    This paper describes an experimental approach for observing an ecosystem using a sensor network. I saw most of the material presented in this paper during a keynote talk by David Culler at the MobiHoc 2005 conference. The sensor network is used to extract data streams, such as the temperature and humidity in a redwood grove as a function of the elevation in the tree. Very interesting as an environment monitoring tool.

  • Design and Deployment of Industrial Sensor Networks: Experiences from the North Sea and a Semiconductor Plant. Robert Adler, Phil Buonadonna, Jasmeet Chhabra, Mick Flanigan, Lakshman Krishnamurthy, Nandakishore Kushalnagar, Lama Nachman, Mark Yarvis (Intel)
    This is the poster child for sensor networks in industrial monitoring applications. It is the example given by Intel people as to what the application for sensor networking is. I was surprised to see it presented here, as I was convinced it had been presented already. Basically, it describes the deployment of a sensor network for monitoring instrumentation on an oil tanker, and for monitoring vibrations in a silicon wafer manufacturing plant. In both cases (environments where the cost of wiring is extremely high), the wireless sensor network proves superior.

  • A Unifying Link Abstraction for Wireless Sensor Networks. Joseph Polastre, Jonathan Hui, Philip Levis (University of California, Berkeley), Jerry Zhao (ICSI Berkeley), David Culler (University of California, Berkeley), Scott Shenker (ICSI Berkeley and University of California, Berkeley), Ion Stoica (University of California, Berkeley)
    This paper argues that the "narrow waist" of the protocol stack for sensor networks should be at the link layer, and not at the network layer, as it is in IP.

  • Z-MAC: A hybrid MAC for wireless sensor networks. Injong Rhee, Ajit C. Warrier, Mahesh Aia, Jenogki Min (NCSU) Prashant Patel (Progress Energy)
    I did not look into the details of the protocol, but it looks like another TDMA-CSMA hybrid. It might be interesting, but it is definitely not exciting.

  • Packet Combining in Sensor Networks. Henri Dubois-Ferriere (EPFL), Deborah Estrin (UCLA), Martin Vetterli (EPFL)
    The very good idea of the paper is relatively simple: when transmitting a packet over multiple hops, say from A to B to C, C might overhear some of the transmission from A to B. Thus it does not require a perfect transmission from B to C, but just enough so that it can reconstruct the packet. This works best with coding: the packet from A to B carries the original data, and the packet from B to C is a packet of the same length made of parity bits, which allows more efficient decoding at C than just transmitting the original packet twice, from A to B and then from B to C.

    [update: these notes were written in 2005. But as I put them here, I see that I just read a paper by Ed Knightly's group to appear in Infocom'07 which puts out a very similar idea.]

  • Siphon: Overload Traffic Management using Multi-Radio Virtual Sinks. Chieh-Yih Wan (Intel Research), Shane B. Eisenman (Columbia University), Andrew T. Campbell (Dartmouth College), Jon Crowcroft (Cambridge University)
    The idea is to detect congestion in the network, which is typically due to a funneling effect: all the traffic fans in towards the data sink. To combat this fanning-in issue, another sink, using a different radio connection towards the initial sink, can be placed in the network to redirect traffic.
    This is a relatively obvious result: you add capacity where you need it. I fear it might be a technique with limited applicability: bringing in new hardware to reduce traffic in a hot spot might make sense in a static sensor network, but not in a dynamic environment. I guess it all depends on the granularity of the time scales.

  • Estimating Clock Uncertainty for Efficient Duty-Cycling in Sensor Networks. Saurabh Ganeriwal (University of California Los Angeles), Deepak Ganesan (University of Massachusetts), Hohyun Shim, Vlassios Tsiatsis, Mani B. Srivastava (University of California Los Angeles)
    I did not go into the details of this one. Power saving techniques might have applications outside of the sensor network universe, but it seems cellular clock sync is working. It also seems that the dynamic aspect of the cellular environment might prove too challenging for these techniques.

  • Firefly-Inspired Sensor Network Synchronicity with Realistic Radio Effects. Geoffrey Werner-Allen, Geetika Tewari, Ankit Patel, Matt Welsh, Radhika Nagpal (Harvard University)
    This paper presents a node synchronization protocol based on locking in phase with one's neighbors, a process similar to the dance of the fireflies. It is cute, but the real-world applicability seems far away.

  • Data Collection, Storage and Retrieval with an Underwater Optical and Acoustical Sensor Network. Iuliu Vasilescu, Keith Kotay, Daniela Rus (Massachusetts Institute of Technology), Peter Corke, Matthew Dunbabin (CSIRO Australia)
    Underwater sensor networks are a challenging application, but not of much interest to me.

  • MAX: Human-Centric Search of the Physical World. Kok Kiong Yap, Vikram Srinivasan, Mehul Motani (National University of Singapore)
    This is a paper on allowing the localization of objects using a taxonomy which is easily understandable by a human user. This might be of interest to the UI community.

  • CenWits: A Sensor-Based Loosely-Coupled Search and Rescue System using Witnesses. Jyh-How Huang, Saqib Amjad, Shivakant Mishra (University of Colorado, Boulder)
    The paper describes a search and rescue system which functions as follows: people wear tags (802.11-based, but one assumes RFID could be an alternative technology); some readers are distributed in the environment and take readings of the tags. When a catastrophe occurs (say, an avalanche), the tag readings are used to locate the users.
    There is a huge privacy issue here. On the other hand, the system could be integrated into a cell phone, so that the 'tag' becomes the handheld device. I guess it is practical for people at the University of Colorado to come up with avalanche rescue systems. I wish I could work there, the experimentation must be fun!

  • Cyclops: In Situ Image Sensing and Interpretation in Wireless Sensor Networks. Mohammad Rahimi (Center for Embedded Networked Sensing - UCLA), Rick Baer (Agilent Technology), Obimdinachi I. Iroezi, Juan C. Garcia (Center for Embedded Networked Sensing - UCLA), Jay Warrior (Agilent Technology), Deborah Estrin, Mani Srivastava (Center for Embedded Networked Sensing - UCLA)
    This is a sensor network platform which carries a low-resolution digital camera, i.e., a so-called image sensor. It is made in partnership with Agilent Technology.

  • Lightweight Detection and Classification for Wireless Sensor Networks in Realistic Environment. Lin Gu (University of Virginia), Dong Jia (Carnegie Mellon University), Pascal Vicaire, Ting Yan, Liqian Luo, Tian He (University of Virginia), Ajay Tirumala (University of Illinois at Urbana-Champaign), Qing Cao, John. A. Stankovic, Tarek Abdelzaher (University of Virginia), B.H. Krogh (Carnegie Mellon University)
    This paper I did not read in detail.

  • Intelligent Light Control using Sensor Networks. Vipul Singhvi, Andreas Krause, Carlos Guestrin, Jim Garrett, H. Scott Matthews (Carnegie Mellon University)
    I think the title says it all: it is a sensor network which controls a lighting system. It seems very application-specific.

  • Algorithms for Generic Role Assignment in Wireless Sensor Networks. Christian Frank, Kay Roemer (ETH Zurich)
    This paper (and the next two as well) I did not go into in detail, as it deals with software support for sensor networks, a topic I am not familiar with.

  • VM*: A Scalable Runtime Environment for Sensor Networks. Joel Koshy, Raju Pandey (University of California, Davis)
    I did not go into the details of this paper, as it deals with software support for sensor networks, a topic I am not familiar with.

  • Sympathy for the Sensor Network Debugger. Nithya Ramanathan, Kevin Chang, Lewis Girod, Rahul Kapur, Eddie Kohler, Deborah Estrin (UCLA)
    I did not go into the details of this paper, as it deals with software support for sensor networks, a topic I am not familiar with.
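
Back to the radio interferometric paper above, here is a toy sketch of the relationship it exploits, assuming ideal propagation and made-up frequencies: the phase of the low-frequency beat observed in the received signal strength encodes the difference of the receiver's distances to the two sources, modulo the carrier wavelength.

    import numpy as np

    c = 3e8
    f1, f2 = 430.000e6, 430.001e6   # two close carrier frequencies (hypothetical values)

    def beat_phase(d1, d2):
        # phase of the beat observed at a receiver at distances d1, d2
        # from the two transmitters, under ideal propagation
        return (2 * np.pi * (f1 * d1 - f2 * d2) / c) % (2 * np.pi)

    # since f1 ~ f2, the phase moves (mod 2*pi) almost linearly with d1 - d2;
    # the ~0.7 m carrier wavelength sets the ambiguity the paper has to resolve
    for d1 in (100.0, 100.35, 100.7):
        print(d1 - 100.0, round(beat_phase(d1, 100.0), 3))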

Anti-Packets in Wireless Multi-Hop Network

The Anti-Packets Can Increase the Achievable Throughput of a Wireless Multi-Hop Network, by Petar Popovski and Hiroyuki Yomo, of Aalborg University, ICC'06.

The idea is very simple, and it is one of the few applications of the field of network coding that I have seen which may actually work. Surprisingly enough, the authors don't seem to know what network coding is. At least, they don't mention it.

This paper came at about the same time as the COPE paper from Dina Katabi's group at MIT, which presents the same idea with a slightly different approach (COPE is more general). I would guess that, due to the MIT reputation and the publication forum, the citation count for COPE will outnumber this paper's by a factor of 100, even though both had the same idea at about the same time.

Assume a multi-hop wireless network with 3 nodes, A, B and C. A needs to send a packet AC to C, and C a packet CA to A (both packets of the same length, and transmissions are errorless). The current way of doing it takes 4 time slots: 1-AC from A to B; 2-CA from C to B; 3-AC from B to C; 4-CA from B to A.

The idea of the paper is to combine slots 3 and 4, using the fact that A and C both hear B, so the last step can be: 3-B transmits ACA = XOR(AC, CA) to both C and A.

A, using ACA and AC, can recover CA, and similarly C can recover AC. The gain in bandwidth is 33%.
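
A minimal sketch of the XOR step, on byte strings of equal length (the payloads are made up):

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    AC = b"hello from A to C!!"        # packet A sends to C
    CA = b"hello from C to A!!"        # packet C sends to A, same length

    ACA = xor(AC, CA)                  # what B broadcasts in the single third slot
    assert xor(ACA, AC) == CA          # A recovers C's packet using its own copy of AC
    assert xor(ACA, CA) == AC          # C recovers A's packet symmetrically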

The throughput can be doubled in an analog manner: if A and C transmit at the same time (slot 1), then B hears a signal composed of both A and C components, A+C. Assuming B hears them both with the same power, it can just replay in slot 2 the signal it heard in slot 1; it acts only as an amplifier. A hears A+C and can subtract its own component to recover the packet from C, and C does similarly. The bandwidth gain is 100%.

There are a lot of implementation issues and assumptions that should be debated (it requires symmetric traffic from A to C and from C to A, at least bit-wise; synchronization; error recovery mechanisms look more complicated; etc.) but the main idea is quite sexy.

Cooperative Communication in Wireless Networks

Cooperative Communication in Wireless Networks, Aria Nosratinia, Todd E. Hunter, Ahmadreza Hedayat, IEEE Communications Magazine, October 2004. Authors with UT Dallas, and Nortel Networks.

The paper is a survey of the field of cooperative networks. Surprisingly enough, it is a short survey, meaning the field is still very new. It has only 11 references. The survey traces the origin of cooperative networks to the paper of Cover and El Gamal (Capacity Theorems for the Relay Channel, IEEE Trans. Info Theory, 1979).

The survey categorizes four different types of cooperation: detect and forward; amplify and forward; decode and forward; coded cooperation.
  • The first one pairs nodes up, and each node attempts to detect its partner's bits and then retransmits the detected bits.
  • Amplify and forward is more of an analog version of the first one: the partner receives a signal and amplifies it. The receiver then decodes based on the signal received from the source and from the source's partner.
  • Decode and forward is a variation of the first one, in which the partner encodes the packet from the source differently, so that the receiver has two different encodings from which to reconstruct the packet (similar to the SenSys paper by Dubois-Ferriere here, where I found this reference).
  • Coded cooperation: each user sends a different portion of each user's code word via two independent fading paths. I guess, at its simplest, it is the decode-and-forward above.


Performance is strongly improved by cooperation in terms of block error rate.

The paper is low-level technical reading (no complex analysis or concepts) and clearly presented. It'd be interesting to know what applications --if any-- Nortel has in mind for this work.

Characterisation of the Performance of Cooperative Networks in Ricean Fading Channels

J. Adeane, M. R. D. Rodrigues and I. J. Wassell, Characterisation of the performance of cooperative networks in Ricean fading channels, Proceedings of the International Conference on Telecommunications, Cape Town, South Africa, May 2005.

Another paper on my list of cooperative network papers. This one looks at the performance in Ricean fading channels, and considers a simple network topology with two cooperating nodes trying to send data to a single destination, i.e., the source-relay-destination triad.

The benefits of cooperation depend on the channel quality between the source and destination and between the source and relay. The paper actually shows a regime in which cooperation degrades performance (i.e., the information provided by the relay to the destination does not improve on that provided by the source, but even interferes with it or contradicts it).

The paper is relatively simple and the results are relatively straightforward, but it is a good read nonetheless.

Two papers on ad hoc routing

These two papers were suggested by a reviewer of a paper of mine regarding opportunistic routing. They associated these papers with my results, which means I should have differentiated the results better, as it is pretty obvious to me they have little in common.

Anyhow, I have now read the papers:
AODV-BR: Backup Routing in Ad Hoc Networks, Sung-Ju Lee and Mario Gerla, in Proceedings of IEEE WCNC 2000.

On-Demand Multipath Distance Vector Routing in Ad Hoc Networks, Mahesh Marina, Samir Das, in Proceedings of IEEE ICNP, November 2001.

AODV-BR: Backup Routing in Ad Hoc Networks, Sung-Ju Lee and Mario Gerla, in Proceedings of IEEE WCNC 2000.

This paper introduces an extension of AODV which allows for graceful recovery when a link breaks. It uses the query-reply mechanism of the route discovery to build additional back-up routes at nodes adjacent to the primary path.

Once a link on the route breaks, it can be temporarily recovered using the alternate path, while a route error is sent to the source and a new route discovery is triggered.

The idea is simple and practical, and works fine. It does bring improvement over AODV. The key difference with opportunistic routing is that the alternate path is used only after a link breaks, and not otherwise. This means that if the route is reliable, but better links appear later on, these are ignored.

On-Demand Multipath Distance Vector Routing in Ad Hoc Networks, Mahesh Marina, Samir Das, in Proceedings of IEEE ICNP, November 2001.

This is similar to the previous paper, except that it computes link-disjoint paths between source and destination. As above, the query-reply mechanism of the route discovery is used to set up the multiple link-disjoint paths (as opposed to just associating nodes along the path, as in the previous paper). In case of link failure, the traffic fails over to another path.
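
A minimal sketch of the failover behavior both papers share (the routes and the broken link are hypothetical):

    routes = [["S", "A", "B", "D"], ["S", "C", "D"]]   # link-disjoint paths to D

    def next_route(routes, broken_link):
        for r in routes:
            links = list(zip(r, r[1:]))
            if broken_link not in links:
                return r                # first surviving path wins
        return None                     # all paths failed: trigger a new route discovery

    print(next_route(routes, broken_link=("A", "B")))  # -> ['S', 'C', 'D']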

Again, it obviously outperforms AODV. Again, it does not make use of the promiscuous nature of the air interface to forward packets further.

Opportunistic Multi-Hop Routing for Wireless Networks

These are some old notes, but I am consolidating them here.

Opportunistic Multi-Hop Routing for Wireless Networks, Sanjit Biswas, Robert Morris, SIGCOMM 2005. Best Paper Award.

This is a continuation of the work of Biswas and Morris, Opportunistic Routing in Multi-Hop Wireless Networks, HotNets-II, 2003. The previous work is the one which triggered my own work on opportunistic routing, which is basically to adapt Biswas and Morris's scheme to dynamic networks using an adapted version of AODV.

The paper presents an extension of the opportunistic routing from the previous paper. However, it seems that there were issues in applying the protocol to the RoofNet network, a mesh network based on 802.11 APs deployed by Morris's group in Cambridge, MA, as the protocol presented here is vastly different from the previous one (given also that this came out in 2005, while the previous paper was presented in 12/2003).

One issue is that not all packets end up being accounted for, so the new paper proposes to run the opportunistic routing on a batch of packets, and to default back to traditional routing once a 90% delivery threshold has been attained. This is a bit cumbersome, and especially means that no real-life interactive traffic can use this. Heavy file transfers only.

Also, the MAC protocol from the earlier paper (with an RTS/CTS mechanism designed for opportunistic routing) is replaced by a different, scheduled MAC, with no specific RTS/CTS. An ExOR header is added which contains information pertaining to the potential relays. This information is based on measurements and probes performed beforehand.

The node which forwards the packet is the one closest to the destination using the ETX (expected transmission count) metric, i.e., a path is chosen not on the number of hops but on how many times the packet would be re-transmitted to get to the destination. The fewer the re-transmissions, the closer.
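
A sketch of that forwarder choice, with invented ETX values: among the candidates that actually received the packet, the one closest to the destination in ETX terms takes over.

    received = {"n1": 4.5, "n2": 2.1, "n3": 3.0}   # candidate -> ETX to destination
    forwarder = min(received, key=received.get)    # lowest remaining ETX forwards
    print(forwarder)  # -> "n2"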

Main issues/challenges with regard to my own work: this is highly static. ETX requires good a priori knowledge of the routes, which works if nothing moves. Then, sending a large batch and waiting until 90% is delivered implies massive re-ordering, and some time on your hands, which again means static.

Concern: were the changes to the original protocol (HotNets-II, 2003) made because the MAC layer was not accessible in their off-the-shelf 802.11 testbed, or were they made for performance reasons? The changes move everything up to the network level, so implementation could have driven them.

Performance evaluation of my own protocols will tell...

Opportunistic Packet Scheduling and Media Access Control for Wireless LANs and Multi-Hop Ad Hoc Networks.

Opportunistic Packet Scheduling and Media Access Control for Wireless LANs and Multi-Hop Ad Hoc Networks. J. Wang, H. Zhai and Y. Fang, WCNC 2004.

This paper proposes to perform scheduling similar to proportional fairness in multi-hop networks: a node sends an RTS to a list of potential relays (those whose packets are waiting for transmission in its output buffer), receives CTS replies with an indication of current signal strength, and picks the "best" node according to a metric similar to that of proportional fair (based on the throughput achieved towards each node up to this time).
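
A sketch of that selection rule; the rates and running averages are made up, and the exact metric in the paper may differ:

    candidates = {
        # node: (instantaneous rate inferred from CTS feedback, average throughput so far)
        "n1": (54.0, 30.0),
        "n2": (11.0, 2.0),
        "n3": (48.0, 20.0),
    }

    # proportional-fair flavor: favor the node whose current conditions are
    # best relative to the service it has received so far
    best = max(candidates, key=lambda n: candidates[n][0] / candidates[n][1])
    print(best)  # -> "n2" (ratio 5.5 beats 1.8 and 2.4)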

The idea is straightforward, but quite good actually (by which I mean: this should be a better-than-WCNC paper). It should be obvious that, as soon as several packets are awaiting transmission at a node, the node has the choice of which one to transmit based on instantaneous conditions.

Performance gains shown, obviously. Little analysis, simulation only.

The Critical Transmitting Range for Connectivity in Sparse Wireless Ad Hoc Networks

This paper: The Critical Transmitting Range for Connectivity in Sparse Wireless Ad Hoc Networks, P. Santi, D. Blough, IEEE Trans. on Mobile Computing, Vol. 2, No. 1, Jan-Mar 2003, is excellent.

It computes the asymptotic relation between the range r, the side length l and the number n of nodes uniformly (randomly) distributed in a region of side length l, where two nodes are connected if within range r of each other. It establishes the values of r for which the graph is connected with high probability, or disconnected with high probability.

The main result is: in one dimension, r n > 2 l log(l) ensures connectivity, and r n < (1 - epsilon) l log(l) ensures disconnection.

In higher dimensions (2 and 3), connectivity is ensured for r^d n > k l^d log(l), with k a constant which depends on the dimension. If r^d n < l^d, the graph is disconnected.

There is a gap between the connectivity and disconnectivity asymptotes in dimensions 2 and 3. Also, the paper conjectures that in dimension 1, r n < 2 l log(l) should ensure disconnection.

Most of the proofs come from a pigeon-hole argument, which is relatively simple and elegant.
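
As a quick numerical check of the 1-d transition (my own sketch, in arbitrary units): n nodes uniform on [0, l] form a connected graph exactly when no gap between consecutive nodes exceeds r.

    import numpy as np

    rng = np.random.default_rng(1)

    def connected_fraction(n, l, r, trials=2000):
        hits = 0
        for _ in range(trials):
            x = np.sort(rng.uniform(0, l, n))
            hits += np.diff(x).max() <= r    # connected iff largest gap <= r
        return hits / trials

    l, n = 1000.0, 1000
    for factor in (0.5, 1.0, 2.0, 4.0):
        r = factor * l * np.log(l) / n       # so that r n = factor * l log(l)
        print(factor, connected_fraction(n, l, r))

The connection probability indeed jumps from essentially 0 to essentially 1 as r n crosses the l log(l) region.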

Monday, February 19, 2007

A High Throughput Path Metric for Multi-Hop Wireless Routing.

A High Throughput Path Metric for Multi-Hop Wireless Routing. Douglas De Couto, Daniel Aguayo, John Bicket, Robert Morris, Wireless Networks 11, 419-34, 2005.

I had read a version of this paper a little while ago, but some reviewing duty sent me back to it to confirm a thing or two. It introduces the ETX metric, an additive measure used to find the path between two nodes of a wireless mesh network which minimizes the expected number of transmissions.

That it is additive is a good thing, and it does not have the issues of other metrics: shortest hop count might pick long links with bad quality, and the actual performance of shortest path routing is demonstrated in the paper to be quite poor. Using the product of delivery ratios (which is additive too if you take logs), or the minimal per-link delivery ratio (again, additive in a (max,+) algebra), will favor a multi-hop path with good per-link delivery ratios over a single link with a lesser one; but the relaying interference of the multi-hop path will divide the bandwidth of the path by two, so neither is a good metric.

ETX measures the delivery ratio of a link in each direction and is computed as the inverse of the product of these ratios. The higher the ETX, the higher the expected number of transmissions, so the routing should choose the path with the lowest cumulative ETX. Performance is actually improved, especially as paths get longer.
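
The metric in a few lines (the delivery ratios are made up):

    # per-link ETX: inverse of the product of the forward and reverse
    # delivery ratios; the path metric is the sum over its links
    def link_etx(df, dr):
        return 1.0 / (df * dr)

    path = [(0.9, 0.9), (0.5, 0.5), (1.0, 0.8)]  # (forward, reverse) per link
    print(sum(link_etx(df, dr) for df, dr in path))
    # 1/0.81 + 1/0.25 + 1/0.8 ~= 6.48 expected transmissions end-to-end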

What is nice about the paper: it is very nicely explained. It is thinner in technical content than I remembered (basically, it is the idea of the ETX metric which matters), but each choice is very nicely argued and motivated, and each decision made very clear. So it is a very nice read.

The main thing I am now taking from the paper is that the metric is the inverse of the packet delivery ratio in both directions. However, one direction is the ACK direction, and this can be improved by sending multiple ACKs, or by adding enough redundancy to the ACK to decode it successfully every time. Then your ETX drops. For instance, if you have a 1/2 delivery ratio in each direction, your ETX is 4. If you add enough coding redundancy to decode the ACK successfully every time, your ETX drops to 2. That's a pretty major gain. It is obvious, but the paper does not suggest doing it.

Object Replication Strategies in Content Distribution Networks

Jussi Kangasharju, James Roberts, Keith Ross, Computer Communication Journal, 25 (2002) 376-83.

This paper looks at the Autonomous System level distribution of content (as in the Akamai content distribution network, CDN, for instance) and how to replicate the content so that, under a limited memory space constraint, the distance from the end user to the looked-for data object is minimized.

The paper shows that this is a variant of the knapsack problem and thus is NP-complete, so it cannot be solved exactly at any realistic scale. But they provide some heuristics, which show that a network-wide collaboration (that is, a global management of the CDN) beats all the other heuristic strategies (random, popularity-based, greedy single and greedy global). Greedy single is the strategy of replicating, at each AS, the content most requested by that AS's clients. Greedy global is the strategy of finding, over all ASes, the object whose replication would decrease the cost the most, and iterating until all storage space is filled.
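
A toy sketch of greedy global, where the demands, distances and storage budgets are all made-up assumptions:

    dist = [[0, 2, 5],
            [2, 0, 3],
            [5, 3, 0]]                       # hop distances between 3 ASes
    demand = {"o1": [10, 1, 1], "o2": [1, 1, 8]}  # requests per object, per AS
    replicas = {"o1": {0}, "o2": {0}}        # both objects start at the origin, AS 0
    slots = {1: 1, 2: 1}                     # spare storage per AS (origin is full)

    def total_cost():
        return sum(demand[o][i] * min(dist[i][j] for j in replicas[o])
                   for o in demand for i in range(3))

    def cost_if_added(o, a):
        replicas[o].add(a)
        c = total_cost()
        replicas[o].remove(a)
        return c

    while True:
        options = [(o, a) for o in demand for a, free in slots.items()
                   if free and a not in replicas[o]]
        if not options:
            break
        # replicate the (object, AS) pair that cuts the total cost the most
        o, a = min(options, key=lambda oa: cost_if_added(*oa))
        replicas[o].add(a)
        slots[a] -= 1

    print(replicas)  # -> {'o1': {0, 1}, 'o2': {0, 2}}: each object moves toward its demand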

The paper is interesting, especially since the NP-completeness of the CDN problem they consider basically subsumes the NP-completeness of the problem I am considering, and thus relieves me from that theoretical work: all I have to do is evaluate the proper heuristics. This also makes the paper a very easy read, as there is no equation, only the comparison of the performance of the different strategies.

Sunday, February 18, 2007

Little Tom Thumb Went Straight Home.

C. Westphal, Little Tom Thumb Went Straight Home, in Proc. of Infocom 07, Anchorage, Alaska.

Content and Service Replication Strategies in Multi-hop Wireless Mesh Networks.

by Shudong Jin, Limin Wang. MSWiM'05, Oct. 10-13, Montreal, Quebec. Paper here.

The paper studies the impact of the replication of content in a wireless mesh network. They show that in a 2-d mesh network, the optimal number of replicas of an object is proportional to $p^{2/3}$, where $p$ is the access probability of the object.

The result is pretty, as it is a bit counter-intuitive (a replication rate proportional to the access probability is what would seem natural), and the derivation is actually rather simple and straightforward.
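
To see what the $p^{2/3}$ scaling does, here is a tiny illustration under assumptions of mine (a Zipf-like popularity over 5 objects and a budget of 100 replica slots): popular objects still get more replicas, but the allocation is much less skewed than a proportional-to-$p$ one.

    raw = [1 / (i + 1) for i in range(5)]      # Zipf-like popularity (assumed)
    p = [x / sum(raw) for x in raw]            # access probabilities
    w = [x ** (2 / 3) for x in p]              # the paper's p^{2/3} scaling
    budget = 100                               # total replica slots (assumed)

    for pi, ri in zip(p, (budget * x / sum(w) for x in w)):
        print(f"p={pi:.3f} -> {ri:.1f} replicas")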

The paper also opens a research question: how to account for the wireless physical layer. In the paper, it is abstracted as a way to define the connectivity, and thus the topology, of the network, but the actual impact of the wireless interface on the placement of the replicated content could have some other implications. An interesting idea to look into.