Signal Space Diversity

Although it’s been around for more than a decade, I only recently learned of an ingenious technique to combat fading in wireless channels. While researching a seemingly unrelated topic, I kept coming across this citation…


J. Boutros and E. Viterbo, “Signal space diversity: a power and bandwidth efficient diversity technique for the fading channel,” IEEE Trans. Info. Theory, vol. 44, no. 4, pp. 1453–1467, July 1998.

…so I downloaded a copy and decided to give it the once over twice.

The paper begins by restating the common problem posed by a multipath wireless channel: deep fades result in lost bits or constellation symbols. You therefore usually have to employ something like an error correction code to fill in the missing puzzle pieces at the receiver side. And of course, having to send the extra parity in a power- and bandwidth-constrained system lowers your throughput.

But on page 2 of this paper, the authors make a promise that seems too good to be true. Did they just state that by simply rotating the normal QAM constellations you can significantly improve performance in a fading environment with no reduction in throughput?! They start with a simple example for QPSK (4-QAM).

[Figure: Normal vs. rotated QAM constellations]

In the above plot you see the normal QPSK constellation points (red) in the I-Q plane, and you also see a rotated version (blue). Now assume for the sake of argument that a constellation point you are transmitting through the channel experiences a deep fade on only the I or the Q axis, but not both.

The normal 4-QAM constellation maps 2 bits independently: one bit determines the amplitude (or polarity) on the I axis, and the other bit does the same on the Q axis. Therefore, if a fade deeply attenuates one of these axes, our four noise-free constellation points begin to collapse into two. One bit ends up with very little protection against noise and may easily be demodulated incorrectly. By contrast, the rotated constellation modulates each bit onto both I and Q, so if you lose one axis, you don’t lose all information about either bit.
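To see the collapse concretely, here’s a minimal numpy sketch (a Python stand-in for my Matlab experiments; the 27° angle is arbitrary and chosen only for illustration, whereas the paper derives optimal rotation angles):

```python
import numpy as np

bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Normal QPSK: each bit independently sets the polarity of one axis.
normal = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# Rotated QPSK: the same points spun by an angle, so every bit now
# influences both I and Q.
rotated = normal * np.exp(1j * np.deg2rad(27))

# A deep fade that crushes only the I axis:
fade = lambda x: 0.01 * x.real + 1j * x.imag

print(np.round(fade(normal), 3))   # collapses to ~2 points; one bit is lost
print(np.round(fade(rotated), 3))  # all 4 points stay distinguishable in Q
```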

This is all very well and good, but we know that on real wireless channels a deep attenuation will probably affect I and Q in a highly correlated way. We wouldn’t expect to often see a fade on one axis but not the other. Unless…

Unless we put an I/Q interleaver into our system, such that after de-interleaving on the RX side, the I and Q come from different points in time and/or different carrier frequencies. The process is illustrated below with α’s representing channel attenuations or fades, I and Q representing our rotated symbol coordinates, and the subscripts representing time (or frequency).


[Figure: Component interleaver]

With the I/Q correlation broken, we may very well see a fade on only I or Q.
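Here’s a minimal sketch of one way to do the component interleaving, assuming the simplest scheme consistent with the figure: delay the Q stream by one symbol relative to the I stream. Any permutation that separates a symbol’s I and Q in time (or frequency) would work:

```python
import numpy as np

def iq_interleave(syms: np.ndarray) -> np.ndarray:
    """Pair each symbol's I with the Q of a different time instant."""
    return syms.real + 1j * np.roll(syms.imag, 1)

def iq_deinterleave(syms: np.ndarray) -> np.ndarray:
    """Undo the Q delay. After a faded channel, the reassembled symbol's
    I and Q now carry two *different* fades."""
    return syms.real + 1j * np.roll(syms.imag, -1)

tx = np.array([1 + 1j, 2 + 2j, 3 + 3j])
print(iq_interleave(tx))  # [1.+3.j, 2.+1.j, 3.+2.j]
assert np.allclose(iq_deinterleave(iq_interleave(tx)), tx)
```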

Just to prove to myself that I was getting the point, I fired up Matlab and coded up this simple example. Ignoring the maximum likelihood demodulator presented in the paper, at the receive side after de-interleaving I simply applied the opposite phase rotation to the incoming symbols to get them back to the symmetrical, normal 4-QAM, then declared each bit a 0 or a 1 based on whether the symbol landed left or right of the Q axis (for one bit) and above or below the I axis (for the other). I ended up with a curve that wasn’t any better than normal 4-QAM. In fact, it had a sort of error floor at high SNR.
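In code, that naive demodulator looks something like this (a numpy sketch of my approach, not the paper’s detector):

```python
import numpy as np

def naive_demod(rx: np.ndarray, theta: float) -> np.ndarray:
    """De-rotate de-interleaved RX symbols back to axis-aligned QPSK,
    then decide each bit by which side of an axis the symbol fell on.
    Because the per-axis fades are ignored, this develops an error floor."""
    derot = rx * np.exp(-1j * theta)
    bit_i = (derot.real < 0).astype(int)  # left/right of the Q axis
    bit_q = (derot.imag < 0).astype(int)  # above/below the I axis
    return np.column_stack((bit_i, bit_q))
```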

[Figure: First attempt, RawBER vs. SNR]

Note: “RawBER” on the y-axis refers to uncoded bit error rate

What was I missing? As best I could tell from examining the rotated vs. non-rotated constellations, the combination of rotating and interleaving, when reversed on the RX side, produced a “star”-shaped constellation instead of the usual round “cloud”. No problem there, except the stars are heavy-tailed, resulting in occasional error events even at high SNR.

[Figure: De-rotated RX constellations, normal round “cloud” vs. heavy-tailed “star”]

So I decided to take the authors’ word for it that maximum likelihood detection was the way to go. This means you compute the squared Euclidean distance between each de-interleaved RX symbol and every possible TX constellation point, scaled by the channel attenuations (assumed known). You have to take care to code this up correctly: if your de-interleaved RX symbol was attenuated by α1 on the I axis and α2 on the Q axis, you have to multiply the I and Q of each TX candidate by those same, separate α’s when you compute the Euclidean distance.

So, assuming your de-interleaved RX complex symbol is denoted by r, you pick the rotated TX constellation point x that minimizes (with subscripts I and Q denoting real and imaginary parts):

(r_I − α1·x_I)² + (r_Q − α2·x_Q)²
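Here is that metric in code, a minimal numpy sketch assuming the receiver knows both per-axis fades:

```python
import numpy as np

def ml_detect(r: complex, alpha_1: float, alpha_2: float,
              tx_points: np.ndarray) -> int:
    """Return the index of the rotated TX point minimizing the ML metric.
    Note the TX candidates are scaled by the *separate* per-axis fades;
    skipping that step is what broke my first attempt."""
    metric = (r.real - alpha_1 * tx_points.real) ** 2 \
           + (r.imag - alpha_2 * tx_points.imag) ** 2
    return int(np.argmin(metric))
```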

And that proved to do the trick, as you can see below. For reference, I’ve also included the AWGN curve on the plot, which represents the additional performance gain available as the diversity order approaches infinity (the diversity order of this scheme is 2).

[Figure: RawBER vs. SNR for AWGN, signal space diversity, and the plain Rayleigh fading channel]

And that takes us through only about page 5 of the 34-page paper. The remaining pages describe how to extend the scheme to higher dimensions so that you can keep moving your performance curve to the left with no effect on throughput (at the cost of increased RX-side complexity).

If you’re wondering how that’s done, given that we seem to have only 2 dimensions to work with (I and Q), I believe it involves a sort of CDMA-like spreading where, for example, 5 linear combinations of 5 constellation points are transmitted over 5 TX symbols. Thus a deep fade on any one of these “precoded” TX symbols wipes out only 1/5 of the energy of each of the 5 points, rather than all of the energy of one symbol. A rough sketch of that idea follows.
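Here’s a very rough numpy sketch of that reading (speculative: the paper constructs specific rotation matrices with good diversity properties, while a random orthonormal matrix here merely shows the spreading mechanics):

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
# Random orthonormal "rotation" matrix via QR decomposition.
R, _ = np.linalg.qr(rng.standard_normal((n, n)))

s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])  # n real constellation components
tx = R @ s      # each TX symbol is a linear combination of all n points

faded = tx.copy()
faded[2] = 0.0  # a deep fade wipes out one TX symbol entirely

# After inverting the rotation, every component is only mildly perturbed;
# no single constellation point lost all of its energy.
print(np.round(R.T @ faded, 2))
```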

Hope to get time one day to explore this as well.

Would you like a copy of the Matlab code I used for this study?