Since the last post I have explored some improvements to PAPR, and tested the 1600/2000 bit/s modes introduced there in real time. These tests have given me a little more insight into the problems of HF channels and a better understanding of the requirements, which has led to a new 1600 bit/s FreeDV mode specifically designed to meet them.
Peak/Average Power Ratio (PAPR) Improvements
The FreeDV FDMDV modem waveforms have a PAPR of around 12dB. That means the peak power of the waveform is 12dB higher than its average power, so the average power of the signal is limited to 12dB less than the peak power of the amplifier.
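As a quick illustration, PAPR is easy to compute for any sampled waveform. This is just a sketch, not code from the FreeDV sources:

```
% PAPR of a sampled waveform: peak power over mean power, in dB
function papr_dB = papr(tx)
  papr_dB = 10*log10(max(abs(tx)).^2 / mean(abs(tx).^2));
endfunction
```

A single sine wave works out to 3dB; summing many parallel carriers with independent phases (as FDMDV does) occasionally lines the peaks up, which is why the PAPR climbs to around 12dB.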
Now the average transmit power sets the Bit Error Rate (BER) of the received signal. So if we can reduce PAPR, we can raise the average power without clipping the amplifier, and improve our BER. Peter Martinez, G3PLX, suggested that some hard clipping of the modem waveform might reduce PAPR without adversely affecting performance. Here are the results from clipping, obtained using the fdmdv_ut simulation in an AWGN channel:
Test | Eb/No (dB) | SNR (dB) | PAPR (dB) | Clip | BER |
---|---|---|---|---|---|
(a) | 6.3 | 3.0 | 12.6 | 1.0 | 0.0134 |
(b) | 6.3 | 3.0 | 7.74 | 0.7 | 0.0175 |
(c) | 9.3 | 6.0 | 7.71 | 0.7 | 0.0024 |
(d) | 11.3 | 8.0 | 7.74 | 0.7 | 0.0 |
Test (a) is the baseline unclipped modem waveform, with a BER of 0.0134. If we clip the waveform to 0.7 of the peak level (b) in the same channel we get only a slight increase in BER, but the PAPR is reduced by about 5dB. This is very significant, as it potentially allows us to increase the average transmit power, for example by 3dB (c) or even 5dB (d), with significant reductions in the BER.
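To make the clipping step concrete, here is a minimal sketch of a hard clipper along the lines used above, with the 0.7 threshold from test (b). The actual fdmdv_ut implementation may differ in detail:

```
% hard clip a real modem waveform to a fraction of its peak level
function tx_clip = hard_clip(tx, clip)       % e.g. clip = 0.7 for test (b)
  thresh  = clip*max(abs(tx));               % clipping threshold
  tx_clip = tx;
  ind = find(abs(tx) > thresh);
  tx_clip(ind) = thresh*sign(tx(ind));       % clamp samples above the threshold
endfunction
```

With the peaks clamped, the whole waveform can then be scaled up so the clipped peaks sit at the amplifier's peak limit, raising the average power.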
This got me thinking about what happens in an SSB radio power amplifier if we drive it into compression, a somewhat softer way of reducing the peak level than hard clipping. So we tested various power levels on an IC7000 owned by Mark, VK5QI, using a nearby receiver running FreeDV to monitor the SNR of the received signal. In this case the SNR (as measured by FreeDV) represents distortion due to compression; the tx and rx were so close that there was no significant channel noise affecting the SNR.
Test | Av Tx Power (W) | SNR (dB) | BER |
---|---|---|---|
(a) | 8 | 18 | 0 |
(b) | 25 | 10.5 | 0 |
We found that at 25W average power the radio became quite hot, so a higher average power would not be practical. Now FreeDV users typically drive their tx at the 10-20W average level, which is a backoff of 7-10dB from the 100W peak power (10log10(100/20) ≈ 7dB, 10log10(100/10) = 10dB). This is similar to the 7dB PAPR obtained from the hard clipping experiments above. This is well into compression, but as we can see above the SNR is still quite high, so the distortion due to this much compression won’t affect the BER much.
So despite the PAPR reduction we found by experimenting with hard clipping above, it is not possible to get any further power benefit from PAPR reduction – we are already running the typical SSB power amplifier near its safe limits.
Codecs for HF DV
I spent some time watching the 1600 and 2000 bit/s modes introduced in the last post in action. I noticed they were still falling over on typical HF fading channels, especially in the 0-5dB SNR range. After some thought, I came up with some design ideas for HF DV modes:
- Intelligible speech at around 10% raw BER for QPSK (averaged over all carriers over a few seconds).
- For a FEC code to work at a raw BER of 10% we require a low code rate (e.g. 0.3), which means lots of parity bits (a high bit rate) and large block sizes.
- But we are constrained by latency to short blocks, and the code rate is constrained by the channel bit rate (e.g. it’s hard to get more than 2000 bit/s through this channel) – see the bit rate sketch just after this list.
- So it is difficult to protect all bits in the Codec with FEC.
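To put some rough numbers on the last two points (a back-of-envelope sketch, using the 2000 bit/s channel limit from above):

```
codec_rate   = 1600;                    % bit/s, e.g. the existing 1600 bit/s mode
code_rate    = 0.3;                     % low rate code needed at ~10% raw BER
channel_rate = codec_rate/code_rate     % = 5333 bit/s, far more than the
                                        % ~2000 bit/s an HF channel supports
```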
My previous tests show the excitation bits (pitch, voicing, energy) are the most sensitive. The excitation bits affect the entire spectrum, unlike LSPs where a bit error introduces distortion to a localised part of the spectrum.
So I dreamt up a new 1300 bit/s Codec 2 mode that has “less sensitive” bits. The 1300 bit/s Codec 2 mode only sends (scalar) pitch and energy once every 40ms, rather than twice as in the previous 1600 bit/s Codec 2 mode. This reduces the quality at 0 BER a little, but now there is “less to go wrong” (just 16 excitation bits) at high BER. Fewer excitation bits means they can be protected with just a few extra FEC bits, so I added a single Golay FEC word to protect 12 of the 16 excitation bits, giving a total bit rate of 1600 bit/s over the channel. This is the new 1600 bit/s FreeDV mode.
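The bit budget works out over a 40ms frame as follows. A sketch: I’m assuming the standard (23,12) Golay code here, which adds 11 parity bits, leaving one bit per frame spare:

```
frame_s      = 0.04;                           % Codec 2 frame period (40ms)
codec_bits   = 1300*frame_s                    % = 52 Codec 2 bits per frame
golay_parity = 23 - 12;                        % (23,12) Golay word: 11 parity bits
frame_bits   = codec_bits + golay_parity + 1   % + 1 spare bit = 64 bits
channel_rate = frame_bits/frame_s              % = 1600 bit/s over the channel
```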
This table lets you compare the 1300 bit/s and the previous 1600 bit/s Codec 2 modes; you might be able to hear a small difference:
Sample |
---|
hts1a 1300 bit/s |
hts1a 1600 bit/s |
ve9qrp 1300 bit/s |
ve9qrp 1600 bit/s |
The next table has samples of the 1300 bit/s Codec 2 + 300 bit/s FEC (the 1600 bit/s FreeDV mode) over several simulated and real world channels:
Sample |
---|
FreeDV V0.91 1400 bit/s CCIR poor channel 4dB |
1600 bit/s CCIR poor channel 4dB |
1600 bit/s VK2MEV in Newcastle to Adelaide 20m |
1600 bit/s K5WH to K0PFX with interfering SSB |
The signal sampled from VK2MEV had a reasonably high SNR (above 5dB) but a high average BER due to the constant fading:
Note the number of frames in the 10 to 15% error range, and the near constant fading on at least one carrier. As the fading is so regular, the SNR is fairly steady. The last plot shows the timing offset, which is slowly drifting downwards, indicating a sample clock difference between the tx and rx sound cards.
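As a side note, the slope of that timing offset plot gives a direct estimate of the sample clock difference. A sketch with made-up numbers (read the actual drift off the plot):

```
Fs            = 8000;                 % modem sample rate (Hz)
drift_samples = -10;                  % hypothetical: offset falls 10 samples ...
run_seconds   = 60;                   % ... over a 60 second run
clock_ppm     = drift_samples/(run_seconds*Fs)*1e6   % about -21 ppm
```

A few tens of ppm is a plausible difference between two consumer sound cards.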
The K5WH to K0PFX sample is an example of a SSB signal interfering with FreeDV:
You can hear the SSB and modem signals together in this sample of the off air signal; the FreeDV modem tones start up about 10 seconds in. The decoded speech (in the table above) holds up pretty well.
Command Line
octave:1> fdmdv_demod("/home/david/n4dvr.wav",1600*30,16,"mod_test_1600_n4dvr_001.err")
45952 bits 1321 errors BER: 0.0287 PAPR(rx): 22.53 dB
david@bear:~/codec2-dev/src$ ./c2enc 1300 ../raw/ve9qrp.raw - | ./fec_enc - - 1600 | ./insert_errors - - ../octave/mod_test_1600_n4dvr_001.err 64 | ./fec_dec - - 1600 | ./c2dec 1300 - - | play -t raw -r 8000 -s -2 -
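Reading the pipeline left to right: c2enc compresses the speech to Codec 2 at 1300 bit/s, fec_enc adds the FEC to bring it up to 1600 bit/s, insert_errors applies the error pattern captured by the fdmdv_demod run above, then fec_dec and c2dec reverse the process and play outputs the result. This lets us replay error patterns from real off air signals through the codec without going back on air.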
1600 seems to be better than the 1400. Thanks for all your efforts!
Quite an improvement at 1600. Thank you both for all your efforts!
Regarding PAPR reduction, I suspect we have two potential uses:
1. Making up for the operator who doesn’t know what level to set. Our email reports from beginners indicate that they run power much too high. For this reason I suggest leaving PAPR reduction enabled and providing an “amplifier provides 12 dB headroom” checkbox that can be set once they understand the power levels better.
2. Radios that are qualified to run FSK at reasonably high power today. Mine just turns both fans on high when it gets hot, and the S/N in the shack gets a bit worse 🙂
People will run their amplifiers to thermal cutoff (or worse, thermal runaway) with or without our assistance. We can provide them with some warnings and hope they heed them, but that’s really the best we can do.
Hi,
HP developed a measurement called EVM, error vector magnitude. I am not sure of the math involved, but I think it would be good to have the receiver compute and display it, so the transmit side can be monitored and adjusted, and the receive path optimized as well.
John
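For reference, EVM is straightforward to compute once the demod has the received symbols and the ideal constellation points they were decided to. A minimal Octave sketch (not currently part of FreeDV):

```
% EVM: rms error between received symbols and the ideal constellation
% points they map to, as a percentage of the rms reference level
function evm_percent = evm(rx_symbols, ref_symbols)
  err = rx_symbols - ref_symbols;
  evm_percent = 100*sqrt(mean(abs(err).^2)/mean(abs(ref_symbols).^2));
endfunction
```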
Drive-by comment: Why are you using a Golay code? It’s optimal _if_ you don’t know which bits are corrupted. Judging from your pictures, this is certainly false: you know which bits are reliable and which are not. You may want to use codes that can take advantage of information about which bits are unreliable (e.g. turbo codes). Rolling your own codes would be best of course. 🙂