RADE V2 prototyping #42

Open · wants to merge 9 commits into main
Conversation

drowe67 (Owner) commented Jan 17, 2025

Exploring ideas to improve the 99% power bandwidth (spectral mask) over RADE V1. Just prototyping with "mixed rate" training and inference, i.e. no pilots or cyclic prefix (CP), and genie phase.

  • Worked out how to put a BPF in the training loop (a conv1d with training disabled); see the sketch after this list
  • Key take-away: phase only (0 dB PAPR) works quite well
  • clip-BPF x 3 produces a reasonable 99% power bandwidth, 0 dB PAPR, and good loss
  • Training for 200 epochs reduced the loss from 0.126 to 0.115; this is worth doing for any network
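
The BPF-in-the-training-loop and clip-BPF x 3 ideas might look something like this minimal PyTorch sketch. It is an illustration under stated assumptions, not the actual train.py code: the sample rate, passband edges, tap count, and clip level are invented, and a complex Tx signal would filter I and Q separately.

```python
import torch
import torch.nn.functional as F
from scipy.signal import firwin

Fs = 8000                                                # sample rate (assumption)
taps = firwin(101, [400, 2200], pass_zero=False, fs=Fs)  # illustrative passband

# Fixed (non-trainable) taps, shape (out_channels, in_channels, width);
# requires_grad is False so the optimizer never updates the filter, but
# gradients still flow back through the conv to the network output
h = torch.tensor(taps, dtype=torch.float32).view(1, 1, -1)

def bpf(x):
    # x: (batch, 1, num_samples) real samples; padding gives same-length output
    return F.conv1d(x, h, padding=h.shape[-1] // 2)

def clip_bpf(x, n=3, a=1.0):
    # clip-BPF x 3: hard amplitude clip then BPF, repeated, inside the
    # forward pass so the network learns to train through the distortion
    for _ in range(n):
        x = bpf(torch.clamp(x, -a, a))
    return x

def papr_db(x):
    # peak-to-average power ratio of the time-domain samples
    p = x ** 2
    return 10 * torch.log10(p.max() / p.mean())

y = clip_bpf(torch.randn(4, 1, 400))                     # dummy batch
print(f"PAPR after clip-BPF x 3: {papr_db(y).item():.2f} dB")
```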

Training:

python3 train.py --cuda-visible-devices 0 --sequence-length 400 --batch-size 512 --epochs 200 --lr 0.003 --lr-decay-factor 0.0001 ~/Downloads/tts_speech_16k_speexdsp.f32 250117_test --bottleneck 3 --h_file h_nc20_train_mpp.f32 --range_EbNo --plot_loss --auxdata --txbpf
Epoch 200 Loss 0.116

Testing:

./inference.sh 250117_test/checkpoints/checkpoint_epoch_200.pth wav/brian_g8sez.wav - --bottleneck 3 --auxdata --write_tx tx_bpf.f32 --write_latent z.f32 --txbpf
          Eb/No   C/No     SNR3k  Rb'    Eq     PAPR
Target..: 100.00  133.01   98.24  2000
Measured: 102.89          101.12       1243.47  0.00
loss: 0.121 BER: 0.000

octave:154> radae_plots; do_plots('z.f32','tx_bpf.f32')
bandwidth (Hz): 1255.813953 power/total_power: 0.990037

Red lines mark 99% power bandwidth:

[Screenshot from 2025-01-17 16-08-01: Tx spectrum with red lines marking the 99% power bandwidth]
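
For anyone wanting to sanity-check the measurement outside Octave, a rough Python equivalent of the 99% power bandwidth calculation is sketched below. The f32 layout (interleaved complex float32) and the 8 kHz sample rate are assumptions; radae_plots.m remains the authoritative version.

```python
import numpy as np

Fs = 8000                                            # sample rate (assumption)
x = np.fromfile("tx_bpf.f32", dtype=np.float32).view(np.complex64)

P = np.abs(np.fft.fftshift(np.fft.fft(x))) ** 2      # power spectrum
cdf = np.cumsum(P) / P.sum()

# occupied bandwidth: discard 0.5% of the total power from each spectral
# tail and measure the span of bins that holds the middle 99%
lo = np.searchsorted(cdf, 0.005)
hi = np.searchsorted(cdf, 0.995)
bw = (hi - lo) * Fs / len(P)                         # bins x bin width
print(f"bandwidth (Hz): {bw:.1f} power/total_power: {cdf[hi] - cdf[lo]:.6f}")
```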
