About Robust Beamforming
In the paper you briefly mentioned Robust Beamforming, and I would like to know if you could share the code you used for this kind of inference task.
You can find the script for this task here. Below is the overall process:
Clone the Model and Datasets: Clone the LWM model and, if needed, the default datasets as instructed here.
Generate Ray-Traced Channels: Choose a scenario and generate the corresponding ray-traced DeepMIMO channels.
Compute Optimal Beamforming Vectors: Compute the optimal beamforming vectors for the channels. At this stage, you have the channels as the input (source) and the beamforming vectors as the output (target). Since real-world channels are often noisy or imperfect, it is essential to have a robust approach to map noisy channels to their optimal beamforming vectors. Given that LWM is pre-trained on masked and noisy datasets, it is highly robust to noise, making it an excellent choice for this task in practice.
Generate LWM Embeddings: Use LWM to generate embeddings for the original raw channels. Ensure that the 'gen_raw' argument of the 'tokenizer' function is set to 'False' for this task. This applies masking to the input channels, and the resulting embeddings represent the noisy channels. The final 'dataset' variable will contain these embeddings.
Train and Test Your Downstream Model: Use these embeddings as inputs to train and test your downstream model, enabling it to predict the previously computed beamforming vectors.
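As a rough sketch of the "compute optimal beamforming vectors" step, here is one common choice of target, eigen-beamforming via the dominant singular vector. This is an assumption for illustration; the actual script may use a different optimality criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for ray-traced DeepMIMO channels: (num_samples, 32, 32) complex
channels = rng.standard_normal((4, 32, 32)) + 1j * rng.standard_normal((4, 32, 32))

def eigen_beamformer(H):
    """Dominant right singular vector of H: one common notion of the
    'optimal' transmit beamforming vector for a MIMO channel."""
    _, _, Vh = np.linalg.svd(H)
    return Vh[0].conj()  # (32,) vector, unit-norm by construction

# Targets for the downstream model: one 32x1 beamforming vector per channel
targets = np.stack([eigen_beamformer(H) for H in channels])  # (4, 32) complex
```

The channels (or their LWM embeddings) then serve as inputs and these vectors as regression targets.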
Feel free to reach out if you have any questions or need further assistance!
Thank you for your answer.
When you compare with the raw channels, as described in the paper, I see two possible options for how this was implemented:
- You use two different downstream models:
- M1: The channels as input (32x32 complex numbers = 1024 complex numbers) and beamforming vectors as output (32x1 complex numbers);
- M2: LWM embeddings as input (128x64 real numbers = 8192 real numbers) and beamforming vectors as output (32x1 complex numbers)
- You use the same downstream model (i.e. M1), which has channels as input (32x32 complex numbers = 1024 complex numbers) and beamforming vectors as output (32x1 complex numbers). The LWM model is used to denoise the channels that are then used as input to the downstream model M1.
Could you clarify which of these approaches was followed?
From my understanding, I suppose you used approach 1, but in this case the numbers of parameters of M1 and M2 are different.
Did you use the fully-connected neural network treating the real and imaginary parts of the complex numbers as two separate inputs?
We followed the first approach. To ensure a fair comparison, we used the same model architecture for both cases but adjusted the number of layers and neurons so that both models have a comparable parameter count. Fully-connected networks were used as the downstream model architecture.
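To make the parameter-matching idea concrete, here is a small sketch with hypothetical layer sizes (not the ones used in the paper): the hidden widths of the two fully-connected models are chosen so their total parameter counts roughly match despite the different input dimensions.

```python
def mlp_params(sizes):
    """Parameter count of a fully-connected net: weights + biases per layer."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(sizes[:-1], sizes[1:]))

# M1: raw channels, 32x32 complex -> 2048 real inputs; output 32x1 complex -> 64 reals
m1 = mlp_params([2048, 512, 256, 64])
# M2: LWM embeddings, 128x64 real = 8192 inputs; narrower first hidden layer
m2 = mlp_params([8192, 128, 256, 64])
# With these (illustrative) widths, m1 and m2 differ by under ~10%
```

The specific widths here are assumptions purely to show how the layer sizes can be adjusted to equalize capacity.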
For feeding complex raw channels (imperfect raw channels in this task) to the downstream model, you have two options:
- Split the complex raw channels into real and imaginary components, then concatenate them as separate inputs to the model.
- Set the following in your code:

```python
input_types = ['cls_emb', 'channel_emb', 'raw']
selected_input_type = input_types[2]
```
This ensures that LWM inference is bypassed, and the real-valued raw channels are directly output as the 'dataset'.
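The first option (splitting into real and imaginary components) can be sketched as follows, assuming the channels are stored as a complex NumPy array:

```python
import numpy as np

def to_real_input(channels):
    """Flatten complex channels and concatenate real and imaginary parts,
    giving one real-valued feature vector per sample."""
    flat = channels.reshape(channels.shape[0], -1)         # (N, 1024) complex
    return np.concatenate([flat.real, flat.imag], axis=1)  # (N, 2048) real

H = np.zeros((8, 32, 32), dtype=np.complex64)  # placeholder channel batch
X = to_real_input(H)                           # shape (8, 2048), dtype float32
```

Either way, the downstream model only ever sees real-valued inputs.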
Please let us know if you have further questions or need additional clarification!