wi-lab committed · verified
Commit 21b55b0 · 1 Parent(s): 12e057f

Update README.md

Files changed (1): README.md (+10 -0)
README.md CHANGED
@@ -19,6 +19,16 @@ datasets:

 LWM is a powerful **pre-trained** model developed as a **universal feature extractor** for wireless channels. As the world's first foundation model crafted for this domain, LWM leverages transformer architectures to extract refined representations from simulated datasets, such as DeepMIMO and Sionna, and real-world wireless data.

+ <!--
+ ### 🎥 Watch the tutorial
+
+ Check out this tutorial video to see the model in action! Click on the thumbnail below to watch it on YouTube.
+
+ [![Watch the tutorial](https://img.youtube.com/vi/YOUTUBE_VIDEO_ID/0.jpg)](https://www.youtube.com/watch?v=YOUTUBE_VIDEO_ID)
+
+ *In this video, we walk through the LWM paper, explain how the model works, and demonstrate its application for downstream tasks with practical examples. You'll find step-by-step instructions and detailed insights into the model's output.*
+ -->
+
 ### How is LWM built?

 The LWM model’s structure is based on transformers, allowing it to capture both **fine-grained and global dependencies** within channel data. Unlike traditional models that are limited to specific tasks, LWM employs a **self-supervised** approach through our proposed technique, Masked Channel Modeling (MCM). This method trains the model on unlabeled data by predicting masked channel segments, enabling it to learn intricate relationships between antennas and subcarriers. Utilizing **bidirectional attention**, LWM interprets the full context by attending to both preceding and succeeding channel segments, resulting in embeddings that encode comprehensive spatial information, making them applicable to a variety of scenarios.
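The Masked Channel Modeling objective described in that paragraph can be illustrated with a short sketch. This is a minimal, hypothetical example rather than the LWM codebase itself: the channel dimensions, patch length, 15% masking ratio, and zero-token masking are all assumptions made for illustration.

```python
# Minimal sketch of a Masked Channel Modeling (MCM) training step.
# NOT the official LWM code: shapes, patching, and mask ratio are assumptions.
import torch
import torch.nn as nn

N_ANT, N_SC = 32, 32                      # assumed antenna / subcarrier grid
PATCH, D_MODEL, MASK_RATIO = 16, 64, 0.15 # assumed patch length, width, ratio

# Treat a complex channel matrix as real-valued "channel segments" (patches).
h = torch.randn(8, N_ANT, N_SC, 2)        # batch of channels (Re/Im stacked)
patches = h.reshape(8, -1, PATCH)         # (batch, n_segments, PATCH)

embed = nn.Linear(PATCH, D_MODEL)
encoder = nn.TransformerEncoder(          # bidirectional self-attention
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(D_MODEL, PATCH)          # reconstructs the masked segment

tokens = embed(patches)
mask = torch.rand(tokens.shape[:2]) < MASK_RATIO   # pick segments to mask
tokens = torch.where(mask.unsqueeze(-1),           # zero out masked tokens
                     torch.zeros_like(tokens),     # (a learned mask token
                     tokens)                       #  would be used in practice)

pred = head(encoder(tokens))              # each token attends to full context
loss = ((pred - patches)[mask] ** 2).mean()        # loss on masked segments only
loss.backward()
```

Because the encoder must reconstruct each masked segment from both the preceding and the succeeding segments, this kind of objective is what yields the bidirectional, context-aware embeddings the README text describes.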