Sadjad Alikhani committed
Commit 4ba6cbe • 1 Parent(s): fdf423e

Update README.md

Files changed (1)
  1. README.md +35 -37
README.md CHANGED
@@ -1,5 +1,8 @@
+ ```markdown
# 📡 **LWM: Large Wireless Model**

+ **[🚀 Click here to try the Interactive Demo!](https://huggingface.co/spaces/your-space-name)**
+
Welcome to the **LWM** (Large Wireless Model) repository! This project hosts a pre-trained model designed to process and extract features from wireless communication datasets, specifically the **DeepMIMO** dataset. Follow the instructions below to clone the repository, load the data, and perform inference with LWM.

---
@@ -63,34 +66,9 @@ print("All required functions imported successfully.")

---

- ### 2. **Load the LWM Model**
-
- Once the repository is cloned, load the pre-trained **LWM** model using the following code:
-
- ```python
- # Step 5: Load the LWM model (with flexibility for the device)
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- print(f"Loading the LWM model on {device}...")
- model = LWM.from_pretrained(device=device)
- ```
-
- ---
-
- ### 3. **Load the DeepMIMO Dataset**
-
- Load the DeepMIMO dataset using the pre-defined loading function:
-
- ```python
- # Step 6: Load dataset (direct call, no module prefix)
- print("Loading DeepMIMO dataset...")
- deepmimo_data = load_DeepMIMO_data()
- ```
-
- ---
-
- ### 4. **Tokenize the DeepMIMO Dataset**
+ ### 2. **Load and Tokenize the DeepMIMO Dataset**

- Tokenize the dataset based on specific scenarios from DeepMIMO. Below is a list of available scenarios and their links for more information:
+ Before loading the LWM model, you need to load the DeepMIMO dataset and select specific scenarios for tokenization. Below is a list of available scenarios and their links for more information:

| **Scenario** | **City** | **Link to DeepMIMO Page** |
|---------------|---------------|----------------------------------------------------------------------------------------------------------------|
@@ -107,26 +85,45 @@ Tokenize the dataset based on specific scenarios from DeepMIMO. Below is a list
- **Subcarriers**: 32
- **Paths**: 20

- #### **Tokenization Code**:
- Select and tokenize specific scenarios by adjusting the `scenario_idxs`. In the example below, we select the first two scenarios.
+ #### **Load and Tokenize Code**:
+ Select and load specific scenarios by adjusting the `scenario_idxs`. In the example below, we select the first two scenarios and tokenize the data.

```python
- # Step 7: Tokenize the dataset
- scenario_idxs = torch.arange(2) # Adjust the number of scenarios you want
- print("Tokenizing the dataset...")
+ # Step 5: Load dataset and select specific scenarios
+ print("Loading and tokenizing DeepMIMO dataset...")
+
+ # Load the DeepMIMO dataset
+ deepmimo_data = load_DeepMIMO_data()
+
+ # Select scenarios to tokenize
+ scenario_idxs = torch.arange(2) # Adjust the number of scenarios as needed
+
+ # Tokenize the dataset
preprocessed_chs = tokenizer(deepmimo_data, scenario_idxs, gen_raw=True)
+ print("Dataset tokenized successfully.")
```

- - The dataset will be tokenized according to the selected scenarios and preprocessing configurations.
+ ---
+
+ ### 3. **Load the LWM Model**
+
+ After loading and tokenizing the DeepMIMO dataset, load the pre-trained **LWM** model using the following code:
+
+ ```python
+ # Step 6: Load the LWM model (with flexibility for the device)
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+ print(f"Loading the LWM model on {device}...")
+ model = LWM.from_pretrained(device=device)
+ ```

---

- ### 5. **LWM Inference**
+ ### 4. **LWM Inference**

- Once the dataset is tokenized, generate either **raw channels** or the **inferred LWM embeddings** by choosing the input type.
+ Once the dataset is tokenized and the model is loaded, generate either **raw channels** or the **inferred LWM embeddings** by choosing the input type.

```python
- # Step 8: Generate the dataset for inference
+ # Step 7: Generate the dataset for inference
input_type = ['cls_emb', 'channel_emb', 'raw'][1] # Modify input type as needed
dataset = dataset_gen(preprocessed_chs, input_type, model)
```
@@ -145,7 +142,7 @@ You can choose between:
Finally, use the generated dataset for your downstream tasks, such as classification, prediction, or analysis.

```python
- # Step 9: Print results
+ # Step 8: Print results
print(f"Dataset generated with shape: {dataset.shape}")
print("Inference completed successfully.")
```
@@ -156,3 +153,4 @@ print("Inference completed successfully.")

- **Python 3.x**
- **PyTorch**
+ - **Git**
 
 
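The tokenization step in the updated README selects the first two scenarios with `torch.arange(2)`. As a minimal sketch, assuming `deepmimo_data` and `tokenizer` come from the walkthrough above, specific scenarios from the table can also be picked by index; the indices below are placeholders.

```python
import torch

# Placeholder indices: pick particular scenarios instead of the first two.
# Valid values depend on how many scenarios load_DeepMIMO_data() returns.
scenario_idxs = torch.tensor([0, 3])

# Tokenize only the selected scenarios, as in the walkthrough above.
preprocessed_chs = tokenizer(deepmimo_data, scenario_idxs, gen_raw=True)
```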
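The inference step picks one `input_type` by indexing into `['cls_emb', 'channel_emb', 'raw']`. A small sketch, assuming `preprocessed_chs` and `model` were produced by the steps above and that `dataset_gen` returns a tensor (the README itself prints `dataset.shape`), for comparing the three output types side by side:

```python
# Generate each output type once and report its shape (illustrative only).
for input_type in ['cls_emb', 'channel_emb', 'raw']:
    dataset = dataset_gen(preprocessed_chs, input_type, model)
    print(f"{input_type}: shape {tuple(dataset.shape)}")
```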
 
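The README leaves the downstream task (classification, prediction, or analysis) to the reader. Below is a hypothetical sketch of a linear probe trained on the generated embeddings; the class count and labels are placeholders, and the flattening assumes `dataset` is a real-valued tensor with one sample per row.

```python
import torch
import torch.nn as nn

# Flatten each sample's embedding into a single feature vector.
features = dataset.float().reshape(dataset.shape[0], -1)

num_classes = 4                                               # assumption: task-specific
labels = torch.randint(0, num_classes, (features.shape[0],))  # placeholder labels

# Train a simple linear probe on top of the frozen LWM features.
probe = nn.Linear(features.shape[1], num_classes)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(probe(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```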
 
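The requirements list Python 3.x, PyTorch, and Git. A quick, optional check that the environment matches, using only standard-library and PyTorch calls:

```python
import shutil
import sys

import torch

print(f"Python version: {sys.version.split()[0]}")    # expect 3.x
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Git on PATH: {shutil.which('git') is not None}")
```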