Sadjad Alikhani committed
Commit 476b019 (parent: 26fea1c)
Update README.md

README.md
---

### 2. **Load the DeepMIMO Dataset**

Before tokenizing and processing the data, you need to load the **DeepMIMO** dataset. Below is a list of available datasets and their links for more information:

| **Dataset** | **City** | **Link to DeepMIMO Page** |
|-------------|--------------|---------------------------|
| Dataset 0 | Denver | [DeepMIMO City Scenario 18](https://www.deepmimo.net/scenarios/deepmimo-city-scenario18/) |
| Dataset 1 | Indianapolis | [DeepMIMO City Scenario 15](https://www.deepmimo.net/scenarios/deepmimo-city-scenario15/) |
| Dataset 2 | Oklahoma | [DeepMIMO City Scenario 19](https://www.deepmimo.net/scenarios/deepmimo-city-scenario19/) |
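For bookkeeping in downstream scripts, the dataset-to-city table can be mirrored as a small lookup. This is a hypothetical convenience only; `DATASET_CITIES` and `describe_dataset` are illustrative names, not part of the LWM repository:

```python
# Hypothetical helper (not part of the LWM repo): dataset index -> city lookup
DATASET_CITIES = {0: "Denver", 1: "Indianapolis", 2: "Oklahoma"}

def describe_dataset(idx: int) -> str:
    """Return a short, human-readable label for a DeepMIMO dataset index."""
    return f"Dataset {idx}: {DATASET_CITIES[idx]}"

print(describe_dataset(1))  # Dataset 1: Indianapolis
```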

- **Subcarriers**: 32
- **Paths**: 20

#### **Load Data Code**:
Select and load specific datasets by adjusting the `dataset_idxs`. In the example below, we select the first two datasets.

```python
# Step 5: Load the DeepMIMO dataset
print("Loading the DeepMIMO dataset...")

# Load the DeepMIMO dataset
deepmimo_data = load_DeepMIMO_data()

# Select datasets to load
dataset_idxs = torch.arange(2)  # Adjust the number of datasets as needed
print("DeepMIMO dataset loaded successfully.")
```
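Note that `torch.arange(2)` just produces the index tensor `[0, 1]`, selecting Dataset 0 (Denver) and Dataset 1 (Indianapolis). A plain-Python sketch of the same index arithmetic, useful for sanity-checking which cities a given choice covers:

```python
# torch.arange(n) yields indices 0..n-1; shown here with plain Python ranges
first_two = list(range(2))   # [0, 1] -> Denver and Indianapolis
all_three = list(range(3))   # [0, 1, 2] -> add Oklahoma as well
print(first_two, all_three)  # [0, 1] [0, 1, 2]
```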
---

### 3. **Tokenize the DeepMIMO Dataset**

After loading the data, tokenize the selected **DeepMIMO** datasets. This step prepares the data for the model to process.

#### **Tokenization Code**:

```python
# Step 6: Tokenize the dataset
print("Tokenizing the DeepMIMO dataset...")

# Tokenize the loaded datasets
preprocessed_chs = tokenizer(deepmimo_data, dataset_idxs, gen_raw=True)
print("Dataset tokenized successfully.")
```
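To build intuition for what tokenization does here, the toy sketch below splits a 2-D grid into fixed-size patch "tokens". It is purely illustrative: `to_patch_tokens`, the 4x4 grid, and the patch size are our own choices and do not reproduce the repository's actual tokenizer.

```python
# Illustrative only -- NOT the LWM tokenizer. Splits a 2-D grid into
# non-overlapping patch_size x patch_size "tokens", flattened row-major.
def to_patch_tokens(grid, patch_size):
    rows, cols = len(grid), len(grid[0])
    tokens = []
    for r in range(0, rows, patch_size):
        for c in range(0, cols, patch_size):
            tokens.append([grid[r + dr][c + dc]
                           for dr in range(patch_size)
                           for dc in range(patch_size)])
    return tokens

toy_grid = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 values 0..15
tokens = to_patch_tokens(toy_grid, 2)
print(len(tokens), tokens[0])  # 4 [0, 1, 4, 5]
```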
---

### 4. **Load the LWM Model**

Once the dataset is tokenized, load the pre-trained **LWM** model using the following code:

```python
# Step 7: Load the LWM model (with flexibility for the device)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Loading the LWM model on {device}...")
model = LWM.from_pretrained(device=device)
```

---

### 5. **LWM Inference**

Once the dataset is tokenized and the model is loaded, generate either **raw channels** or the **inferred LWM embeddings** by choosing the input type.

```python
# Step 8: Generate the dataset for inference
input_type = ['cls_emb', 'channel_emb', 'raw'][1]  # Modify input type as needed
dataset = dataset_gen(preprocessed_chs, input_type, model)
```
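The bracketed index in `input_type = ['cls_emb', 'channel_emb', 'raw'][1]` is ordinary Python list indexing, so index `1` selects `'channel_emb'`:

```python
# List indexing picks one of the three input types; index 1 -> 'channel_emb'
input_options = ['cls_emb', 'channel_emb', 'raw']
for i, option in enumerate(input_options):
    print(i, option)
print(input_options[1])  # channel_emb
```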

Finally, use the generated dataset for your downstream tasks, such as classification, prediction, or analysis.

```python
# Step 9: Print results
print(f"Dataset generated with shape: {dataset.shape}")
print("Inference completed successfully.")
```

## 📋 **Requirements**

- **Python 3.x**
- **PyTorch**