# **LWM: Large Wireless Model**
Welcome to the **LWM** (Large Wireless Model) repository! This project hosts a pre-trained model designed to process and extract features from wireless communication datasets, specifically the **DeepMIMO** dataset. Follow the instructions below to clone the repository, load the data, and perform inference with LWM.
---
## **How to Use**
### 1. **Clone the Repository**
To get started, clone the Hugging Face repository to your local machine with the following Python code:
```python
import subprocess
import os
import sys
import importlib.util
import torch

# Hugging Face public repository URL
repo_url = "https://huggingface.co/sadjadalikhani/LWM"

# Directory where the repo will be cloned
clone_dir = "./LWM"

# Step 1: Clone the repository if it hasn't been cloned already
if not os.path.exists(clone_dir):
    print(f"Cloning repository from {repo_url} into {clone_dir}...")
    result = subprocess.run(["git", "clone", repo_url, clone_dir], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Error cloning repository: {result.stderr}")
        sys.exit(1)
    print(f"Repository cloned successfully into {clone_dir}")
else:
    print(f"Repository already cloned into {clone_dir}")

# Step 2: Add the cloned directory to the Python path
sys.path.append(clone_dir)

# Step 3: Helper that imports every public callable from a file into globals()
def import_functions_from_file(module_name, file_path):
    try:
        spec = importlib.util.spec_from_file_location(module_name, file_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for function_name in dir(module):
            if callable(getattr(module, function_name)) and not function_name.startswith("__"):
                globals()[function_name] = getattr(module, function_name)
        return module
    except FileNotFoundError:
        print(f"Error: {file_path} not found!")
        sys.exit(1)

# Step 4: Import functions from the repository
import_functions_from_file("lwm_model", os.path.join(clone_dir, "lwm_model.py"))
import_functions_from_file("inference", os.path.join(clone_dir, "inference.py"))
import_functions_from_file("load_data", os.path.join(clone_dir, "load_data.py"))
import_functions_from_file("input_preprocess", os.path.join(clone_dir, "input_preprocess.py"))

print("All required functions imported successfully.")
```
---
### 2. **Load the LWM Model**
Once the repository is cloned, load the pre-trained **LWM** model using the following code:
```python
# Step 5: Load the LWM model (with flexibility for the device)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f"Loading the LWM model on {device}...")
model = LWM.from_pretrained(device=device)
```
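As a quick sanity check after loading, you can confirm the model's size and device placement (a minimal sketch, assuming `LWM.from_pretrained` returns a standard `torch.nn.Module`):
```python
# Sanity check: assumes the loaded model is a torch.nn.Module
num_params = sum(p.numel() for p in model.parameters())
print(f"LWM loaded with {num_params:,} parameters")
print(f"First parameter on device: {next(model.parameters()).device}")
```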
---
### 3. **Load the DeepMIMO Dataset**
Load the DeepMIMO dataset using the pre-defined loading function:
```python
# Step 6: Load dataset (direct call, no module prefix)
print("Loading DeepMIMO dataset...")
deepmimo_data = load_DeepMIMO_data()
```
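The exact structure of `deepmimo_data` is determined by the repository's loader; a quick, non-committal inspection (a sketch using only standard Python) confirms what was loaded:
```python
# Inspect the loaded object without assuming its internal structure
print(f"Loaded object type: {type(deepmimo_data)}")
if hasattr(deepmimo_data, "__len__"):
    print(f"Number of loaded entries: {len(deepmimo_data)}")
```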
---
### 4. **Tokenize the DeepMIMO Dataset**
Tokenize the dataset based on specific scenarios from DeepMIMO. The available scenarios are listed below, with links to their DeepMIMO pages:
| **Scenario** | **City** | **Link to DeepMIMO Page** |
|---------------|---------------|----------------------------------------------------------------------------------------------------------------|
| Scenario 0 | Denver | [DeepMIMO City Scenario 18](https://www.deepmimo.net/scenarios/deepmimo-city-scenario18/) |
| Scenario 1 | Indianapolis | [DeepMIMO City Scenario 15](https://www.deepmimo.net/scenarios/deepmimo-city-scenario15/) |
| Scenario 2 | Oklahoma | [DeepMIMO City Scenario 19](https://www.deepmimo.net/scenarios/deepmimo-city-scenario19/) |
| Scenario 3 | Fort Worth | [DeepMIMO City Scenario 12](https://www.deepmimo.net/scenarios/deepmimo-city-scenario12/) |
| Scenario 4 | Santa Clara | [DeepMIMO City Scenario 11](https://www.deepmimo.net/scenarios/deepmimo-city-scenario11/) |
| Scenario 5 | San Diego | [DeepMIMO City Scenario 7](https://www.deepmimo.net/scenarios/deepmimo-city-scenario7/) |
#### **Operational Settings**:
- **Antennas at BS**: 32
- **Antennas at UEs**: 1
- **Subcarriers**: 32
- **Paths**: 20
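For reference, these settings can be written out as a configuration along the following lines (an illustrative sketch only; the key names below are hypothetical, not the repository's actual parameters):
```python
# Hypothetical configuration mirroring the operational settings above;
# key names are illustrative, not the repository's actual API.
operational_settings = {
    "n_ant_bs": 32,       # antennas at the base station
    "n_ant_ue": 1,        # antennas at each user equipment
    "n_subcarriers": 32,  # OFDM subcarriers
    "n_paths": 20,        # propagation paths per channel
}
```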
#### **Tokenization Code**:
Select and tokenize specific scenarios by adjusting the `scenario_idxs`. In the example below, we select the first two scenarios.
```python
# Step 7: Tokenize the dataset
scenario_idxs = torch.arange(2) # Adjust the number of scenarios you want
print("Tokenizing the dataset...")
preprocessed_chs = tokenizer(deepmimo_data, scenario_idxs, gen_raw=True)
```
- The dataset will be tokenized according to the selected scenarios and preprocessing configurations.
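To tokenize a non-contiguous subset of scenarios instead, pass explicit indices (a usage sketch with the same `tokenizer` call as above):
```python
# Example: tokenize only Denver (0), Fort Worth (3), and San Diego (5)
scenario_idxs = torch.tensor([0, 3, 5])
preprocessed_chs = tokenizer(deepmimo_data, scenario_idxs, gen_raw=True)
```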
---
### 5. **LWM Inference**
Once the dataset is tokenized, generate either **raw channels** or the **inferred LWM embeddings** by choosing the input type.
```python
# Step 8: Generate the dataset for inference
input_type = ['cls_emb', 'channel_emb', 'raw'][1]  # index 0: cls_emb, 1: channel_emb, 2: raw
dataset = dataset_gen(preprocessed_chs, input_type, model)
```
You can choose between:
- `cls_emb`: LWM CLS token embeddings
- `channel_emb`: LWM channel embeddings
- `raw`: Raw wireless channel data
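To compare all three representations side by side, you can loop over the options (a usage sketch; the output shapes depend on the repository's `dataset_gen`):
```python
# Generate all three representations for comparison
for input_type in ['cls_emb', 'channel_emb', 'raw']:
    out = dataset_gen(preprocessed_chs, input_type, model)
    print(f"{input_type}: shape {out.shape}")
```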
---
## **Post-processing for Downstream Tasks**
### 1. **Use the Dataset in Downstream Tasks**
Finally, use the generated dataset for your downstream tasks, such as classification, prediction, or analysis.
```python
# Step 9: Print results
print(f"Dataset generated with shape: {dataset.shape}")
print("Inference completed successfully.")
```
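As an illustration of a downstream task, the sketch below fits a small linear classifier on top of the generated embeddings. It is hypothetical: it assumes `dataset` is a float tensor whose first dimension indexes samples, and it invents random binary labels purely to show the training-loop shape; substitute your real task labels (e.g., LoS/NLoS classification).
```python
import torch
import torch.nn as nn

# Hypothetical downstream setup: flatten each sample's embedding and
# attach placeholder binary labels (replace with real task labels).
features = dataset.float().cpu().reshape(dataset.shape[0], -1)
labels = torch.randint(0, 2, (features.shape[0],))

classifier = nn.Linear(features.shape[1], 2)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(classifier(features), labels)
    loss.backward()
    optimizer.step()
    print(f"Epoch {epoch + 1}: loss = {loss.item():.4f}")
```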
---
## **Requirements**
- **Python 3.x**
- **PyTorch**