---
language:
- en
license:
- cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
---
# Dataset Repository
This repository includes four hierarchical time series datasets: **Houston Crime Dataset**, **Tourism in Australia**, **Prison in Australia**, and **M5**. Each consists of time series representing various metrics across different categories and groups.
## Dataset Structure
Each dataset is split into a training set and a prediction set, containing group metadata, index arrays, and the time series values themselves. Below is a general overview of the dataset structure:
### Training Data
The training data is a dictionary of time series and metadata with the following fields (a short inspection sketch follows the list):
- **x_values**: List of time steps.
- **groups_idx**: Indices representing different group categories (e.g., Crime, Beat, Street, ZIP for Houston Crime).
- **groups_n**: Number of unique values in each group category.
- **groups_names**: Names corresponding to group indices.
- **n**: Number of time series.
- **s**: Length of each time series.
- **n_series_idx**: Indices of the time series.
- **n_series**: Indices for each series.
- **g_number**: Number of group categories.
- **data**: Matrix of time series data.
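For example, the group fields can be cross-referenced to recover human-readable labels for each series. The sketch below is illustrative only: it assumes `groups_idx` and `groups_names` are dictionaries keyed by category name (e.g. `"Crime"` for Houston Crime) and that `train_data` has been loaded as shown in the Example Usage section; the exact nesting may differ per dataset.

```python
# Illustrative sketch (assumed layout): map group codes back to names
# for one category of the Houston Crime training data.
category = "Crime"                            # one group category
idx = train_data["groups_idx"][category]      # integer code per series
names = train_data["groups_names"][category]  # code -> label lookup
labels = [names[i] for i in idx]              # readable label per series
print(f"{len(labels)} series, first labels: {labels[:5]}")
```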
### Prediction Data
The prediction data has the same structure as the training data and is used for evaluating forecasts.
**Note:** The prediction set contains the complete series, i.e., the training observations followed by the values for the forecast horizon.
### Additional Metadata
- **seasonality**: Seasonality of the data.
- **h**: Forecast horizon.
- **dates**: Timestamps corresponding to the time steps.
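Since the prediction set holds the complete series, the forecast horizon `h` can be used to slice off the evaluation window. This is a minimal sketch, assuming `data` is a NumPy array with one row per time step and one column per series (transpose the slicing if your copy stores series in rows):

```python
import numpy as np

# Hypothetical split: the last h rows of the complete matrix are the
# forecast window, everything before them is the training history.
def split_horizon(predict_data, h):
    data = np.asarray(predict_data["data"])
    history, future = data[:-h, :], data[-h:, :]
    return history, future
```

Under these assumptions, `history` matches the training matrix and `future` holds the ground truth to score forecasts against.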
## Example Usage
Below is an example of how to load and use the datasets with Python's built-in `pickle` module:
```python
import pickle

def load_pickle(file_path):
    """Load a pickled dataset from disk."""
    with open(file_path, 'rb') as file:
        return pickle.load(file)

# Paths to your datasets
m5_path = 'path/to/m5.pkl'
police_path = 'path/to/police.pkl'
prison_path = 'path/to/prison.pkl'
tourism_path = 'path/to/tourism.pkl'

m5_data = load_pickle(m5_path)
police_data = load_pickle(police_path)
prison_data = load_pickle(prison_path)
tourism_data = load_pickle(tourism_path)

# Example: inspect the top-level keys of each dataset
print("M5 keys:", list(m5_data.keys()))
print("Police keys:", list(police_data.keys()))
print("Prison keys:", list(prison_data.keys()))
print("Tourism keys:", list(tourism_data.keys()))

# Access the training and prediction data
train_data = prison_data["train"]
predict_data = prison_data["predict"]

# Example: extract x_values and the data matrix
x_values = train_data["x_values"]
data = train_data["data"]
print(f"x_values: {x_values}")
print(f"data shape: {data.shape}")
```
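If you prefer working with labeled data, the raw matrix can be wrapped in a pandas DataFrame using the stored timestamps. This is a hypothetical convenience helper, assuming `dates` lives at the top level of each dataset dictionary and spans the rows of the prediction matrix:

```python
import pandas as pd

def to_dataframe(split, dates):
    # One row per time step, one column per series.
    df = pd.DataFrame(split["data"], index=pd.to_datetime(dates))
    df.index.name = "date"
    return df

prison_df = to_dataframe(prison_data["predict"], prison_data["dates"])
print(prison_df.head())
```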
### Steps to Follow:
1. **Clone the Repository:**
```sh
git clone https://huggingface.co/datasets/zaai-ai/hierarchical_time_series_datasets.git
cd hierarchical_time_series_datasets
```
2. **Update the File Paths:**
   - Ensure the paths to the `.pkl` files are correct in your Python script.
3. **Load the Datasets:**
- Use the `pickle` library in Python to load the `.pkl` files.
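As a quick sanity check for step 2, you can verify that the files exist before unpickling them (using the path variables from the example above):

```python
from pathlib import Path

# Fail fast with a clear message if any path is wrong.
for p in (m5_path, police_path, prison_path, tourism_path):
    assert Path(p).is_file(), f"dataset file not found: {p}"
```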