---
language:
- en
license:
- cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
---
# Dataset Repository
This repository includes several datasets: Houston Crime Dataset, Tourism in Australia, Prison in Australia, and M5. These datasets consist of time series data representing various metrics across different categories and groups.
## Dataset Structure
Each dataset is divided into training and prediction sets, with features such as groups, indices, and time series data. Below is a general overview of the dataset structure:
### Training Data
The training data is a dictionary of time series with the following fields (a toy example follows the list):
- `x_values`: List of time steps.
- `groups_idx`: Indices representing the different group categories (e.g., Crime, Beat, Street, ZIP for Houston Crime).
- `groups_n`: Number of unique values in each group category.
- `groups_names`: Names corresponding to the group indices.
- `n`: Number of time series.
- `s`: Length of each time series.
- `n_series_idx`: Indices of the time series.
- `n_series`: Indices for each series.
- `g_number`: Number of group categories.
- `data`: Matrix of time series data.
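To make these fields concrete, here is a minimal toy sketch of the training dictionary. The group categories, values, and the orientation of the `data` matrix are made up for illustration and may differ from the released pickles.

```python
import numpy as np

# Toy illustration of the training-data layout described above.
# All names, values, and array orientations are hypothetical; check the
# real pickles for the exact shapes.
n, s = 4, 24  # 4 series, 24 time steps each

toy_train = {
    "x_values": list(range(s)),                   # time steps
    "groups_idx": {                               # one integer label per series
        "State": np.array([0, 0, 1, 1]),
        "Gender": np.array([0, 1, 0, 1]),
    },
    "groups_n": {"State": 2, "Gender": 2},        # unique values per category
    "groups_names": {                             # index -> name lookup
        "State": np.array(["NSW", "VIC"]),
        "Gender": np.array(["Female", "Male"]),
    },
    "n": n,                                       # number of time series
    "s": s,                                       # length of each series
    "n_series_idx": np.arange(n),                 # indices of the time series
    "n_series": np.arange(n),                     # indices for each series
    "g_number": 2,                                # number of group categories
    "data": np.random.rand(s, n),                 # time series matrix
}

print(toy_train["data"].shape)  # (24, 4) in this toy layout
```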
### Prediction Data
The prediction data has the same structure as the training data and is used for forecasting. Note that it contains the complete data, i.e., both the training and prediction periods.
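As a rough sanity check (a sketch only, assuming the key names listed above and a local copy of one of the pickles), the training and prediction matrices can be compared; their time dimensions should differ by the forecast horizon `h` described under Additional Metadata below.

```python
import pickle

# Compare the training and prediction sets of one dataset.
# 'path/to/prison.pkl' is a placeholder path; adjust it to your local copy.
with open('path/to/prison.pkl', 'rb') as f:
    prison_data = pickle.load(f)

train_data = prison_data["train"]
predict_data = prison_data["predict"]

print("train data shape:  ", train_data["data"].shape)
print("predict data shape:", predict_data["data"].shape)
# The prediction matrix covers the full period (training window + horizon),
# so its time dimension should be larger by the forecast horizon h.
```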
### Additional Metadata
- `seasonality`: Seasonality of the data.
- `h`: Forecast horizon.
- `dates`: Timestamps corresponding to the time steps.
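Here is a small sketch of how this metadata can be used, assuming these keys sit at the top level of the loaded dictionary (verify this on your copy; loading is shown under Example Usage below):

```python
import pickle

# Inspect the metadata of one dataset.
# 'path/to/tourism.pkl' is a placeholder path; adjust it to your local copy.
with open('path/to/tourism.pkl', 'rb') as f:
    tourism_data = pickle.load(f)

seasonality = tourism_data["seasonality"]  # e.g. 12 for monthly data
h = tourism_data["h"]                      # forecast horizon
dates = tourism_data["dates"]              # timestamps for the time steps

print(f"seasonality: {seasonality}, horizon: {h}")
print("forecast period:", dates[-h:])      # last h timestamps, if dates span the full period
```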
## Example Usage
Below is an example of how to load and use the datasets with Python's `pickle` module:
```python
import pickle

def load_pickle(file_path):
    with open(file_path, 'rb') as file:
        data = pickle.load(file)
    return data

# Paths to your datasets
m5_path = 'path/to/m5.pkl'
police_path = 'path/to/police.pkl'
prison_path = 'path/to/prison.pkl'
tourism_path = 'path/to/tourism.pkl'

m5_data = load_pickle(m5_path)
police_data = load_pickle(police_path)
prison_data = load_pickle(prison_path)
tourism_data = load_pickle(tourism_path)

# Example: Accessing specific data from the datasets
print("M5 Data:", m5_data)
print("Police Data:", police_data)
print("Prison Data:", prison_data)
print("Tourism Data:", tourism_data)

# Access the training data
train_data = prison_data["train"]

# Access the prediction data
predict_data = prison_data["predict"]

# Example: Extracting x_values and data
x_values = train_data["x_values"]
data = train_data["data"]

print(f"x_values: {x_values}")
print(f"data shape: {data.shape}")
```
## Steps to Follow
1. Clone the Repository:
```bash
git clone https://huggingface.co/datasets/zaai-ai/hierarchical_time_series_datasets.git
cd hierarchical_time_series_datasets
```
2. Update the File Paths:
   - Ensure the paths to the `.pkl` files are correct in your Python script.
3. Load the Datasets:
   - Use the `pickle` library in Python to load the `.pkl` files.
- Use the