Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
Working with the dataset locally
A Hugging Face datasets repository is a Git repository like any other. You can simply clone it like so:
git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
cd danish-gigaword-2
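Note that Hugging Face stores large data files (such as the parquet shards) via Git LFS, so you may need to run git lfs install before cloning to fetch the actual files rather than LFS pointers.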
You can then work with the dataset locally like so:
from datasets import load_dataset

name = "."  # the local clone, instead of "danish-foundation-models/danish-gigaword-2"
dataset = load_dataset(name, split="train")
# make transformations here
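As an example of a transformation, here is a minimal sketch that filters out empty documents and adds a character count; it assumes the dataset has a "text" column:
dataset = dataset.filter(lambda example: len(example["text"]) > 0)  # drop empty documents
dataset = dataset.map(lambda example: {"n_chars": len(example["text"])})  # add a new column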
Note: Even when the dataset is loaded locally, Hugging Face Datasets still uses a cache, so after making changes you may need to clear it for them to take effect. You can do this by deleting the cached files, which you can locate using dataset.cache_files.
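For instance, the cache can be inspected and cleared programmatically; cleanup_cache_files and the download_mode option used below are part of the datasets library:
print(dataset.cache_files)     # list the cached Arrow files backing this dataset
dataset.cleanup_cache_files()  # delete them so local changes are picked up
# or bypass the cache entirely with a fresh load:
dataset = load_dataset(".", split="train", download_mode="force_redownload")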