---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: response
    dtype: string
  splits:
  - name: train
    num_bytes: 194746016
    num_examples: 418357
  download_size: 89077742
  dataset_size: 194746016
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
# fingpt - all - prompt/response format
Loaded/created via:
```py
from datasets import load_dataset

dataset_names = [
    "FinGPT/fingpt-sentiment-train",
    "FinGPT/fingpt-fiqa_qa",
    "FinGPT/fingpt-headline-cls",
    "FinGPT/fingpt-convfinqa",
    "FinGPT/fingpt-finred-cls",
    "FinGPT/fingpt-ner",
    "FinGPT/fingpt-finred",
    "FinGPT/fingpt-sentiment-cls",
    "FinGPT/fingpt-ner-cls",
    "FinGPT/fingpt-finred-re",
    "FinGPT/fingpt-headline",
]

ds_list = []
for ds_name in dataset_names:
    ds = load_dataset(ds_name, split="train")
    # tag each example with the dataset it came from
    ds = ds.map(lambda x: {'source': ds_name}, num_proc=8)
    ds_list.append(ds)
ds_list
```
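The snippet above only collects the individual FinGPT training splits; the `prompt`/`response` columns declared in the metadata still have to be built from them. A minimal sketch of that step is shown below. It assumes the source datasets expose `instruction`, `input`, and `output` columns and that everything except the two target columns is dropped, matching the features listed in the card; the exact conversion used for this dataset is not shown here.

```py
from datasets import concatenate_datasets

# Assumption: each FinGPT split provides "instruction", "input", and "output".
def to_prompt_response(example):
    prompt = example["instruction"]
    if example.get("input"):
        # fold the task input into the prompt when one is present
        prompt = f"{example['instruction']}\n{example['input']}"
    return {"prompt": prompt, "response": example["output"]}

merged = concatenate_datasets(ds_list)
merged = merged.map(
    to_prompt_response,
    remove_columns=merged.column_names,  # keep only prompt/response
    num_proc=8,
)
```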
See the [FinGPT](https://huggingface.co/FinGPT) page for details.