---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: FIN
---
# Dataset Card for "tner/fin"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/U15-1010.pdf](https://aclanthology.org/U15-1010.pdf)
- **Dataset:** FIN
- **Domain:** Financial News
- **Number of Entity Types:** 4 (`ORG`, `LOC`, `PER`, `MISC`)
### Dataset Summary
FIN is an NER dataset over financial news, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
The original FIN release contains two subsets, FIN3 and FIN5, where FIN3 serves as the test set and FIN5 as the training set.
We randomly sample the same number of instances as the test set from the training set to create a validation set.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
"tags": [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["1", ".", "1", ".", "4", "Borrower", "engages", "in", "criminal", "conduct", "or", "is", "involved", "in", "criminal", "activities", ";"]
}
```
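Below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library (assuming the default configuration on the Hub); it is illustrative only, not part of the dataset card.
```python
from datasets import load_dataset

# Load the FIN dataset from the Hugging Face Hub
dataset = load_dataset("tner/fin")

# Inspect the first training instance (tokens and their integer tag IDs)
example = dataset["train"][0]
print(example["tokens"])
print(example["tags"])
```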
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/fin/raw/main/dataset/label.json).
```python
{
"O": 0,
"I-ORG": 1,
"I-LOC": 2,
"I-PER": 3,
"I-MISC": 4
}
```
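As an illustration, the mapping above can be inverted to convert the integer tags of an instance back to IOB label strings; this is a minimal sketch, not an official utility of the dataset.
```python
# label2id as published in the dataset's label.json
label2id = {"O": 0, "I-ORG": 1, "I-LOC": 2, "I-PER": 3, "I-MISC": 4}
id2label = {v: k for k, v in label2id.items()}

# Convert the tag IDs of the example instance above back to label strings
tags = [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
labels = [id2label[t] for t in tags]
print(labels)  # "Borrower" (index 5) carries tag 3, i.e. I-PER
```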
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|fin |861 | 303| 303|
### Citation Information
```
@inproceedings{salinas-alvarado-etal-2015-domain,
title = "Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment",
author = "Salinas Alvarado, Julio Cesar and
Verspoor, Karin and
Baldwin, Timothy",
booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2015",
month = dec,
year = "2015",
address = "Parramatta, Australia",
url = "https://aclanthology.org/U15-1010",
pages = "84--90",
}
```