---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- en
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: FiNER-139
size_categories:
- 1M<n<10M
source_datasets: []
task_categories:
- structure-prediction
- named-entity-recognition
- entity-extraction
task_ids:
- named-entity-recognition
---
# Dataset Card for FiNER-139
## Dataset Description

- **Homepage:** FiNER
- **Repository:** FiNER
- **Paper:** [FiNER, Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)
- **Point of Contact:** Manos Fergadiotis
### Dataset Summary

FiNER-139 is compiled from approx. 10k annual and quarterly English reports.

### Supported Tasks

Named-entity recognition (XBRL tagging).

### Languages

English (en).
## Dataset Structure

### Data Instances

This is a `train` split example:

```json
{
  "id": 40,
  "tokens": ["In", "March", "2014", ",", "the", "Rialto", "segment", "issued", "an", "additional", "$", "100", "million", "of", "the", "7.00", "%", "Senior", "Notes", ",", "at", "a", "price", "of", "102.25", "%", "of", "their", "face", "value", "in", "a", "private", "placement", "."],
  "ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields

- `id`: ID of the example.
- `tokens`: List of tokens for the specific example.
- `ner_tags`: List of tags for each token in the example. Tags are provided as integer classes.
### Tag Names

If you want to use the class names, you can access them as follows:

```python
import datasets

finer_train = datasets.load_dataset("nlpaueb/finer-139", split="train")
finer_tag_names = finer_train.features["ner_tags"].feature.names
```

`finer_tag_names` contains a list of class names corresponding to the integer classes, e.g.:

```
0 -> "O"
1 -> "B-AccrualForEnvironmentalLossContingencies"
```
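As a self-contained illustration of the integer-to-name mapping (using a tiny hypothetical subset of the label list rather than the full set of 279 labels):

```python
# Illustrative sketch only: a hypothetical three-entry subset of the
# FiNER-139 label list. The real list has 279 entries: the "O" class plus
# B-/I- variants of the 139 XBRL entity types.
tag_names = [
    "O",
    "B-AccrualForEnvironmentalLossContingencies",
    "I-AccrualForEnvironmentalLossContingencies",
]

def decode_tags(ner_tags, names):
    """Map each integer class in ner_tags to its label string."""
    return [names[t] for t in ner_tags]

print(decode_tags([0, 1, 2, 0], tag_names))
# -> ['O', 'B-AccrualForEnvironmentalLossContingencies',
#     'I-AccrualForEnvironmentalLossContingencies', 'O']
```

With the real dataset, the same lookup works with `finer_tag_names` in place of the toy list above.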
### Data Splits

| Training | Validation | Test |
|---|---|---|
| 900,384 | 112,494 | 108,378 |
## Dataset Creation

### Curation Rationale

The dataset was curated by Loukas et al. (2022).
### Source Data

#### Initial Data Collection and Normalization
The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approx. 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances.
We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the IOB2 annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.
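A minimal sketch of how IOB2 labels can be decoded back into entity spans; the helper name and the example labels below are illustrative (chosen to resemble US-GAAP tag names), not part of the dataset's tooling:

```python
def iob2_spans(tokens, labels):
    """Collect (entity_type, text) spans from IOB2 labels.

    A span starts at a B- label and extends over consecutive I- labels
    of the same type; "O" tokens lie outside any span.
    """
    spans, current = [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            current = (label[2:], [token])
            spans.append(current)
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)
        else:
            current = None
    return [(etype, " ".join(toks)) for etype, toks in spans]

# Hypothetical labels for a fragment like the train example above.
tokens = ["issued", "$", "100", "million", "at", "102.25", "%"]
labels = ["O", "O", "B-DebtInstrumentFaceAmount", "O", "O",
          "B-DebtInstrumentRedemptionPricePercentage", "O"]
print(iob2_spans(tokens, labels))
# -> [('DebtInstrumentFaceAmount', '100'),
#     ('DebtInstrumentRedemptionPricePercentage', '102.25')]
```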
### Annotations

#### Annotation process
Even though the gold XBRL tags come from professional auditors, there are still some discrepancies. See [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482), Section 9.4 ("Annotation inconsistencies"), for more details.
#### Who are the annotators?

Professional auditors.
### Personal and Sensitive Information

The dataset contains publicly available annual and quarterly reports (filings).
## Additional Information

### Dataset Curators

[Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)

### Licensing Information

CC BY-SA 4.0

### Citation Information
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022
```bibtex
@inproceedings{loukas-etal-2022-finer,
    title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
    author = {Loukas, Lefteris and
      Fergadiotis, Manos and
      Chalkidis, Ilias and
      Spyropoulou, Eirini and
      Malakasiotis, Prodromos and
      Androutsopoulos, Ion and
      Paliouras, George},
    booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
    publisher = {Association for Computational Linguistics},
    location = {Dublin, Republic of Ireland},
    year = {2022},
    url = {https://arxiv.org/abs/2203.06482}
}
```
## SEC-BERT

SEC-BERT consists of the following models:

- **SEC-BERT-BASE**: Same architecture as BERT-BASE, trained on financial documents.
- **SEC-BERT-NUM**: Same as SEC-BERT-BASE, but every number token is replaced with a [NUM] pseudo-token, handling all numeric expressions in a uniform manner and preventing their fragmentation.
- **SEC-BERT-SHAPE**: Same as SEC-BERT-BASE, but numbers are replaced with pseudo-tokens that represent each number's shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.

These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available from the U.S. Securities and Exchange Commission (SEC).
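The shape substitution described for SEC-BERT-SHAPE can be sketched roughly as follows; the function name and the numeric-token pattern are assumptions for illustration, and the actual pre-processing pipeline may differ:

```python
import re

# Hypothetical sketch of the SEC-BERT-SHAPE substitution: every digit in a
# numeric token is replaced with 'X', separators are preserved, and the
# result is wrapped in brackets to form a shape pseudo-token.
NUMERIC = re.compile(r"^\d[\d.,]*$")  # assumed definition of "numeric token"

def shape_token(token):
    if NUMERIC.match(token):
        return "[" + re.sub(r"\d", "X", token) + "]"
    return token

print([shape_token(t) for t in ["revenue", "53.2", "40,200.5"]])
# -> ['revenue', '[XX.X]', '[XX,XXX.X]']
```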
## About Us
AUEB's Natural Language Processing Group develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
- question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
- natural language generation from databases and ontologies, especially Semantic Web ontologies,
- text classification, including filtering spam and abusive content,
- information extraction and opinion mining, including legal text analytics and sentiment analysis,
- natural language processing tools for Greek, for example parsers and named-entity recognizers,
- machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
Manos Fergadiotis on behalf of AUEB's Natural Language Processing Group