### Supported Tasks

To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensible Business Reporting Language (XBRL), an XML-based language designed to facilitate the processing of financial information.
However, manually tagging reports with XBRL tags is tedious and resource-intensive.
We therefore introduce **XBRL tagging** as a **new entity extraction task** for the **financial domain** and study how financial reports can be automatically enriched with XBRL tags.
To facilitate research towards automated XBRL tagging, we release FiNER-139.
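Cast this way, XBRL tagging is a token-level sequence-labeling problem. A purely illustrative sketch of one labeled sentence (the sentence and the entity-type name `Revenues` are made up for this example, not taken from the dataset):

```python
# Illustrative only: a filing sentence labeled token by token.
# The entity-type name "Revenues" is a hypothetical stand-in for
# one of the US-GAAP entity types used in FiNER-139.
tokens = ["Total", "revenue", "decreased", "to", "$", "7.1", "million", "."]
labels = ["O", "O", "O", "O", "O", "B-Revenues", "O", "O"]

# The numeric token ("7.1") carries the XBRL entity type it reports;
# every other token is outside ("O") any tagged expression.
assert len(tokens) == len(labels)
```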
### Languages
#### Initial Data Collection and Normalization
FiNER-139 is compiled from approx. 10k annual and quarterly English reports (filings) of publicly traded companies, downloaded from the [US Securities and Exchange Commission's (SEC)](https://www.sec.gov/) [Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/edgar.shtml) system. <br>
The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approx. 6k entity types from the US-GAAP taxonomy; FiNER-139 is annotated with the 139 most frequent XBRL entity types, each with at least 1,000 appearances. <br>
We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the **IOB2** annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.

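The label-space arithmetic can be sketched as follows (with placeholder entity-type names, not the actual US-GAAP types): each entity type contributes a `B-` and an `I-` label, and one shared `O` label covers untagged tokens, giving 139 × 2 + 1 = 279 labels.

```python
# Sketch: expanding entity types into the IOB2 token-label set.
def iob2_labels(entity_types):
    labels = ["O"]  # single label for tokens outside any tagged expression
    for t in entity_types:
        labels.append(f"B-{t}")  # first token of a tagged expression
        labels.append(f"I-{t}")  # subsequent tokens of the expression
    return labels

# Placeholder names stand in for the 139 US-GAAP entity types.
demo_types = [f"EntityType{i}" for i in range(139)]
assert len(iob2_labels(demo_types)) == 279  # 139 * 2 + 1
```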
### Annotations

#### Annotation process

All the examples were annotated by professional auditors, as required by Securities and Exchange Commission (SEC) legislation. <br>
Even though the gold XBRL tags come from professional auditors, there are still some discrepancies; see [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482), Section 9.4 (Annotation inconsistencies), for more details.

#### Who are the annotators?
### Licensing Information

Access to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies make with the SEC.

### Citation Information

FiNER: Financial Numeric Entity Recognition for XBRL Tagging <br>
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras <br>
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022

```
@inproceedings{loukas-etal-2022-finer,
```
## SEC-BERT

<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="SEC-BERT" width="400"/>

We also pre-train our own BERT models (**SEC-BERT**) for the financial domain, intended to assist financial NLP research and FinTech applications. <br>

**SEC-BERT** consists of the following models:

* [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
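The shape substitution described above can be sketched as follows; this is a reconstruction of the idea from the examples given, not the exact preprocessing code used for SEC-BERT-SHAPE:

```python
import re

def number_shape(token):
    """Replace every digit with 'X', keeping the separators, so that
    numbers of the same shape map to the same pseudo-token, e.g.
    '53.2' -> '[XX.X]' and '40,200.5' -> '[XX,XXX.X]'."""
    if re.fullmatch(r"[\d.,]+", token) and any(c.isdigit() for c in token):
        return "[" + re.sub(r"\d", "X", token) + "]"
    return token  # non-numeric tokens pass through unchanged

assert number_shape("53.2") == "[XX.X]"
assert number_shape("40,200.5") == "[XX,XXX.X]"
```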

These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at the [U.S. Securities and Exchange Commission (SEC)](https://www.sec.gov/).

## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:

text classification, including filtering spam and abusive content,
machine learning in natural language processing, especially deep learning.

The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
|