## RoBERTa Latin model, version 2 (model card not finished yet)
This is a Latin RoBERTa-based language model, version 2.
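
A minimal usage sketch with the `transformers` fill-mask pipeline; the model ID `pstroe/roberta-base-latin-v2` is an assumption here and should be replaced with this repository's actual ID:

```python
from transformers import pipeline

# Assumed model ID -- substitute the actual repository name.
fill_mask = pipeline("fill-mask", model="pstroe/roberta-base-latin-v2")

# RoBERTa models use <mask> as their mask token.
for prediction in fill_mask("Gallia est omnis divisa in partes <mask>."):
    print(prediction["token_str"], prediction["score"])
```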
The intention behind this Transformer-based LM is twofold: on the one hand, it will be used to evaluate handwritten text recognition (HTR) results; on the other, it will serve as the decoder in a TrOCR architecture.
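
As a rough illustration of the TrOCR use case, the model could be plugged in as the decoder of a `VisionEncoderDecoderModel`. This is a sketch under stated assumptions, not the actual training setup: both checkpoint IDs below are placeholders.

```python
from transformers import VisionEncoderDecoderModel

# Pair an assumed vision encoder with this Latin RoBERTa as the decoder.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # assumed encoder checkpoint
    "pstroe/roberta-base-latin-v2",       # assumed ID of this model
)

# Before fine-tuning, align generation with RoBERTa's special tokens
# (assumed defaults: <s> = 0, <pad> = 1).
model.config.decoder_start_token_id = 0
model.config.pad_token_id = 1
```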
The training data is the same as that used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf), although more heavily filtered (see below).
The overall corpus contains 2.5 GB of text data.
### Preprocessing
I undertook the following preprocessing steps:
- Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- Use of [CLTK](http://www.cltk.org) for sentence splitting and normalisation.
- Retention of only those lines containing letters of the Latin alphabet, numerals, and certain punctuation (via `grep -P '^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`).
- Deduplication of the corpus (a Python sketch of the filtering and deduplication steps follows this list).
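
The grep filter and the deduplication can be reproduced in pure Python. This is a minimal sketch, not the exact pipeline: the file name is taken from the command above, and the CLTK sentence-splitting step is omitted.

```python
import re

# Same character class as the grep command above (kept verbatim,
# including the permissive A-z range).
LINE_RE = re.compile(r"^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$")

def filter_and_dedup(lines):
    """Yield lines that pass the character filter, dropping duplicates."""
    seen = set()
    for line in lines:
        line = line.strip()
        if line and LINE_RE.match(line) and line not in seen:
            seen.add(line)
            yield line

with open("la.nolorem.tok.txt", encoding="utf-8") as infile:
    kept = list(filter_and_dedup(infile))
```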
The result is a corpus of ~390 million tokens.
The dataset used to train this model is available [HERE](https://huggingface.co/datasets/pstroe/cc100-latin).
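
The corpus can be pulled directly from the Hub with the `datasets` library; the dataset ID is as linked above, and the default split layout is assumed:

```python
from datasets import load_dataset

# Dataset ID as linked above.
corpus = load_dataset("pstroe/cc100-latin")
print(corpus)
```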
### Contact
For questions, reach out to Phillip Ströbel [via mail](mailto:[email protected]) or [via Twitter](https://twitter.com/CLingophil).