## RoBERTa Latin model, version 2 (model card not finished yet)

This is version 2 of a Latin RoBERTa-based language model.
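
Since this is a standard RoBERTa masked LM, it can be queried directly with the `transformers` fill-mask pipeline. A minimal usage sketch (the example sentence is my own, not from the training data):

```python
from transformers import pipeline

# Load the model into a fill-mask pipeline; RoBERTa uses "<mask>" as its mask token.
fill_mask = pipeline("fill-mask", model="pstroe/roberta-base-latin-cased2")

# Example query: the opening of Caesar's De Bello Gallico, with one word masked.
print(fill_mask("Gallia est omnis divisa in partes <mask>."))
```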

The purpose of this Transformer-based LM is twofold: on the one hand, it will be used to evaluate HTR (handwritten text recognition) results; on the other, it will serve as a decoder in the TrOCR architecture.
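
A minimal sketch of the second use case, assuming the `transformers` `VisionEncoderDecoderModel` API; the ViT encoder checkpoint is an assumption for illustration, not part of this model:

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer

# Pair a vision encoder with this RoBERTa model as the text decoder,
# mirroring the TrOCR setup.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",  # assumed encoder; any compatible vision encoder works
    "pstroe/roberta-base-latin-cased2",   # this model, loaded as the decoder
)
tokenizer = AutoTokenizer.from_pretrained("pstroe/roberta-base-latin-cased2")

# The decoder's cross-attention weights are newly initialised, so the combined
# model must be fine-tuned on image-transcription pairs before it is useful.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```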

The training data is largely the same as that used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf), although more heavily filtered (see below). Part of it consists of digital-born texts from online Latin archives; the rest was crawled by [Bamman and Smith](https://www.cs.cmu.edu/~dbamman/latin.html) and thus contains many OCR errors.

The overall downsampled corpus contains 577 MB of text data.

### Preprocessing

I undertook the following preprocessing steps:

  - Normalisation of all lines with [CLTK](http://www.cltk.org), including sentence splitting.
  - Language identification with [langid](https://github.com/saffsd/langid.py).
  - Computation of the ratio of Latin vocabulary in each sentence, measured against the digital-born vocabulary of the corpus.
  - Retention of only those sentences with a Latin vocabulary ratio above 85%.
  - Exclusion of all lines containing '^', since this character hints at the presence of OCR errors (the filters are sketched below).

The result is a corpus of ~100 million tokens.
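
A minimal sketch of the sentence-level filters (language identification, Latin vocabulary ratio, and the '^' exclusion), assuming a set `latin_vocab` built from the digital-born part of the corpus; function names and the whitespace tokenisation are illustrative:

```python
import langid

def latin_ratio(sentence: str, latin_vocab: set) -> float:
    """Share of whitespace tokens that occur in the digital-born Latin vocabulary."""
    tokens = sentence.split()
    if not tokens:
        return 0.0
    return sum(token in latin_vocab for token in tokens) / len(tokens)

def keep_sentence(sentence: str, latin_vocab: set) -> bool:
    # Hypothetical filter combining the steps described above.
    if "^" in sentence:                        # '^' hints at OCR errors
        return False
    if langid.classify(sentence)[0] != "la":   # must be identified as Latin
        return False
    return latin_ratio(sentence, latin_vocab) > 0.85
```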

The dataset used to train this model will be made available on Hugging Face later [HERE (does not work yet)]().

### Contact

Reach out to Phillip Ströbel [via mail](mailto:[email protected]) or [via Twitter](https://twitter.com/CLingophil).

### How to cite

If you use this model, please cite it as:

    @online{stroebel-roberta-base-latin-cased2,
        author = {Ströbel, Phillip Benjamin},
        title = {RoBERTa Base Latin Cased V2},
        year = 2022,
        url = {https://huggingface.co/pstroe/roberta-base-latin-cased2},
        urldate = {YYYY-MM-DD}
    }