---
language:
  - "lb"
license: "mit"
tags:
  - "luxembourgish"
  - "lëtzebuergesch"
  - "text generation"
  - "transfer learning"
model-index:
  - name: "LuxGPT2-basedEN"
    results:
      - task:
          type: "text-generation"
          name: "Text Generation"
        dataset:
          type: "LuxembourgishTestDataset"
          name: "Luxembourgish Test Dataset"
        metrics:
          - type: "accuracy"
            value: 0.35
          - type: "perplexity"
            value: 45.08
---

## LuxGPT-2 based EN
GPT-2 model for text generation in the Luxembourgish language, trained on 711 MB of text data consisting of RTL.lu news articles and comments, parliament speeches, the Luxembourgish Wikipedia, Newscrawl, Webcrawl, and subtitles. It was created via transfer learning from an English base model, mapping the Luxembourgish feature space onto the base model's feature space and applying gradual layer freezing.
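This card does not spell out the exact mapping procedure, but the sketch below shows one common way to realise such a feature-space mapping: an orthogonal Procrustes alignment over anchor tokens shared by both vocabularies. All names (`lb_emb`, `en_emb`, `anchor_lb`, `anchor_en`) are hypothetical inputs for illustration, not part of the released code.

```python
# Hypothetical sketch of a feature-space mapping step: rotate Luxembourgish
# token embeddings into the English base model's embedding space using an
# orthogonal Procrustes solution over shared anchor tokens. The card does
# not specify the actual procedure; all names here are illustrative.
import numpy as np

def map_embeddings(lb_emb: np.ndarray, en_emb: np.ndarray,
                   anchor_lb: np.ndarray, anchor_en: np.ndarray) -> np.ndarray:
    """Return lb_emb rotated into the base (English) embedding space.

    anchor_lb / anchor_en hold row indices of tokens assumed to be
    equivalent in both vocabularies (e.g. shared subwords).
    """
    # W = argmin ||lb_emb[anchor] @ W - en_emb[anchor]||_F, with W orthogonal
    m = lb_emb[anchor_lb].T @ en_emb[anchor_en]
    u, _, vt = np.linalg.svd(m)
    return lb_emb @ (u @ vt)
```

The rotated embeddings could then replace the base model's input embeddings before fine-tuning on the Luxembourgish corpus.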
The training took place on a 32 GB NVIDIA Tesla V100
- with the One-Cycle policy for the learning rate
- with the help of fastai's LR finder
- for 49.2 hours
- for 18 epochs and 8 cycles
- using the fastai library (a rough sketch of this setup follows below)
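As an illustration of those settings, the following sketch combines fastai's LR finder, the One-Cycle schedule, and gradual layer freezing. It is not the authors' released script: `dls` (fastai `DataLoaders` over the Luxembourgish corpus) is assumed to exist, and the cycle and epoch counts are placeholders rather than the 18 epochs / 8 cycles used for the real model.

```python
# Illustrative sketch of the training recipe above -- NOT the released code.
# Assumes fastai v2 and that `dls` (DataLoaders over the corpus) was built
# elsewhere; epoch counts are placeholders.
import torch.nn as nn
from fastai.text.all import Learner, CrossEntropyLossFlat, params
from transformers import GPT2LMHeadModel

class LMWrapper(nn.Module):
    """Return plain logits so fastai loss functions can consume them."""
    def __init__(self, hf_model):
        super().__init__()
        self.hf_model = hf_model
    def forward(self, input_ids):
        return self.hf_model(input_ids).logits

model = LMWrapper(GPT2LMHeadModel.from_pretrained("gpt2"))  # English base

def gpt2_splitter(m):
    # Three parameter groups (embeddings / blocks / final layer norm) so
    # freeze_to() can unfreeze them gradually; lm_head is weight-tied to
    # wte, so it must not appear in a second group.
    t = m.hf_model.transformer
    return [params(t.wte) + params(t.wpe), params(t.h), params(t.ln_f)]

learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                splitter=gpt2_splitter)

# Gradual layer freezing: unfreeze one more group per stage, re-running
# the LR finder and training with the One-Cycle policy each time.
for stage in (-1, -2, -3):
    learn.freeze_to(stage)
    lr = learn.lr_find().valley      # fastai's LR finder suggestion
    learn.fit_one_cycle(2, lr)       # One-Cycle learning-rate schedule
```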


## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the Luxembourgish GPT-2 tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("laurabernardy/LuxGPT2-basedEN")
model = AutoModelForCausalLM.from_pretrained("laurabernardy/LuxGPT2-basedEN")
```
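Text can then be generated like this (the prompt and sampling parameters are illustrative, not recommended values):

```python
# Generate a short continuation for a Luxembourgish prompt.
inputs = tokenizer("Lëtzebuerg ass", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50, do_sample=True,
                        top_p=0.95, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```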
## Limitations and Biases
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.