---
license: mit
language:
- fr
library_name: transformers
inference: false
pipeline_tag: feature-extraction
---
# CamemBERTa-L2

This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top layers](https://doi.org/10.48550/arXiv.2004.03844) from the original model.

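For illustration, top-layer dropping can be reproduced in a few lines. The sketch below is an assumption about how such a checkpoint might be derived, not the exact script used here; it relies on the DebertaV2-style encoder underlying CamemBERTa exposing its blocks as `model.encoder.layer`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the full 12-layer base checkpoint.
model = AutoModel.from_pretrained('almanach/camemberta-base')
tokenizer = AutoTokenizer.from_pretrained('almanach/camemberta-base')

# Keep the bottom 2 encoder blocks and drop the top 10.
# (model.encoder.layer is an assumption that holds for the
# DebertaV2 architecture used by CamemBERTa.)
model.encoder.layer = torch.nn.ModuleList(list(model.encoder.layer)[:2])
model.config.num_hidden_layers = 2

# Save the pruned model together with the unchanged tokenizer.
model.save_pretrained('camemberta-L2')
tokenizer.save_pretrained('camemberta-L2')
```
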
## Usage

You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions, such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).

You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):

```python
from transformers import pipeline

# Load a fill-mask pipeline backed by the pruned checkpoint.
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L2')

# Predict the most likely tokens for the masked position.
unmasker("Bonjour, je suis un [MASK] modèle.")
```
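Each call returns a ranked list of candidate fills, with a confidence score, the predicted token, and the completed sentence for each.
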
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:

```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the pruned encoder.
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L2')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L2')

# "Replace me with the text of your choice."
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state: one vector per token
```

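If you need a single sentence-level vector rather than per-token features, a common recipe is mean pooling over the token embeddings. The snippet below is a minimal sketch continuing the example above; mean pooling is an assumption here, not a pooling scheme prescribed by the original checkpoint:

```python
import torch

# Average the token embeddings, ignoring padding positions.
mask = encoded_input['attention_mask'].unsqueeze(-1).float()
sentence_embedding = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, hidden_size])
```
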
## Variations

CamemBERTa was originally released in a single base version (112M parameters). The following checkpoints prune that base model by dropping its top 2, 4, 6, 8, and 10 pretrained encoder layers, respectively.

| Model                                                                 | #Params | Size  | Pruning |
|-----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base)    | 111.8M  | 447MB |    -    |
|                                                                       |         |       |         |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10)  | 97.6M   | 386MB |  -14%   |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8)    | 83.5M   | 334MB |  -25%   |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6)    | 69.3M   | 277MB |  -38%   |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4)    | 55.1M   | 220MB |  -51%   |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2)    | 40.9M   | 164MB |  -63%   |

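To sanity-check a variant after downloading it, you can count its parameters; this is an illustrative sketch, not part of the original card:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained('antoinelouis/camemberta-L2')
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")  # expected: roughly 40.9M
```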