esuriddick committed on
Commit f75676a
1 Parent(s): 5ba617d

Update README.md

Files changed (1)
  1. README.md +19 -3
README.md CHANGED
@@ -41,11 +41,27 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
- More information needed
+ DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
+ self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
+ with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic
+ process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
+ with three objectives:
+
+ - Distillation loss: the model was trained to return the same probabilities as the BERT base model.
+ - Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
+ sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
+ model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which
+ usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future
+ tokens. It allows the model to learn a bidirectional representation of the sentence.
+ - Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the
+ BERT base model.
+
+ This way, the model learns the same inner representation of the English language as its teacher model, while being
+ faster for inference or downstream tasks.
 
 ## Intended uses & limitations
-
- More information needed
+ [Emotion](https://huggingface.co/datasets/dair-ai/emotion) is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. The dataset was developed for the paper "CARER: Contextualized Affect Representations for Emotion Recognition" (Saravia et al.), with noisy labels annotated via distant
+ supervision as in the paper "Twitter sentiment classification using distant supervision" (Go et al.).
 
 ## Training and evaluation data
 
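For illustration only (not part of the commit), here is a minimal sketch of the intended use described above: loading the [Emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset and classifying a tweet with the fine-tuned checkpoint. The model id below is an assumed placeholder, not confirmed by the diff; substitute this repository's actual id.

```python
from datasets import load_dataset
from transformers import pipeline

# Assumed/placeholder model id; replace with this repository's actual checkpoint name.
model_id = "esuriddick/distilbert-base-uncased-finetuned-emotion"
classifier = pipeline("text-classification", model=model_id)

# dair-ai/emotion: English tweets labelled with anger, fear, joy, love, sadness, or surprise.
emotion = load_dataset("dair-ai/emotion", split="test")

# Predict the emotion of the first test tweet.
print(classifier(emotion[0]["text"]))
# Illustrative output only: [{'label': 'sadness', 'score': 0.98}]
```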