gabrielmotablima committed · verified
Commit a972777 · 1 parent: d6e1557

update readme

Files changed (1): README.md (+8 −7)
pipeline_tag: text-generation
---

# 🎉 Swin-DistilBERTimbau

**Swin-DistilBERTimbau** is an image-captioning model trained on [**Flickr30K Portuguese**](https://huggingface.co/datasets/laicsiifes/flickr30k-pt-br) (a version translated using the Google Translator API) at resolution 224x224 and a maximum sequence length of 512 tokens.

## 🤖 Model Description

Swin-DistilBERTimbau is a Vision Encoder Decoder model that leverages the checkpoints of the [Swin Transformer](https://huggingface.co/microsoft/swin-base-patch4-window7-224) as its encoder and the checkpoints of [DistilBERTimbau](https://huggingface.co/adalbertojunior/distilbert-portuguese-cased) as its decoder. The encoder checkpoints come from a Swin Transformer pre-trained on ImageNet-1k at resolution 224x224.

The code used for training and evaluation is available at: https://github.com/laicsiifes/ved-transformer-caption-ptbr. In this work, Swin-DistilBERTimbau was trained alongside its companion model [Swin-GPorTuguese](https://huggingface.co/laicsiifes/swin-gpt2-flickr30k-pt-br).
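As a side note on the encoder checkpoint's name, `swin-base-patch4-window7-224` encodes its geometry: 4x4 patches, 7x7 attention windows, and a 224x224 input. The arithmetic below sketches the resulting feature-map sizes; the values follow the Swin paper for the base variant and are illustrative, not read from the checkpoint itself.

```python
# Toy arithmetic for swin-base-patch4-window7-224: a 224x224 image is split
# into 4x4 patches, and each of the 4 stages halves the spatial resolution
# via patch merging while doubling the channel dimension.

IMAGE_SIZE = 224
PATCH_SIZE = 4
WINDOW_SIZE = 7
NUM_STAGES = 4
EMBED_DIM = 128  # Swin-B base embedding dimension

side = IMAGE_SIZE // PATCH_SIZE  # 56 patches per side after patch embedding
for stage in range(NUM_STAGES):
    dim = EMBED_DIM * 2 ** stage
    # each stage's feature map tiles evenly into 7x7 attention windows
    assert side % WINDOW_SIZE == 0
    print(f"stage {stage + 1}: {side}x{side} tokens, dim {dim}")
    if stage < NUM_STAGES - 1:
        side //= 2  # patch merging halves height and width

# The final 7x7 feature map is what the text decoder cross-attends to.
```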

## 🧑‍💻 How to Get Started with the Model

Use the code below to get started with the model.

```python
# …
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
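In the snippet, `model.generate` returns batches of token ids and `tokenizer.batch_decode` maps those ids back to strings, optionally dropping special tokens. A toy sketch of that decoding step, using a hypothetical six-token vocabulary rather than the real DistilBERTimbau tokenizer:

```python
# Hypothetical id-to-token table standing in for a real tokenizer's vocabulary.
id_to_token = {0: "[CLS]", 1: "[SEP]", 2: "um", 3: "cachorro", 4: "correndo", 5: "na praia"}
special_ids = {0, 1}

def toy_batch_decode(batch_ids, skip_special_tokens=True):
    # Mirrors tokenizer.batch_decode: one output string per sequence of ids,
    # with special tokens optionally filtered out.
    texts = []
    for ids in batch_ids:
        tokens = [id_to_token[i] for i in ids if not (skip_special_tokens and i in special_ids)]
        texts.append(" ".join(tokens))
    return texts

# A fake `generated_ids` batch, shaped like what generate() would return.
generated_ids = [[0, 2, 3, 4, 5, 1]]
print(toy_batch_decode(generated_ids))  # ['um cachorro correndo na praia']
```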

## 📈 Results

The evaluation metrics CIDEr-D, BLEU@4, ROUGE-L, METEOR and BERTScore are abbreviated as C, B@4, RL, M and BS, respectively.

|Model|Training|Evaluation|C|B@4|RL|M|BS|
|-----|--------|----------|-----|-----|-----|-----|-----|
|Swin-DistilBERTimbau|Flickr30K Portuguese|Flickr30K Portuguese|66.73|24.65|39.98|44.71|72.30|
|Swin-GPorTuguese|Flickr30K Portuguese|Flickr30K Portuguese|64.71|23.15|39.39|44.36|71.70|
 
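The reported numbers come from standard corpus-level implementations of these metrics. As a rough illustration of how one of them works, here is a toy, unsmoothed sentence-level BLEU@4; the `bleu4` helper and example sentence are illustrative, not the evaluation code from the repository.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Toy sentence-level BLEU@4 against a single reference, no smoothing."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # brevity penalty discourages captions shorter than the reference
    bp = math.exp(min(0.0, 1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

cand = "um cachorro corre na praia".split()
print(round(100 * bleu4(cand, cand), 2))  # an identical caption scores 100.0
```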
## 📋 BibTeX entry and citation info

```bibtex
Coming Soon
```