VictorSanh committed
Commit 02787cf · Parent: 338e8a2

updates to the readme

Files changed (1): README.md (+8, -4)
README.md CHANGED
````diff
@@ -18,6 +18,8 @@ datasets:
 
 # IDEFICS
 
+*How do I pronounce the name? [YouTube tutorial](https://www.youtube.com/watch?v=YKO0rWnPN2I&ab_channel=FrenchPronunciationGuide)*
+
 IDEFICS (**I**mage-aware **D**ecoder **E**nhanced à la **F**lamingo with **I**nterleaved **C**ross-attention**S**) is an open-access reproduction of [Flamingo](https://huggingface.co/papers/2204.14198), a closed-source visual language model developed by DeepMind. Like GPT-4, the multimodal model accepts arbitrary sequences of image and text inputs and produces text outputs. IDEFICS is built solely on publicly available data and models.
 
 The model can answer questions about images, describe visual content, create stories grounded in multiple images, or simply behave as a pure language model without visual inputs.
@@ -28,7 +30,7 @@ We also fine-tune these base models on a mixture of supervised and instruction f
 
 Read more about some of the technical challenges we encountered while training IDEFICS [here](https://github.com/huggingface/m4-logs/blob/master/memos/README.md).
 
-*How do I pronounce the name? [YouTube tutorial](https://www.youtube.com/watch?v=YKO0rWnPN2I&ab_channel=FrenchPronunciationGuide)*
+**Try out the [demo](https://huggingface.co/spaces/HuggingFaceM4/idefics_playground)!**
 
 # Model Details
 
@@ -38,7 +40,7 @@ Read more about some of the technical challenges we encountered while training
 - **License:** see [License section](#license)
 - **Parent Model:** [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) and [huggyllama/llama-65b](https://huggingface.co/huggyllama/llama-65b)
 - **Resources for more information:**
-  - [GitHub Repo](https://github.com/huggingface/m4/)
+  <!-- - [GitHub Repo](https://github.com/huggingface/m4/) -->
   - Description of [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS): [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
 ](https://huggingface.co/papers/2306.16527)
   - Original Paper: [Flamingo: a Visual Language Model for Few-Shot Learning](https://huggingface.co/papers/2204.14198)
@@ -419,9 +421,11 @@ We release the additional weights we trained under an MIT license.
 }
 ```
 
-# Model Card Authors
+# Model Builders, Card Authors, and Contributors
+
+The core team (*) was supported in many different ways by these contributors at Hugging Face:
 
-Stas Bekman, Léo Tronchon, Hugo Laurençon, Lucile Saulnier, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Giada Pistilli, Yacine Jernite, Sasha Luccioni, Ezi Ozoani, Younes Belkada, Sylvain Gugger, Amy E. Roberts, Lysandre Debut, Arthur Zucker, Lewis Tunstall, Zach Mueller, Sourab Mangrulkar, Yuvraj Sharma, Dawood Khan, Abubakar Abid, Daniel Van Strien, Chunte Lee, Douwe Kiela, Alexander M. Rush, Matthieu Cord, Victor Sanh
+Stas Bekman*, Léo Tronchon*, Hugo Laurençon*, Lucile Saulnier*, Amanpreet Singh*, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Daniel Van Strien, Giada Pistilli, Yacine Jernite, Sasha Luccioni, Ezi Ozoani, Younes Belkada, Sylvain Gugger, Amy E. Roberts, Lysandre Debut, Arthur Zucker, Nicolas Patry, Lewis Tunstall, Zach Mueller, Sourab Mangrulkar, Chunte Lee, Yuvraj Sharma, Dawood Khan, Abubakar Abid, Ali Abid, Freddy Boulton, Omar Sanseviero, Carlos Muñoz Ferrandis, Guillaume Salou, Guillaume Legendre, Quentin Lhoest, Douwe Kiela, Alexander M. Rush, Matthieu Cord, Julien Chaumond, Thomas Wolf, Victor Sanh*
 
 # Model Card Contact
 
````
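
The card's central claim in the diff above, that the model "accepts arbitrary sequences of image and text inputs and produces text outputs", maps directly onto the IDEFICS integration in `transformers` (>= 4.32). Below is a minimal inference sketch; the `HuggingFaceM4/idefics-9b` checkpoint name and the example image are assumptions, not taken from this diff:

```python
# Minimal IDEFICS inference sketch (assumed checkpoint; transformers >= 4.32).
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b"  # assumed; an 80b variant also exists
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

# A prompt is a list interleaving images (URLs or PIL images) and text strings,
# mirroring the "arbitrary sequences of image and text inputs" described above.
prompts = [
    [
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "Question: What animal is in this picture? Answer:",
    ],
]
inputs = processor(prompts, return_tensors="pt").to(device)

# Forbid the image placeholder tokens so they do not leak into the output.
bad_words_ids = processor.tokenizer(
    ["<image>", "<fake_token_around_image>"], add_special_tokens=False
).input_ids

generated_ids = model.generate(**inputs, bad_words_ids=bad_words_ids, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Dropping the image from the prompt list exercises the "pure language model without visual inputs" behavior the card mentions; nothing in the call sequence changes beyond the prompt contents.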