AdaptLLM committed on
Commit 4c20326 · verified · 1 Parent(s): c2ba972

Update README.md

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -24,13 +24,27 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 </p>


- ### Updates
+ ***************** **Updates** ********************
+ - [2024/12/9] Released AdaMLLM developed from llava-next-llama3-8b: [AdaMLLM-med-8B](AdaptLLM/medicine-LLaVA-NeXT-Llama3-8B), [AdaMLLM-food-8B](AdaptLLM/food-LLaVA-NeXT-Llama3-8B).
  - [2024/12/7] Released [visual-instruction-synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) used to synthesize task triplets based on image-caption pairs.
  - [2024/12/6] Released AdaMLLM developed from Qwen2-VL-2B and Llama-3.2-11B-Vision: [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/medicine-Qwen2-VL-2B-Instruct), [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct), [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct), [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct)
  - [2024/12/05] Released [biomedicine visual instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) for post-training MLLMs
  - [2024/11/29] Released our paper


+ ## Resources
+ | Model Name | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
+ |:---|:---|:---|:---|:---|:---|
+ | [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | TBD | - |
+ | [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/medicine-Qwen2-VL-2B-Instruct) | AdaptLLM/medicine-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
+ | [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct) | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | TBD | TBD |
+ | [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/medicine-LLaVA-NeXT-Llama3-8B) | AdaptLLM/medicine-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
+ | [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B) | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | TBD | TBD |
+ | [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/medicine-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions) | TBD |
+ | [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | TBD | TBD |
+
+
+
  ## About

  AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training on synthetic supervised tasks derived from unsupervised contexts.
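
For reference, a minimal sketch of loading one of the checkpoints listed in the Resources table, assuming AdaMLLM-med-2B keeps the standard Qwen2-VL interface of its base model in `transformers` (v4.45+); the image path and prompt below are placeholders, not part of the commit above.

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from PIL import Image

# Repo ID taken from the Resources table; assumes the adapted checkpoint
# retains the Qwen2-VL architecture of Qwen2-VL-2B-Instruct.
repo_id = "AdaptLLM/medicine-Qwen2-VL-2B-Instruct"

model = Qwen2VLForConditionalGeneration.from_pretrained(repo_id, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(repo_id)

# Hypothetical local image and question, for illustration only.
image = Image.open("example_image.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe the key findings in this image."},
        ],
    }
]

# Render the chat template, then tokenize text and image together.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```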