---
language:
- en
---
# Adapting Multimodal Large Language Models to Domains via Post-Training

This repository provides an implementation preview of our paper, **On Domain-Specific Post-Training for Multimodal Large Language Models**.

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation. 
**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.** 
**(2) Training Pipeline**: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity in domain-specific post-training. 
**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
</p>


<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/BzpZU5u7DrS6p0d58PQIs.png" width="900">
</p>
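
To make the single-stage pipeline in (2) concrete, here is a minimal sketch that mixes image-caption pairs (recast as captioning instructions) with synthesized visual instruction tasks into one shuffled training pool, rather than training on them in two sequential stages. The file names, field names, and example records below are illustrative assumptions, not the released data format.

```python
import json
import random

# Hypothetical inputs: domain image-caption pairs and synthesized visual
# instruction tasks (field names are assumptions, not the released format).
caption_pairs = [
    {"image": "colonoscopy_001.jpg", "caption": "A polyp in the sigmoid colon."},
]
synthetic_tasks = [
    {
        "image": "colonoscopy_001.jpg",
        "instruction": "What abnormality is visible, and where is it located?",
        "response": "A polyp is visible in the sigmoid colon.",
    },
]

def caption_to_instruction(pair):
    # Recast an image-caption pair as a captioning instruction so it can be
    # trained jointly with the synthesized tasks in a single stage.
    return {
        "image": pair["image"],
        "instruction": "Describe this image in detail.",
        "response": pair["caption"],
    }

# Single-stage mix: one shuffled pool of captioning and synthesized tasks,
# instead of stage 1 (captions) followed by stage 2 (instructions).
training_set = [caption_to_instruction(p) for p in caption_pairs] + synthetic_tasks
random.seed(0)
random.shuffle(training_set)

with open("single_stage_train.json", "w") as f:
    json.dump(training_set, f, indent=2)
```

The intended effect of combining the two sources in one pool is that every training batch interleaves caption supervision with the more diverse synthesized tasks.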


### Updates  
- **[2024/11/29]** Released our paper.


## About

AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training on synthetic supervised tasks derived from unsupervised contexts.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/2aPl6mKIyHeQp8SO4TXAk.png" width="700">
</p>


- [AdaptLLM](https://huggingface.co/papers/2309.09530)  
  We employ rule-based methods to extract tasks from domain-specific corpora, reformatting them into reading comprehension tasks for continued pre-training. Our 7B finance model outperforms domain-specific models of much larger scales, such as BloombergGPT-50B.

- AdaMLLM  
  We extend supervised task synthesis to multimodality, introducing a unified visual instruction synthesizer to extract instruction-response pairs from domain-specific image-caption pairs. Our synthetic tasks outperform those generated by manual rules, GPT-4, and GPT-4V in improving domain-specific performance for MLLMs.
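
As a rough illustration of the synthesizer's role (not the released implementation), the sketch below builds a synthesis prompt from an image-caption pair, calls a stand-in `run_synthesizer` function in place of the actual open-source MLLM, and parses the raw output into an instruction-response record. The prompt wording, the `Instruction:`/`Response:` output format, and the stub function are all assumptions.

```python
# Illustrative sketch only: `run_synthesizer` is a stand-in for querying an
# open-source MLLM; the prompt and output format are assumptions.
def build_prompt(caption: str) -> str:
    return (
        "You are given a domain-specific image and its caption.\n"
        f"Caption: {caption}\n"
        "Write one question a domain expert might ask about the image, "
        "followed by its answer, as 'Instruction:' and 'Response:' lines."
    )

def run_synthesizer(image_path: str, prompt: str) -> str:
    # Stub standing in for an actual MLLM call; returns a canned example.
    return (
        "Instruction: What abnormality is visible in this endoscopic image?\n"
        "Response: A small polyp is visible on the colon wall."
    )

def parse_pair(raw: str) -> dict:
    # Split the model output back into a structured instruction-response pair.
    instruction, response = "", ""
    for line in raw.splitlines():
        if line.startswith("Instruction:"):
            instruction = line[len("Instruction:"):].strip()
        elif line.startswith("Response:"):
            response = line[len("Response:"):].strip()
    return {"instruction": instruction, "response": response}

caption = "Endoscopic view showing a small polyp on the colon wall."
raw_output = run_synthesizer("endoscopy_042.jpg", build_prompt(caption))
task = {"image": "endoscopy_042.jpg", **parse_pair(raw_output)}
print(task)
```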


## Citation
If you find our work helpful, please cite us.

[AdaptLLM](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{adaptllm,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```