AdaptLLM committed
Commit 3bf3a0a · verified · 1 Parent(s): d167b68

Update README.md

Files changed (1): README.md +7 -7
README.md CHANGED
@@ -41,10 +41,6 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 
 **Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)
 
-
-## Contact
-Daixuan Cheng: `[email protected]`
-
 ## About
 
 AdaMLLM is our latest effort to enhance task generalization of (M)LLMs by scaling synthetic supervised tasks based on unsupervised contexts.
@@ -64,10 +60,14 @@ AdaMLLM is our latest effort to enhance task generalization of (M)LLMs by scalin
 
 Looking ahead, we envision further broadening the scope of supervised task synthesis, efficiently enhancing the general capabilities of trained models.
 
+## Contact
+Daixuan Cheng: `[email protected]`
+
+
 ## Citation
 If you find our work helpful, please cite us.
 
-[AdaMLLM](https://huggingface.co/papers/2411.19930)
+[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930)
 ```bibtex
 @article{adamllm,
 title={On Domain-Specific Post-Training for Multimodal Large Language Models},
@@ -79,7 +79,7 @@ If you find our work helpful, please cite us.
 
 [Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
 ```bibtex
-@article{cheng2024instruction,
+@article{instructPT,
 title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
 author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
 journal={arXiv preprint arXiv:2406.14491},
@@ -90,7 +90,7 @@ If you find our work helpful, please cite us.
 [Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
 ```bibtex
 @inproceedings{
-cheng2024adapting,
+adaptllm,
 title={Adapting Large Language Models via Reading Comprehension},
 author={Daixuan Cheng and Shaohan Huang and Furu Wei},
 booktitle={The Twelfth International Conference on Learning Representations},