ZhangYuanhan committed
Commit d10c7bb • 1 Parent(s): d593f18
Update README.md

README.md CHANGED
@@ -130,7 +130,7 @@ model-index:
 The LLaVA-OneVision models are 7/72B parameter models trained on [LLaVA-NeXT-Video-SFT](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Video-SFT-Data), based on Qwen2 language model with a context window of 32K tokens.
 
 - **Repository:** [LLaVA-VL/LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT?tab=readme-ov-file)
-- **Point of Contact:** [Yuanhan Zhang](
+- **Point of Contact:** [Yuanhan Zhang](https://zhangyuanhan-ai.github.io/)
 - **Languages:** English, Chinese
 
 
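For context on the model this README describes, here is a minimal sketch of loading a LLaVA-OneVision checkpoint with Hugging Face transformers. The repo id `llava-hf/llava-onevision-qwen2-7b-ov-hf` is an assumption (a transformers-converted 7B checkpoint), not something this commit specifies; the original lmms-lab checkpoints are run through the LLaVA-VL/LLaVA-NeXT codebase linked above.

```python
import torch
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

# Assumed repo id: a transformers-converted LLaVA-OneVision 7B checkpoint,
# not confirmed by this commit.
model_id = "llava-hf/llava-onevision-qwen2-7b-ov-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Text-only prompt through the chat template; image or video inputs would be
# passed to the processor alongside the text.
conversation = [
    {"role": "user", "content": [{"type": "text", "text": "What is LLaVA-OneVision?"}]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```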