Image-Text-to-Text
Safetensors
llava_llama
BoyuNLP committed (verified)
Commit db1831e · 1 parent: 75d7800

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -5,7 +5,7 @@ pipeline_tag: image-text-to-text
 
 # UGround (The Initial LLaVA-based Version)
 
-**Update: We have trained [stronger model](https://huggingface.co/osunlp/UGround-V1-7B) based on Qwen2-VL with the same data. We suggest using them instead for better performance and more convenient training, inference and deployment.**
+**Update: We have trained [stronger models](https://huggingface.co/osunlp/UGround-V1-7B) based on Qwen2-VL with the same data. We suggest using them instead for better performance and more convenient training, inference and deployment.**
 
 UGround is a strong GUI visual grounding model trained with a simple recipe. Check our homepage and paper for more details. This work is a collaboration between [OSUNLP](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).
 ![radar](https://osu-nlp-group.github.io/UGround/static/images/radar.png)