Model Description

Ui-Tars-7B-Instruct-Finetuned-Os-Atlas is a GUI grounding model fine-tuned from UI-TARS-7B-DPO.

This model was fine-tuned on the OS-Copilot dataset: OS-Copilot.

Evaluation Results

We evaluated our model on two benchmarks: ScreenSpot Pro and ScreenSpot v2.

We also include the evaluation scripts used for these benchmarks. The table below compares our model's performance against the base model's.

| Model | Size | ScreenSpot Pro | ScreenSpot v2 |
|-------|------|----------------|---------------|
| UI-TARS-7B-DPO (base) | 7B | 27.0 | 83.0 |
| Ui-Tars-7B-Instruct-Finetuned-Os-Atlas (ours) | 7B | 33.0 | 91.8 |

Note: the base model scores slightly lower here than the scores reported in the paper because the prompts used for that evaluation are not publicly available. We used the default prompts when evaluating both the base and fine-tuned models.
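
For context on what these numbers measure: ScreenSpot-style benchmarks typically count a prediction as correct when the predicted click point falls inside the ground-truth bounding box of the target element. Below is a minimal sketch of that scoring rule; the function and data layout are illustrative, not taken from our evaluation scripts (see the linked scripts for the exact logic):

```python
def is_hit(pred_xy, gt_box):
    """Return True if the predicted click point lands inside the
    ground-truth box (x1, y1, x2, y2). Illustrative only."""
    x, y = pred_xy
    x1, y1, x2, y2 = gt_box
    return x1 <= x <= x2 and y1 <= y <= y2

# Example: accuracy over (predicted point, ground-truth box) pairs
preds = [((120, 45), (100, 30, 160, 60)), ((400, 300), (10, 10, 50, 50))]
accuracy = sum(is_hit(p, b) for p, b in preds) / len(preds)
print(f"grounding accuracy: {accuracy:.1%}")  # 50.0%
```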

Training procedure

This model was trained with supervised fine-tuning (SFT) using LoRA.
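
The exact training configuration is not published here, but the sketch below shows what an SFT + LoRA setup with peft typically looks like. The base checkpoint path, rank, alpha, dropout, and target modules are all assumed values for illustration, not the hyperparameters used for this model:

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2VLForConditionalGeneration

# Assumed base checkpoint path for illustration.
base = Qwen2VLForConditionalGeneration.from_pretrained("bytedance-research/UI-TARS-7B-DPO")

# Assumed LoRA hyperparameters, not the ones used for this model.
lora_config = LoraConfig(
    r=16,                # rank of the low-rank update matrices
    lora_alpha=32,       # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

LoRA keeps the base weights frozen and trains only the low-rank adapter matrices, which keeps the memory footprint of fine-tuning a 7B model manageable.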

Evaluation Scripts

Evaluation scripts are available here: Screenspot_Ui-Tars

Quick Start

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model on the available device(s); bfloat16 and flash attention
# require a suitable GPU (and the flash-attn package) -- drop those
# arguments for a CPU-only setup.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas", 
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
# Default processor (handles text tokenization and image preprocessing)
processor = AutoProcessor.from_pretrained("Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas")
# Example input
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/image.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
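
Since this is a grounding model, a typical request pairs a screenshot with a target instruction rather than a generic captioning prompt. The snippet below is a hedged sketch: the instruction wording and the coordinate-parsing regex are assumptions about the output format, not a documented contract, so inspect the raw model output for your inputs before relying on a parser.

```python
import re

# Hypothetical grounding-style request (replace with your own screenshot and target).
grounding_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/screenshot.png"},
            {"type": "text", "text": "Output the click position for: the Settings button"},
        ],
    }
]

# ... run the same prepare/generate/decode steps as above on grounding_messages ...

# Assuming the decoded output contains a coordinate pair like "(512, 384)",
# a simple regex can pull it out; adjust the pattern to the format you observe.
match = re.search(r"\((\d+)\s*,\s*(\d+)\)", output_text[0])
if match:
    x, y = int(match.group(1)), int(match.group(2))
    print(f"predicted click point: ({x}, {y})")
```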
