---
library_name: transformers
pipeline_tag: image-text-to-text
---


Ferret-UI is the first UI-centric multimodal large language model (MLLM), designed for referring, grounding, and reasoning tasks on mobile UI screens.
Built on Gemma-2B and Llama-3-8B, it can execute complex UI tasks.
This is the **Llama-3-8B** version of Ferret-UI, introduced in [this paper](https://arxiv.org/pdf/2404.05719) by Apple.


## How to Use 🤗📱

You will first need to download `builder.py`, `conversation.py`, and `inference.py` locally:

```bash
wget https://huggingface.co/jadechoghari/Ferret-UI-Llama8b/raw/main/conversation.py
wget https://huggingface.co/jadechoghari/Ferret-UI-Llama8b/raw/main/builder.py
wget https://huggingface.co/jadechoghari/Ferret-UI-Llama8b/raw/main/inference.py
```
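
Alternatively, the same files can be fetched programmatically with `huggingface_hub` (a minimal sketch; assumes the `huggingface_hub` package is installed):

```python
from huggingface_hub import hf_hub_download

# Download the helper modules next to your script so `from inference import ...` resolves
for filename in ["conversation.py", "builder.py", "inference.py"]:
    hf_hub_download(
        repo_id="jadechoghari/Ferret-UI-Llama8b",
        filename=filename,
        local_dir=".",
    )
```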

### Usage
```python
from inference import infer_ui_task

# Path to a local screenshot and the model repo on the Hugging Face Hub
image_path = 'image.jpg'
model_path = 'jadechoghari/Ferret-UI-Llama8b'
```

### Tasks not requiring a bounding box
Choose a task from `['widget_listing', 'find_text', 'find_icons', 'find_widget', 'conversation_interaction']`:
```python
task = 'conversation_interaction'
result = infer_ui_task(image_path, "How do I navigate to the Games tab?", model_path, task)
print("Result:", result)
```
### Tasks requiring a bounding box
Choose a task from `['widgetcaptions', 'taperception', 'ocr', 'icon_recognition', 'widget_classification', 'example_0']`:
```python
task = 'widgetcaptions'
region = (50, 50, 200, 200)  # region of interest, assumed to be (x1, y1, x2, y2) pixel coordinates
result = infer_ui_task(image_path, "Describe the contents of the box.", model_path, task, region=region)
print("Result:", result)
```

### Tasks with no image processing
Choose a task from `['screen2words', 'detailed_description', 'conversation_perception', 'gpt4']`:
```python
task = 'detailed_description'
result = infer_ui_task(image_path, "Please describe the screen in detail.", model_path, task)
print("Result:", result)
```
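
Putting the pieces together, here is a minimal sketch that runs one task from each category on the same screenshot. All task names, arguments, and the `region` format come from the examples above; `image.jpg` is a placeholder path.

```python
from inference import infer_ui_task

image_path = 'image.jpg'  # placeholder: path to a UI screenshot
model_path = 'jadechoghari/Ferret-UI-Llama8b'

# 1. Task without a bounding box
print(infer_ui_task(image_path, "How do I navigate to the Games tab?", model_path,
                    'conversation_interaction'))

# 2. Task with a bounding box (region of interest)
print(infer_ui_task(image_path, "Describe the contents of the box.", model_path,
                    'widgetcaptions', region=(50, 50, 200, 200)))

# 3. Task with no image processing
print(infer_ui_task(image_path, "Please describe the screen in detail.", model_path,
                    'detailed_description'))
```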