---
library_name: transformers
pipeline_tag: image-text-to-text
---

Ferret-UI is the first UI-centric multimodal large language model (MLLM) designed for referring, grounding, and reasoning tasks.
Built on Gemma-2B and Llama-3-8B backbones, it can execute complex UI tasks.
This is the **Llama-3-8B** version of Ferret-UI, introduced in [this paper](https://arxiv.org/pdf/2404.05719) by Apple.

## How to Use 🤗📱

You will first need to download `builder.py`, `conversation.py`, and `inference.py` locally:

```bash
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/conversation.py
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/builder.py
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/inference.py
```
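
If you prefer to stay in Python, the same files can be fetched with `huggingface_hub` instead of `wget` — a minimal sketch, assuming the package is installed and the scripts sit at the repo root:

```python
from huggingface_hub import hf_hub_download

# Download the three helper scripts into the current directory
for filename in ["conversation.py", "builder.py", "inference.py"]:
    hf_hub_download(repo_id="jadechoghari/ferret-gemma",
                    filename=filename, local_dir=".")
```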

### Usage
```python
from inference import infer_ui_task

# Point at a local screenshot and the model path on the Hugging Face Hub
image_path = 'image.jpg'
model_path = 'jadechoghari/Ferret-UI-Llama8b'
```
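
Before running inference it can be worth a quick sanity check that the screenshot actually loads — a small Pillow check on our side (`inference.py` does its own image handling):

```python
from PIL import Image

# Confirm the screenshot opens and inspect its resolution and color mode
with Image.open(image_path) as img:
    print(img.size, img.mode)
```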

### Tasks requiring a bounding box
Choose a task from `['widgetcaptions', 'taperception', 'ocr', 'icon_recognition', 'widget_classification', 'example_0']` and pass the target region along with the prompt:
```python
task = 'widgetcaptions'
region = (50, 50, 200, 200)
result = infer_ui_task(image_path, "Describe the contents of the box.", model_path, task, region=region)
print("Result:", result)
```
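
To compare how the box-grounded tasks interpret the same region, a small loop works — just a sketch reusing the `image_path`, `model_path`, and `region` defined above:

```python
# Run several box-grounded tasks on the same region and print each answer
for task in ['widgetcaptions', 'taperception', 'ocr',
             'icon_recognition', 'widget_classification']:
    result = infer_ui_task(image_path, "Describe the contents of the box.",
                           model_path, task, region=region)
    print(f"{task}: {result}")
```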

### Tasks not requiring a bounding box
Choose a task from `['widget_listing', 'find_text', 'find_icons', 'find_widget', 'conversation_interaction']`:
```python
task = 'conversation_interaction'
result = infer_ui_task(image_path, "How do I navigate to the Games tab?", model_path, task)
print("Result:", result)
```
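
For example, a grounding-style query with `find_text` could look like this — the prompt wording is illustrative, not a required format:

```python
# Ask the model to locate a specific string on the screen
task = 'find_text'
result = infer_ui_task(image_path, "Find the text 'Sign in' on the screen.",
                       model_path, task)
print("Result:", result)
```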

### Tasks with no image processing
Choose a task from `['screen2words', 'detailed_description', 'conversation_perception', 'gpt4']`:
```python
task = 'detailed_description'
result = infer_ui_task(image_path, "Please describe the screen in detail.", model_path, task)
print("Result:", result)
```
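
Putting it together, you can collect several perception outputs for one screenshot in a single pass — a sketch, with the prompts chosen purely for illustration:

```python
# Build a coarse-to-fine report: a one-line summary plus a detailed description
questions = {
    'screen2words': "Summarize the screen in one sentence.",
    'detailed_description': "Please describe the screen in detail.",
}
report = {task: infer_ui_task(image_path, prompt, model_path, task)
          for task, prompt in questions.items()}
for task, answer in report.items():
    print(f"{task}: {answer}")
```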