bokesyo committed
Commit c26facf
1 Parent(s): d917af7

Update README.md

Files changed (1):
  1. README.md +9 -2
README.md CHANGED
@@ -19,7 +19,7 @@ The model only takes images as document-side inputs and produce vectors represen
 
 # News
 
- - 2024-07-14: We released a Gradio demo of `miniCPM-visual-embedding-v0`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). We consider hosting a huggingface space to deploy this.
+ - 2024-07-14: We released a Gradio demo of `miniCPM-visual-embedding-v0`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can run `pipeline_gradio.py` to build a demo on your PC.
 
 - 2024-07-13: We released a command-line demo of `miniCPM-visual-embedding-v0` for users to retrieve the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
 
@@ -92,6 +92,14 @@ print(scores)
 # tensor([[-0.0112, 0.3316, 0.2376]], device='cuda:0')
 ```
 
+ # Todos
+
+ - Release huggingface space demo.
+
+ - Release the evaluation results.
+
+ - Release technical report.
+
 # Limitations
 
 - This checkpoint is an alpha version and may not be strong on your tasks; for bad cases, please create an issue to let us know, many thanks!
@@ -100,7 +108,6 @@ print(scores)
 
 - The inference speed is low because the vision encoder uses `timm`, which does not yet support `flash-attn`.
 
-
 # Citation
 
 If you find our work useful, please consider citing us:
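The `scores` tensor shown in the diff context above is the result of comparing one query embedding against several page embeddings. As a rough, hypothetical sketch only (not the model's actual API — the real embeddings come from the model's encoders), scores of that shape can be produced as a matrix product of L2-normalized vectors:

```python
import torch

# Hypothetical stand-in embeddings: 1 query vector and 3 page vectors.
# In the real pipeline these would come from the model; dimension 8 is arbitrary.
torch.manual_seed(0)
query = torch.nn.functional.normalize(torch.randn(1, 8), dim=-1)
pages = torch.nn.functional.normalize(torch.randn(3, 8), dim=-1)

# Cosine similarity via matrix product of L2-normalized vectors.
scores = query @ pages.T  # shape (1, 3): one score per candidate page
best_page = scores.argmax(dim=-1)  # index of the most relevant page
print(scores, best_page)
```

Because both sides are normalized, each score lies in [-1, 1], and retrieving the most relevant pages reduces to a top-k over this product.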