Update README.md
README.md
@@ -19,6 +19,8 @@ The model only takes images as document-side inputs and produce vectors represen

# News

- 2024-07-13: We released a command-line demo that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).

- 2024-06-27: We released our first visual embedding model checkpoint minicpm-visual-embedding-v0 on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).

- 2024-05-08: We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed; supports large batch sizes across multiple GPUs with ZeRO stage 1) and eval code.
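For context on the 2024-07-13 entry, the demo's idea is to render each PDF page to an image, embed every page and the query, and rank pages by similarity. Below is a minimal, hypothetical sketch of that flow under stated assumptions: page rendering via pdf2image and cosine-similarity ranking with NumPy. The `embed_pages` and `embed_query` functions are placeholders, not the model's actual API; the real interface lives in the linked pipeline.py.

```python
# Minimal sketch (not the official demo): rank PDF pages by embedding
# similarity to a text query. The two embed_* functions are hypothetical
# placeholders -- the model's real encoding code is in pipeline.py.
import numpy as np
from pdf2image import convert_from_path  # renders each PDF page to a PIL image


def embed_pages(images):
    """Hypothetical placeholder: one L2-normalized vector per page image."""
    raise NotImplementedError("plug in the model's image-encoding code here")


def embed_query(text):
    """Hypothetical placeholder: one L2-normalized vector for the text query."""
    raise NotImplementedError("plug in the model's text-encoding code here")


def top_k_pages(pdf_path, query, k=3):
    pages = convert_from_path(pdf_path, dpi=150)       # handles long PDFs page by page
    page_vecs = np.stack(embed_pages(pages))           # shape: (n_pages, dim)
    query_vec = np.asarray(embed_query(query))         # shape: (dim,)
    scores = page_vecs @ query_vec                      # cosine similarity on normalized vectors
    best = np.argsort(-scores)[:k]
    return [(int(i) + 1, float(scores[i])) for i in best]  # 1-based page numbers with scores
```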