Update README.md
# Memex: OCR-free Visual Document Embedding Model as Your Personal Librarian
The model takes only images as document-side inputs and produces vectors representing document pages. Memex is trained on over 200k query–document pairs whose documents include textual documents, visual documents, arXiv figures, plots, charts, industry documents, textbooks, ebooks, and openly available PDFs. Its performance is on par with our ablation text embedding model on text-oriented documents, and it has an advantage on visually intensive documents.
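As a rough illustration of that interface, here is a minimal sketch: it loads the checkpoint with `transformers` (`trust_remote_code=True`), embeds page images and a text query, and scores them by cosine similarity. Note that `embed_images` and `embed_texts` are hypothetical placeholder names for the model's custom API, not confirmed entry points; see `pipeline.py` in this repo for the authoritative usage.

```python
# Minimal retrieval sketch. embed_images / embed_texts are hypothetical
# placeholders for the model's custom API -- see pipeline.py in this
# repo for the real calls.
import torch
from PIL import Image
from transformers import AutoModel

MODEL = "RhapsodyAI/minicpm-visual-embedding-v0"

model = AutoModel.from_pretrained(MODEL, trust_remote_code=True)
model.eval()

# Document side: page images only, no OCR text.
pages = [
    Image.open("page_1.png").convert("RGB"),
    Image.open("page_2.png").convert("RGB"),
]
# Query side: natural-language text.
query = "What accuracy does the paper report on the benchmark?"

with torch.no_grad():
    doc_embs = model.embed_images(pages)    # hypothetical: (num_pages, dim)
    query_emb = model.embed_texts([query])  # hypothetical: (1, dim)

# Rank pages by cosine similarity to the query.
scores = torch.nn.functional.cosine_similarity(query_emb, doc_embs)
print(scores.argsort(descending=True))
```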
Our model is capable of:
## News
- 2024-07-14: 🤗 We released an **online Hugging Face demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/minicpm-visual-embeeding-v0-demo)!

- 2024-07-14: 🚀 We released a **locally deployable Gradio demo** of `Memex`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can run `pipeline_gradio.py` to build a demo on your PC.

- 2024-07-13: 💻 We released a **locally deployable command-line demo** of `Memex` that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py). A sketch of this PDF-ranking flow appears after this list.

- 2024-06-27: 🚀 We released our first visual embedding model checkpoint on [Hugging Face](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).

- 2024-05-08: 🔥 We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed, supporting large batch sizes across multiple GPUs with ZeRO stage 1) and eval code.
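For the command-line flow mentioned above, the sketch below shows one way to feed a PDF to the embedder, assuming `pdf2image` (with poppler installed) for rasterization; as before, `embed_images` and `embed_texts` are hypothetical stand-ins for the model's custom API, and `pipeline.py` is the real implementation.

```python
# Sketch: rank the pages of a PDF against a text query.
# Assumes pdf2image (plus poppler) for rasterization; embed_images /
# embed_texts are hypothetical stand-ins for the model's custom API.
import torch
from pdf2image import convert_from_path

def rank_pdf_pages(model, pdf_path: str, query: str, top_k: int = 3):
    # Rasterize each PDF page to a PIL image.
    pages = convert_from_path(pdf_path, dpi=200)
    with torch.no_grad():
        doc_embs = model.embed_images(pages)    # hypothetical API
        query_emb = model.embed_texts([query])  # hypothetical API
    scores = torch.nn.functional.cosine_similarity(query_emb, doc_embs)
    best = scores.topk(min(top_k, len(pages)))
    # Return (page_index, score) pairs, best match first.
    return [(int(i), float(s)) for i, s in zip(best.indices, best.values)]
```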
If you find our work useful, please consider citing us:
```bibtex
@misc{RhapsodyEmbedding2024,
    author = {RhapsodyAI},
    title = {Memex: OCR-free Visual Document Embedding Model as Your Personal Librarian},
    year = {2024},
    howpublished = {\url{https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0}},
    note = {Accessed: 2024-06-28}
}
```