Update README.md

Our model is capable of:

- Help you build a personal library and retrieve book pages from a large collection of books (a minimal retrieval sketch follows the architecture figure below).
- It has only 2.8B parameters and has the potential to run on your PC.
- It works like a human: it reads and comprehends with **vision** and remembers **multimodal** information in its hippocampus.

![Memex Architecture](images/memex.png)
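
A minimal sketch of the retrieval idea, assuming you precompute one embedding per page: `embed_page` and `embed_query` below are hypothetical stand-ins for the checkpoint's actual encoding calls (see [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py) for the real ones).

```python
import numpy as np

def embed_page(page_image) -> np.ndarray:
    # Hypothetical stand-in: encode one page image with the model.
    raise NotImplementedError("replace with the model's image-encoding call")

def embed_query(text: str) -> np.ndarray:
    # Hypothetical stand-in: encode a text query with the model.
    raise NotImplementedError("replace with the model's text-encoding call")

def build_index(page_images) -> np.ndarray:
    # Embed every page once and L2-normalize, so dot product = cosine similarity.
    vecs = np.stack([embed_page(p) for p in page_images])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def search(index: np.ndarray, query: str, k: int = 5) -> np.ndarray:
    # Rank all pages against the query; return the indices of the top k.
    q = embed_query(query)
    q = q / np.linalg.norm(q)
    return np.argsort(-(index @ q))[:k]
```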

- 2024-07-14: 🤗 We released an **online Hugging Face demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/minicpm-visual-embeeding-v0-demo)!
- 2024-07-14: 🚀 We released a **locally deployable Gradio demo** of `Memex`; take a look at [pipeline_gradio.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline_gradio.py). You can build a demo on your PC now! (A minimal sketch of the same shape appears after this list.)
- 2024-07-13: 💻 We released a **locally deployable command-line demo** that retrieves the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py). (A sketch of this flow appears after this list.)
- 2024-06-27: 🎉 We released our first visual embedding model checkpoint on [Hugging Face](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).
- 2024-05-08: 🔥 We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed ZeRO stage-2; supports large batch sizes across multiple GPUs with ZeRO stage-1) and eval code. (An illustrative ZeRO config appears after this list.)
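
`pipeline_gradio.py` wires this kind of retrieval into a web UI. Below is a minimal sketch of the same shape, not the released demo; `search_pages` is a hypothetical hook where the retrieval from the sketch above would plug in.

```python
import gradio as gr

def search_pages(query: str, k: int):
    # Hypothetical hook: run the retrieval from the sketch above and
    # return the top-k page images; see pipeline_gradio.py for the real demo.
    raise NotImplementedError

demo = gr.Interface(
    fn=search_pages,
    inputs=[gr.Textbox(label="Query"), gr.Slider(1, 10, value=5, step=1, label="Top-k")],
    outputs=gr.Gallery(label="Matching pages"),
)
demo.launch()  # serves the demo locally in your browser
```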
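
For the command-line flow, the work is: render each PDF page to an image, embed the pages once, then rank them against a text query. A sketch assuming `pdf2image` for rendering (an assumption; `pipeline.py` may render pages differently) and reusing the hypothetical `build_index`/`search` from the sketch above:

```python
from pdf2image import convert_from_path  # assumed renderer, not necessarily what pipeline.py uses

def retrieve_from_pdf(pdf_path: str, query: str, k: int = 5):
    pages = convert_from_path(pdf_path)        # one PIL image per page
    index = build_index(pages)                 # embed + normalize every page
    hits = search(index, query, k)             # rank pages against the query
    return [(int(i) + 1, pages[i]) for i in hits]  # 1-based page numbers
```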
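
On the training side, GradCache splits a large contrastive batch into memory-sized chunks while keeping the full-batch loss, and DeepSpeed's ZeRO shards optimizer state (and, at stage 2, gradients) across GPUs. An illustrative ZeRO config as a Python dict, with made-up values rather than the repo's actual settings:

```python
# Illustrative DeepSpeed config; the repo's actual settings may differ.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,            # shard optimizer state and gradients
        "overlap_comm": True,  # overlap gradient reduction with the backward pass
    },
}
# engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```
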
# Deploy on your PC