---
language:
  - en
tags:
  - information retrieval
  - embedding model
  - visual information retrieval
---

# MiniCPM-Visual-Embedding: An OCR-free Visual Document Embedding Model Based on MiniCPM-V-2.0, as Your Personal Librarian

With MiniCPM-Visual-Embedding, you can build a knowledge base directly from raw PDFs, books, and documents, with no OCR technique or OCR pipeline required. The model takes only page images as document-side inputs and produces vectors representing document pages.
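For example, preparing the document side of such a knowledge base only requires rendering each page to an image. Below is a minimal sketch of that step, assuming PyMuPDF (`pymupdf`) as the renderer; it is not a dependency of this repo, and any PDF rasterizer works.

```python
# Minimal sketch: render PDF pages to PIL images for the document side.
# Assumes PyMuPDF (pip install pymupdf); any PDF-to-image renderer works.
import io

import fitz  # PyMuPDF
from PIL import Image

def pdf_to_page_images(pdf_path: str, dpi: int = 200) -> list:
    """Render every page of a PDF to an RGB PIL image (no OCR involved)."""
    pages = []
    with fitz.open(pdf_path) as doc:
        for page in doc:
            pix = page.get_pixmap(dpi=dpi)
            pages.append(Image.open(io.BytesIO(pix.tobytes("png"))).convert("RGB"))
    return pages
```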

GitHub Repo

Memex Architecture

## News

- 2024-06-27: We released our first visual embedding model, minicpm-visual-embedding-v0.1, on Hugging Face.

- 2024-05-08: We committed our training code (full-parameter tuning with GradCache and DeepSpeed, supporting large batch sizes across multiple GPUs with ZeRO stage 1) and our evaluation code.

## Get started

Pip install all dependencies:

```
Pillow==10.1.0
timm==0.9.10
torch==2.1.2
torchvision==0.16.2
transformers==4.36.0
sentencepiece==0.1.99
```
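You can save the list above as `requirements.txt` and install everything with `pip install -r requirements.txt`.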

First, clone this Hugging Face repo with git, or download it with `huggingface-cli`:

```bash
git lfs install
git clone https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0
```

or

```bash
huggingface-cli download RhapsodyAI/minicpm-visual-embedding-v0
```
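Alternatively, you can pass the repo id `RhapsodyAI/minicpm-visual-embedding-v0` to `from_pretrained` in place of a local path and let `transformers` download the weights for you. Then embed a text query and document page images as follows: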
```python
from transformers import AutoModel
from transformers import AutoTokenizer
from PIL import Image
import torch
from torch import Tensor

device = 'cuda:0'

def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    # With left padding, every sequence ends at the last position; otherwise,
    # gather each sequence's final non-padding position.
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


tokenizer = AutoTokenizer.from_pretrained('/local/path/to/minicpm-visual-embedding-v0', trust_remote_code=True)
model = AutoModel.from_pretrained('/local/path/to/minicpm-visual-embedding-v0', trust_remote_code=True)
model.to(device)
model.eval()

image_1 = Image.open('/local/path/to/document1.png').convert('RGB')
image_2 = Image.open('/local/path/to/document2.png').convert('RGB')

query_instruction = 'Represent this query for retrieving relevant document: '

query = 'Who was elected as president of United States in 2020?'

query_full = query_instruction + query

# Embed text queries (no image on the query side)
q_outputs = model(text=[query_full], image=[None], tokenizer=tokenizer) # [B, s, d]
q_reps = last_token_pool(q_outputs.last_hidden_state, q_outputs.attention_mask) # [B, d]

# Embed image documents (empty text on the document side)
p_outputs = model(text=['', ''], image=[image_1, image_2], tokenizer=tokenizer) # [B, s, d]
p_reps = last_token_pool(p_outputs.last_hidden_state, p_outputs.attention_mask) # [B, d]

# Calculate similarities: [1, d] @ [d, 2] -> [1, 2]
scores = torch.matmul(q_reps, p_reps.T)

print(scores)
```
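With more than a couple of pages, retrieval becomes a nearest-neighbor search over the pooled vectors. Below is a minimal sketch reusing `q_reps` ([1, d]) and the stacked page embeddings `p_reps` ([N, d]) from above; `top_k_pages` is a hypothetical helper, not part of this repo.

```python
import torch

def top_k_pages(q_reps: torch.Tensor, p_reps: torch.Tensor, k: int = 5):
    """Return (page index, score) pairs for the k best-matching pages."""
    scores = torch.matmul(q_reps, p_reps.T).squeeze(0)  # [N] dot-product scores
    k = min(k, p_reps.shape[0])
    values, indices = scores.topk(k)
    return list(zip(indices.tolist(), values.tolist()))

# Rank the two example pages for the query above.
for idx, score in top_k_pages(q_reps, p_reps, k=2):
    print(f"page {idx}: {score:.4f}")
```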