Junk Responses from Local Model
When I try to run the commands from the model card, I run into the errors shown below:
File "/root/.cache/huggingface/modules/transformers_modules/MiniCPM2p6/configuration_minicpm.py", line 9, in
from .modeling_navit_siglip import SiglipVisionConfig
ModuleNotFoundError: No module named 'transformers_modules.MiniCPM2p6'
I tried to fix the problem by changing the relative imports to absolute ones in several of the downloaded .py scripts. After those fixes the commands from the model card seem to run and the weights load into memory, but I get junk responses to every question.
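For illustration, this is the pattern of the edit I made (a sketch, not the exact diff; the file and class names are the ones from the traceback above, and it assumes /root/MiniCPM-V-2_6/ is on sys.path):

# configuration_minicpm.py, line 9, as shipped:
#   from .modeling_navit_siglip import SiglipVisionConfig
# my edit, relying on the model directory being on sys.path:
from modeling_navit_siglip import SiglipVisionConfig

A typical exchange after those fixes: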
>>> msgs = [{'role': 'user', 'content': 'Identify yourself'}]
>>> res = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
>>> res
'_\n， a\n,,,\n\n\tβ'
================= Full diary ====================
(myenv) root@Spurge:~# python3
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModel, AutoTokenizer
>>> import torch
>>> from PIL import Image
>>> sys.path.append("/root/MiniCPM-V-2_6/")
Traceback (most recent call last):
File "", line 1, in
NameError: name 'sys' is not defined. Did you forget to import 'sys'?
>>> import sys
>>> sys.path.append("/root/MiniCPM-V-2_6/")
>>> sys.path.append("/root/.cache/huggingface/modules/transformers_modules/MiniCPM-V-2_6/")
>>> tokenizer = AutoTokenizer.from_pretrained("MiniCPM-V-2_6", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("MiniCPM-V-2_6", trust_remote_code=True, attn_implementation='flash_attention_2', torch_dtype=torch.bfloat16)
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 10.55it/s]
>>> model = model.eval().cuda()
>>> question = 'Do OCR on the image and tersely return the text in the image'
>>> image = Image.open('/root/tsim2.png').convert('RGB')
>>> msgs = [{'role': 'user', 'content': [image, question]}]
>>> res = model.chat(image=image, msgs=msgs, tokenizer=tokenizer)
/root/myenv/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py:520: FutureWarning: The `image_processor_class` argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
>>> res
'β\t，\n\n\n\n,\n，``\n\n\n\n\n，\n the'
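For completeness, here are the same steps from the diary collapsed into one script (the paths, model directory, and test image are from my setup; nothing new is introduced):

import sys
sys.path.append("/root/MiniCPM-V-2_6/")
sys.path.append("/root/.cache/huggingface/modules/transformers_modules/MiniCPM-V-2_6/")

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load tokenizer and model exactly as in the model card
tokenizer = AutoTokenizer.from_pretrained("MiniCPM-V-2_6", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "MiniCPM-V-2_6",
    trust_remote_code=True,
    attn_implementation='flash_attention_2',
    torch_dtype=torch.bfloat16,
)
model = model.eval().cuda()

# Simple OCR request on a local test image
image = Image.open('/root/tsim2.png').convert('RGB')
question = 'Do OCR on the image and tersely return the text in the image'
msgs = [{'role': 'user', 'content': [image, question]}]
res = model.chat(image=image, msgs=msgs, tokenizer=tokenizer)
print(res)  # prints garbage characters instead of the image text

It runs end to end without errors but still prints the same garbage.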
Any idea what's going on here? My environment is otherwise fine; I'm able to get other models working.
Thanks