CogVLM models are supported by LMDeploy for inference

#11
by RunningLeon - opened

Hi, CogVLM is a family of powerful multi-modal models, and they are now supported by LMDeploy.
Those interested in deploying CogVLM can try it out by referring to this doc. BTW, below is an example showing how simple it is to do offline inference with lmdeploy's pipeline.

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Build an inference pipeline directly from the Hugging Face model ID
pipe = pipeline('THUDM/cogvlm2-llama3-chat-19B')

# Load an image from a URL, then pass it together with a text prompt
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```
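If you want to serve the model behind an HTTP endpoint instead of running offline inference, LMDeploy also provides an OpenAI-compatible `api_server` command. A minimal sketch (the port number here is illustrative, not required):

```shell
# Serve the model as an OpenAI-compatible REST API (port is illustrative)
lmdeploy serve api_server THUDM/cogvlm2-llama3-chat-19B --server-port 23333
```

Once the server is up, any OpenAI-style client can send chat requests to it; see the LMDeploy serving docs for the full set of options.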

cool
