#15 "How to use python local visual question answering" · opened about 1 month ago by mint20262026
#14 "Help to run the model locally" · opened 2 months ago by Salzani
#13 "Interview request: Thoughts on genAI evaluation & documentation" · opened 2 months ago by evatang
#12 "Regarding Model Weights" · 1 reply · opened 3 months ago by BimsaraRad
#11 "Run omnivision on Nvidia Jetson-Orin" · 1 reply · opened 3 months ago by ravindutbandara
#10 "9x token reduction" · 1 reply · opened 3 months ago by Sijuade
#9 "Error loading model" · 2 replies · opened 3 months ago by iojvsuynv
#8 "nexa-on-colab" · 1 reply · opened 3 months ago by sdyy
#7 "Compare with llava-onevision-894M and internvl2-938M?" · 3 replies · opened 3 months ago by nemonameless
#6 "Video or multiple frames." · 1 reply · opened 3 months ago by monamie
#5 "transformers version?" · 1 reply · opened 3 months ago by CHNtentes
#4 "How to call it through transformer" · 2 replies · opened 3 months ago by awelker
#3 "Text/vision parameter split" · 1 reply · opened 3 months ago by AlexThompson
#2 "How do you encode an image in only 81 tokens?" · 5 replies · opened 3 months ago by ChristineLai
#1 "about ocr" · 1 reply · opened 3 months ago by MiaHawthorne