---
library_name: transformers
datasets:
- ucsahin/TR-VLM-DPO-Dataset
language:
- tr
pipeline_tag: image-text-to-text
license: apache-2.0
base_model: ucsahin/TraVisionLM-base
---
## This is the DPO-optimized version of the base model [TraVisionLM-base](https://huggingface.co/ucsahin/TraVisionLM-base). Compared to the base model, the DPO version should answer questions more accurately, more truthfully, and in more detail.

### You can check out the model at: [TraVisionLM-DPO-Demo](https://huggingface.co/spaces/ucsahin/TraVisionLM-Demo)

### Visual Language Model DPO Training: [Colab Notebook](https://colab.research.google.com/drive/1ypEPQ3RBX3_X7m9qfmU-Op-vGgOjab_z?usp=sharing)

### Model Description

- **Developed by:** [ucsahin](https://huggingface.co/ucsahin)
- **Model type:** [Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text)
- **Language(s) (NLP):** *Turkish*
- **License:** *Apache license 2.0*

---

## English

# 🎉 Introducing TraVisionLM: The First of Its Kind! 🚀

🌟 This is a very fast and small (only 875M parameters) visual language model on Hugging Face that responds to Turkish instructions given an image input! 🌟

✨ Developed to be compatible with the Transformers library, TraVisionLM is a breeze to load, fine-tune, and use for lightning-fast inference, all without needing any external libraries! ⚡️

Ready to experience the Turkish visual language model? Let's go! 🇹🇷🖼️🤖

## Türkçe

# 🎉 TraVisionLM: Türünün İlk Örneği! 🚀

🌟 Çok hızlı ve küçük boyutlu (sadece 875M parametre) Türkçe görsel dil modeli! Bir görüntü ve Türkçe talimat verildiğinde Türkçe yanıt üretir! 🌟

✨ Transformers kütüphanesi ile uyumlu olarak geliştirilen TraVisionLM modeli ile, yükleme, eğitme ve dış kütüphaneler kullanmadan hızlı sonuçlar almak çok kolay! ⚡️

Türkçe görsel dil modelini deneyimlemeye hazır mısınız? Hadi başlayalım! 🇹🇷🖼️🤖

---

## How to Get Started with the Model

In Transformers, you can load the model and run inference as follows:

**IMPORTANT NOTE:** The TraVisionLM model is not yet integrated natively into the Transformers library, so you need to set `trust_remote_code=True` when loading it. This downloads the `configuration_travisionlm.py`, `modeling_travisionlm.py`, and `processing_travisionlm.py` files from the repo. You can check the content of these files under the *Files and Versions* tab and pin specific versions if you have any concerns about malicious code.

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests
from PIL import Image

model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "Açıkla"  # short caption
# prompt = "Detaylı açıkla"  # detailed caption
# prompt = "Araba ne renktir?"  # visual qa
# prompt = "Resmin odak noktası nedir?"  # visual qa
# prompt = "Araba nerede duruyor?"  # visual qa

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)
output_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print("Model response: ", output_text)
```
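Note that `batch_decode` returns the prompt together with the completion (you can see the prompts echoed in the sample outputs further below). If you only want the model's answer, you can slice the prompt tokens off the generated sequence first. A minimal sketch, reusing `inputs` and `outputs` from above and assuming the default decoder-only behavior where `generate` returns the prompt followed by the new tokens:

```python
# Skip the prompt tokens at the start of the generated sequence and keep
# only the newly generated part.
input_len = inputs["input_ids"].shape[1]
response = processor.batch_decode(outputs[:, input_len:], skip_special_tokens=True)[0]
print("Response only:", response)
```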
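If the remote-code note above is a concern, you can also pin both the weights and the downloaded code files to a specific commit with the `revision` argument of `from_pretrained`. A minimal sketch; the hash below is a placeholder, replace it with a reviewed commit from the repo's *Files and Versions* tab:

```python
# Pin the checkpoint (and the remote code shipped with it) to one commit.
# "abc1234" is a placeholder, not a real commit hash.
model = AutoModelForCausalLM.from_pretrained(
    "ucsahin/TraVisionLM-DPO",
    trust_remote_code=True,
    revision="abc1234",  # placeholder: use a reviewed commit hash
    device_map="cuda",
)
processor = AutoProcessor.from_pretrained(
    "ucsahin/TraVisionLM-DPO",
    trust_remote_code=True,
    revision="abc1234",  # placeholder: use a reviewed commit hash
)
```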
You can also perform batch inference as follows (make sure that all images have a prompt text associated with them):

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests
from PIL import Image

model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt_list = [
    'Açıkla',
    'Detaylı açıkla',
    'Araba nerede duruyor?',
    'Arabanın rengi nedir?',
]

inputs = processor(text=prompt_list, images=len(prompt_list)*[image], padding="longest", return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)
output_text_list = processor.batch_decode(outputs, skip_special_tokens=True)

for output_text in output_text_list:
    print(f"Model response: {output_text}\n\n\n")
```

The output will look like this:

```
"""
Model response: Açıkla Bir binanın önünde, sokakta park halindeki mavi bir Volkswagen Beetle.

Model response: Detaylı açıkla Bu görüntüde, bir taş döşeli sokakta park edilmiş yeşil ve mavi bir Volkswagen Beetle bulunmaktadır. Arka planda iki sarı bina vardır. Araba kameraya doğru bakmaktadır. Görüntü net odaklanmıştır ve renkler canlıdır. Görsel tarzı gerçekçidir.

Model response: Araba nerede duruyor? Araba, sarı bir binanın yanında sokakta park edilmiş.

Model response: Arabanın rengi nedir? Araba turkuaz veya limon yeşili renktedir.
"""
```

---
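The DPO training itself is covered by the Colab notebook linked above. For orientation, here is a minimal, hypothetical sketch of what that stage looks like with trl's `DPOTrainer`; the column names and hyperparameters are assumptions, and a visual language model additionally needs the images run through the processor, which this text-only sketch omits:

```python
# Hypothetical sketch of the DPO stage; the linked Colab notebook is the
# actual recipe. Image inputs, which a VLM needs, are omitted here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoProcessor
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("ucsahin/TraVisionLM-base", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("ucsahin/TraVisionLM-base", trust_remote_code=True)

# DPOTrainer expects "prompt", "chosen" and "rejected" columns; that the
# dataset uses exactly these names is an assumption.
train_dataset = load_dataset("ucsahin/TR-VLM-DPO-Dataset", split="train")

args = DPOConfig(output_dir="TraVisionLM-DPO", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl creates a frozen reference copy of the model
    args=args,
    train_dataset=train_dataset,
    tokenizer=processor.tokenizer,  # `processing_class` in recent trl versions
)
trainer.train()
```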