---
license: apache-2.0
---

A tiny randomly-initialized [ViLT](https://arxiv.org/abs/2102.03334) model used for unit tests in the Transformers VQA pipeline.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image

# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("hf-internal-testing/tiny-vilt-random-vqa")
model = ViltForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-vilt-random-vqa")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
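
Since the card describes this checkpoint as a test fixture for the Transformers VQA pipeline, the same model can also be exercised through the high-level `pipeline` API. The sketch below is a minimal example, assuming the `visual-question-answering` task is available in your `transformers` install; because the weights are random, the predicted answers are not meaningful.

```python
from transformers import pipeline

# build a visual-question-answering pipeline around the tiny random checkpoint
vqa = pipeline(
    "visual-question-answering",
    model="hf-internal-testing/tiny-vilt-random-vqa",
)

# the pipeline accepts an image (URL, local path, or PIL image) plus a question
result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)

# returns a list of {"score": ..., "answer": ...} dicts; answers are arbitrary here
print(result)
```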