added the sample code

#1 by Anash - opened
Files changed (1)
  1. README.md +28 -1
README.md CHANGED
````diff
@@ -2,4 +2,31 @@
 license: apache-2.0
 ---
 
-A tiny randomly-initialized [ViLT](https://arxiv.org/abs/2102.03334) used for unit tests in the Transformers VQA pipeline
+A tiny randomly-initialized [ViLT](https://arxiv.org/abs/2102.03334) used for unit tests in the Transformers VQA pipeline
+
+### How to use
+
+Here is how to use this model in PyTorch:
+
+```python
+from transformers import ViltProcessor, ViltForQuestionAnswering
+import requests
+from PIL import Image
+
+# prepare image + question
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+image = Image.open(requests.get(url, stream=True).raw)
+text = "How many cats are there?"
+
+processor = ViltProcessor.from_pretrained("hf-internal-testing/tiny-vilt-random-vqa")
+model = ViltForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-vilt-random-vqa")
+
+# prepare inputs
+encoding = processor(image, text, return_tensors="pt")
+
+# forward pass
+outputs = model(**encoding)
+logits = outputs.logits
+idx = logits.argmax(-1).item()
+print("Predicted answer:", model.config.id2label[idx])
+```
````