nielsr (HF staff) committed
Commit b92b6fd
1 Parent(s): e09382f

Update README.md

Files changed (1):
  1. README.md +3 -3

README.md CHANGED
@@ -16,7 +16,7 @@ widget:
 
 # LeViT
 
-LeViT128S model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
+LeViT-192 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
 ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
 
 Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
@@ -33,8 +33,8 @@ import requests
 url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
 image = Image.open(requests.get(url, stream=True).raw)
 
-feature_extractor = LevitFeatureExtractor.from_pretrained('anugunj/levit-192')
-model = LevitForImageClassificationWithTeacher.from_pretrained('anugunj/levit-192')
+feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-192')
+model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-192')
 
 inputs = feature_extractor(images=image, return_tensors="pt")
 outputs = model(**inputs)
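For context, plugging the corrected `facebook/levit-192` repository id into the README snippet gives roughly the following self-contained example. This is a sketch, not part of the commit: it reuses the classes and URL shown in the diff and only adds the imports plus a final prediction step for completeness.

```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests

# Sample image from the COCO 2017 validation set (same URL as in the README).
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the feature extractor and model from the corrected Hub repository.
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-192')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-192')

# Preprocess the image and run a forward pass.
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# The model predicts one of the 1,000 ImageNet-1k classes.
predicted_class_idx = outputs.logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```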