Update README.md
README.md
CHANGED
@@ -5,7 +5,7 @@ tags:
   - video-classification
 ---
 
-# TimeSformer (
+# TimeSformer (high-resolution variant, fine-tuned on Something Something v2)
 
 TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
 
@@ -24,12 +24,12 @@ from transformers import AutoImageProcessor, TimesformerForVideoClassification
 import numpy as np
 import torch
 
-video = list(np.random.randn(
+video = list(np.random.randn(16, 3, 448, 448))
 
-processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-
-model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-
+processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
+model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
 
-inputs =
+inputs = processor(images=video, return_tensors="pt")
 
 with torch.no_grad():
     outputs = model(**inputs)
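For reference, the updated usage snippet runs end to end as below. This is a minimal sketch assuming the standard `transformers` video-classification API; the prediction-decoding lines at the end (`logits`, `id2label`) are not part of the hunk shown above.

```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch

# 16 frames of 448x448 RGB noise stand in for a real decoded video clip
video = list(np.random.randn(16, 3, 448, 448))

processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")

# the processor resizes/normalizes the frames and stacks them into pixel_values
inputs = processor(images=video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits  # one score per Something Something v2 class

# map the top logit back to a human-readable label
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

The random 16 × 3 × 448 × 448 input matches the 16-frame, 448-pixel resolution this high-resolution variant expects; real use would substitute frames decoded from an actual video file.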