qaihm-bot committed
Commit
2987281
1 Parent(s): 5ff6e19

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +1 -11
README.md CHANGED
@@ -36,7 +36,7 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 50.305 ms | 0 - 3 MB | FP16 | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 46.059 ms | 0 - 8 MB | FP16 | NPU | [Swin-Small.tflite](https://huggingface.co/qualcomm/Swin-Small/blob/main/Swin-Small.tflite)
 
 
 ## Installation
@@ -93,16 +93,6 @@ device. This script does the following:
 python -m qai_hub_models.models.swin_small.export
 ```
 
- ```
- Profile Job summary of Swin-Small
- --------------------------------------------------
- Device: Samsung Galaxy S24 (14)
- Estimated Inference Time: 32.98 ms
- Estimated Peak Memory Range: 0.04-457.50 MB
- Compute Units: NPU (1609) | Total (1609)
-
-
- ```
 ## How does this work?
 
 This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/Swin-Small/export.py)
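For reference, the export command shown as context in the second hunk can be run directly. A minimal sketch, assuming the qai-hub-models package is installed from PyPI and a Qualcomm AI Hub API token has already been configured for the client (the README's Installation section is the authoritative source for setup steps):

```
# Install AI Hub Models (some models may require extra dependencies;
# see the README's Installation section).
pip install qai-hub-models

# Run the export script referenced in the diff above. Per the README context
# ("This script does the following:"), it exports Swin-Small and profiles it
# on a hosted device through Qualcomm AI Hub.
python -m qai_hub_models.models.swin_small.export
```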