Can you provide the 256x256 version?

#1
by RemiduChalard - opened

Hello,
Thank you for providing the model. The inference time is quite high; at 256x256 input, it should be more acceptable for real-time use.
It would be wonderful if you could provide that version.

Best regards,

Hi @RemiduChalard ,

Our sample inference time is on 518x518 input size.

I just tried this with ONNX on X Elite with 256x256 input, and it takes 17ms, which is real-time (about 60 FPS).
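As a quick sanity check on that claim, a per-frame latency converts to throughput as FPS = 1000 / latency_ms (the 17 ms figure is the measurement quoted above):

```python
# Throughput from per-frame latency: FPS = 1000 / latency in ms.
latency_ms = 17.0  # measured on X Elite at 256x256 input (see above)
fps = 1000.0 / latency_ms
print(f"{fps:.0f} FPS")  # about 59 FPS, i.e. roughly 60
```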

Try it yourself:
pip install "qai_hub_models[depth_anything_v2]"
python -m qai_hub_models.models.depth_anything_v2.export --height 256 --width 256 --ckpt depth-anything/Depth-Anything-V2-Small-hf --device "Snapdragon X Elite CRD" --target-runtime onnx

korywat changed discussion status to closed
