Error: Could not locate file: "https://huggingface.co/onnx-community/Qwen2.5-0.5B-Instruct/resolve/main/onnx/decoder_model_merged_quantized.onnx".

#1 opened by hamidprogrammeur

Hello, I am getting the following error when using onnx-community/Qwen2.5-0.5B-Instruct with the text-generation pipeline:

Error: Could not locate file: "https://huggingface.co/onnx-community/Qwen2.5-0.5B-Instruct/resolve/main/onnx/decoder_model_merged_quantized.onnx".
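For context, here is roughly how I am calling it (a minimal sketch using the @xenova/transformers v2 pipeline API; the prompt is just a placeholder):

```js
import { pipeline } from '@xenova/transformers';

// Loading the pipeline is where the file lookup fails.
const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
);

// Placeholder prompt for illustration.
const output = await generator('Write a haiku about autumn.', {
  max_new_tokens: 64,
});
console.log(output);
```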

I'm getting the same error.

Did you figure out a solution, @hamidprogrammeur?

Does the model file name need to be changed in this repo?

Perhaps it needs to be decoder_model_merged_quantized.onnx?

The other models seem to follow that naming convention. I tried it locally and still ran into issues, but perhaps it would at least fix the URL error.

@secret-sauce Yes, I figured that as well. I downloaded the model locally and renamed it to decoder_model_merged_quantized.onnx, but then I started getting a different error:
"Something went wrong during model construction (most likely a missing operation). Using wasm as a fallback."
So I switched to another model.

@hamidprogrammeur Which model did you switch to?

I tried the new Llama 3.2 1B and still ran into the same file-not-found error :(

You can use any of the models under the Xenova namespace; those are working fine. For example, these work:
Xenova/stablelm-2-zephyr-1_6b
Xenova/TinyLlama-1.1B-Chat-v1.0

Thanks. Hmm... those don't seem to produce the results I'm looking for.

Try this one: Xenova/Qwen1.5-0.5B-Chat

Thanks. I've tried it, but it randomly outputs Chinese characters even after refining the prompt.

I'd love to use Llama 3.2 1B if possible, but I'm still getting errors.

ONNX Community org

Sorry for the late reply. You are seeing this error because this repo uses the new file format for Transformers.js v3, which the v2 library does not recognize. You can install v3 with:

npm i @huggingface/transformers

(instead of @xenova/transformers)
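Once v3 is installed, the model should load. A minimal usage sketch (the example messages and the max_new_tokens value are just illustrative):

```js
import { pipeline } from '@huggingface/transformers';

// v3 resolves this repo's new ONNX file layout automatically.
const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
);

// Example chat-style input; adjust to your use case.
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Tell me a joke.' },
];

const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
```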

Xenova changed discussion status to closed
