pfnet/Preferred-MedLLM-Qwen-72B
#743
by tomgm - opened
Wow, what an awesome model! I will definitely try it out. So cool to see a 72B model beating GPT-4o on medical exams.
@mradermacher I had to force add it because the model author put config.json into LFS for no reason, and somehow your metadata check hates it being in LFS:
pfnet/Preferred-MedLLM-Qwen-72B: no architectures entry (malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "version https://git-...") at /llmjob/share/bin/llmjob line 1562.
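For reference, what the check got back is presumably not the real config.json but a Git LFS pointer stub, something like this (hash and size are placeholders):

```
version https://git-lfs.github.com/spec/v1
oid sha256:0123abcd...
size 1234
```

A JSON parser bails out at character offset 0 on the word "version", which is exactly the error above.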
It's queued! :D
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Preferred-MedLLM-Qwen-72B-GGUF for quants to appear.
somehow your metadata check hates it being in LFS
I suspect it downloads the pointer file. Let me try to fix that.
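Something along these lines is probably what happens (a rough sketch, not the actual llmjob code; the URL and the check are only illustrative):

```python
import requests

# Fetching an LFS-tracked file through the raw/ path returns the pointer
# stub instead of the real JSON (illustrative sketch, not the llmjob code).
url = "https://huggingface.co/pfnet/Preferred-MedLLM-Qwen-72B/raw/main/config.json"
text = requests.get(url, timeout=30).text

if text.startswith("version https://git-lfs"):
    print("got an LFS pointer instead of config.json; need to resolve it")
```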
mradermacher changed discussion status to closed
Yeah, it was using raw/main. I hope the huggingface API library resolves it as well (I fall back on the read_text method).
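A minimal sketch of that fix, assuming the Python huggingface_hub library (the real llmjob script may differ): switch from raw/main to resolve/main, or let hf_hub_download fetch the file, which follows the LFS redirect and returns the actual JSON.

```python
import json
from pathlib import Path

from huggingface_hub import hf_hub_download

# resolve/main (unlike raw/main) follows the LFS redirect, so downloading
# via the hub library yields the real config.json even when it is in LFS.
path = hf_hub_download("pfnet/Preferred-MedLLM-Qwen-72B", "config.json")
config = json.loads(Path(path).read_text())  # read_text fallback as above
print(config.get("architectures"))
```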