Why do various companies keep using a hard-coded system prompt in the chat template? (2) · #17 opened 7 months ago by pseudotensor
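A quick way to check whether a chat template hard-codes a system prompt is to render a conversation that contains no system turn and inspect the result. A minimal sketch, assuming a `transformers`-compatible tokenizer (the repo id is illustrative, not taken from this listing):

```python
from transformers import AutoTokenizer

# Repo id is illustrative; substitute the model in question.
tokenizer = AutoTokenizer.from_pretrained("nvidia/Llama3-ChatQA-1.5-70B")

# Render a conversation with no explicit system turn.
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# If the template hard-codes a system prompt, it appears here even
# though none was passed in `messages`.
print(prompt)

# The Jinja template itself can also be inspected directly.
print(tokenizer.chat_template)
```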
How do I erase this model after downloading it locally? · #16 opened 7 months ago by malihos
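For reference, files pulled through `transformers`/`huggingface_hub` land in the local Hugging Face cache, which `huggingface_hub` can scan and clean up. A minimal sketch, assuming a recent `huggingface_hub` (the repo id is illustrative):

```python
from huggingface_hub import scan_cache_dir

# Scan the local cache (defaults to ~/.cache/huggingface/hub).
cache_info = scan_cache_dir()
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str)

# Collect the cached revisions of the repo to remove (repo id illustrative).
revisions = [
    rev.commit_hash
    for repo in cache_info.repos
    if repo.repo_id == "nvidia/Llama3-ChatQA-1.5-70B"
    for rev in repo.revisions
]

# Preview and execute the deletion.
strategy = cache_info.delete_revisions(*revisions)
print("Will free", strategy.expected_freed_size_str)
strategy.execute()
```

The same cleanup can also be done interactively with `huggingface-cli delete-cache`.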
[AUTOMATED] Model Memory Requirements · #15 opened 9 months ago by model-sizer-bot
The model stops engaging in conversation (2) · #14 opened 9 months ago by Albihany
Add a mapping for the special token '<|im_end|>' to generation_config.json so generation stops when <|im_end|> is encountered · #13 opened 9 months ago by zjyhf
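The fix described above presumably maps `<|im_end|>` to an end-of-sequence id in `generation_config.json`, so that `generate()` stops when the model emits it. A minimal sketch of that kind of change, assuming a local copy of the model (the path is illustrative):

```python
from transformers import AutoTokenizer, GenerationConfig

model_dir = "./Llama3-ChatQA-1.5-70B"  # illustrative local path

tokenizer = AutoTokenizer.from_pretrained(model_dir)
gen_config = GenerationConfig.from_pretrained(model_dir)

# Resolve the id of '<|im_end|>' and append it to the eos ids so
# generation stops whenever the model emits that token.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

eos_ids = gen_config.eos_token_id
if eos_ids is None:
    eos_ids = []
elif not isinstance(eos_ids, list):
    eos_ids = [eos_ids]
gen_config.eos_token_id = eos_ids + [im_end_id]

# Write the updated generation_config.json back into the model directory.
gen_config.save_pretrained(model_dir)
```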
Add the special token '<|im_end|>' to the tokenizer so generation stops when <|im_end|> is encountered · #12 opened 9 months ago by zjyhf
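The tokenizer-side version of the same fix registers `<|im_end|>` as an added special token, so it encodes to a single id that can serve as a stop marker. A minimal sketch (path illustrative):

```python
from transformers import AutoTokenizer

model_dir = "./Llama3-ChatQA-1.5-70B"  # illustrative local path
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Register '<|im_end|>' as a special token so it is kept as one piece
# instead of being split into sub-tokens.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|im_end|>"]})
tokenizer.save_pretrained(model_dir)

print(tokenizer.convert_tokens_to_ids("<|im_end|>"))
```

If `<|im_end|>` is genuinely new to the vocabulary, the model's embedding matrix would also need resizing with `model.resize_token_embeddings(len(tokenizer))`.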
About tokens used in this model (1) · #8 opened 9 months ago by icoicqico
Multi-lang? (1) · #6 opened 9 months ago by DalyD
Upload to ollama · #5 opened 9 months ago by nonetrix
Adding `safetensors` variant of this model · #4 opened 9 months ago by lucataco
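For context, a `safetensors` variant can be produced by loading the PyTorch weights and re-saving them with safe serialization; for a 70B model this requires a machine with a lot of RAM or offloading. A minimal sketch (repo id and output path are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the existing .bin weights on CPU; repo id is illustrative.
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Llama3-ChatQA-1.5-70B",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)

# Re-save with safe serialization to produce sharded *.safetensors files.
model.save_pretrained(
    "./Llama3-ChatQA-1.5-70B-safetensors",
    safe_serialization=True,
)
```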
🚩 Report: Legal issue(s) (3) · #3 opened 9 months ago by deleted
Should be "Llama 3ChatQA-1.5-70B" (3) · #2 opened 9 months ago by just1moremodel
Concerns regarding Prompt Format (6) · #1 opened 9 months ago by wolfram