Using Arch-Function-Chat with Llama.cpp
Hi,
Could you please tell me the proper way of using this model with llama.cpp?
The chat template defined in tokenizer_config.json is the default qwen-2.5 chat template, and that is the chat template used with the GGUF files loaded by llama.cpp. However, in the example in the model card, the system prompt appears to be formatted differently from what is standard for qwen-2.5. So should I use the default chat template applied by llama.cpp (based on the qwen-2.5 chat template), or do I have to use the format from the model card? If I pass the tool descriptions to the chat completion API of llama.cpp, it formats them using the chat template, which is different from what is used in the model card.
Also, could you please tell me how multi-turn conversations should be constructed? I want to use the model in a conversation scenario where the tool responses are sent back to the model so it can generate proper replies. As I see in the qwen chat template, there is a role called tool which we have to use to specify the tool responses; is that also how this model works?
For llama.cpp, you don't have to change anything; simply use the same chat template. If you want to use this model for function calling, make sure you include a system message in the format we provided (see the usage section for more detail). To construct multi-turn conversations, you can use that chat template as well; nothing needs to be changed. The tool role will work.
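For reference, here is a minimal sketch of what that multi-turn flow can look like against llama-server's OpenAI-compatible chat completions endpoint. The endpoint URL, the get_weather tool, and the placeholder system prompt are assumptions for illustration only; copy the actual system message format from the usage section of the model card, and note that tool-call handling in llama-server may require starting it with the --jinja flag.

```python
# Sketch of a multi-turn, tool-calling exchange with llama-server's
# OpenAI-compatible API. URL, tool, and system prompt are placeholders.
import json
import requests

URL = "http://localhost:8080/v1/chat/completions"  # default llama-server address (assumption)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [
    # Replace this placeholder with the system message format from the model card.
    {"role": "system", "content": "<system prompt from the model card usage section>"},
    {"role": "user", "content": "What's the weather in Seattle?"},
]

# Turn 1: the model is expected to answer with a tool call.
resp = requests.post(URL, json={"messages": messages, "tools": tools}).json()
assistant_msg = resp["choices"][0]["message"]
messages.append(assistant_msg)

# Execute the tool yourself, then send the result back using the "tool" role.
for call in assistant_msg.get("tool_calls", []):
    args = json.loads(call["function"]["arguments"])
    result = {"city": args.get("city"), "temperature_c": 14}  # stand-in for a real lookup
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(result),
    })

# Turn 2: the model sees the tool output and produces the final reply.
resp = requests.post(URL, json={"messages": messages, "tools": tools}).json()
print(resp["choices"][0]["message"]["content"])
```

The same pattern repeats for longer conversations: append every assistant message and every tool-role result to the messages list and resend the whole history on each request, letting the server apply the chat template.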