---
license: mit
language:
- en
datasets:
- TRAC-MTRY/traclm-v3-data
- Open-Orca/SlimOrca-Dedup
---
# Model Card for traclm-v3-7b-instruct-GGUF

This repo contains GGUF quantizations of [TRAC-MTRY/traclm-v3-7b-instruct](https://huggingface.co/TRAC-MTRY/traclm-v3-7b-instruct) for running the model on low-resource hardware.

Read more about GGUF quantization [here](https://github.com/ggerganov/llama.cpp).

Read more about the unquantized model [here](https://huggingface.co/TRAC-MTRY/traclm-v3-7b-instruct).
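
The quantized files can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python, assuming one of the GGUF files from this repo has been downloaded locally; the filename shown is a hypothetical placeholder, not a file guaranteed to exist in this repo.

```python
# Minimal sketch: load a GGUF quantization with llama-cpp-python.
# The filename below is a placeholder -- substitute the actual .gguf file
# you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="traclm-v3-7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,    # context window; adjust to available memory
    n_threads=4,   # CPU threads to use
)
```

From here, prompts formatted as described in the next section can be passed to `llm` for generation.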

## Prompt Format

This model was fine-tuned with the ChatML prompt format. It is *highly* recommended that you use the same format for any interactions with the model. Failure to do so will significantly degrade performance.

ChatML Format:

```
<|im_start|>system
Provide some context and/or instructions to the model.<|im_end|>
<|im_start|>user
The user’s message goes here<|im_end|>
<|im_start|>assistant
```
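
For example, a raw ChatML prompt can be assembled with plain string formatting and passed to the quantized model. This is a sketch assuming the llama-cpp-python setup above and a hypothetical GGUF filename, with `<|im_end|>` used as a stop sequence so generation ends after the assistant turn.

```python
# Sketch: build a ChatML prompt by hand and generate with llama-cpp-python.
from llama_cpp import Llama

# Hypothetical filename -- replace with the .gguf file you downloaded.
llm = Llama(model_path="traclm-v3-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

system_msg = "You are a helpful assistant."
user_msg = "Summarize what GGUF quantization is in one sentence."

# Assemble the prompt in the ChatML layout shown above, ending with an
# open assistant turn so the model continues from there.
prompt = (
    f"<|im_start|>system\n{system_msg}<|im_end|>\n"
    f"<|im_start|>user\n{user_msg}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```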

The ChatML format can easily be applied to text you plan to process with the model using the `chat_template` included in the tokenizer. Read [here](https://huggingface.co/docs/transformers/main/en/chat_templating) for additional information.
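
As a sketch, the tokenizer published with the unquantized model should expose this template through `apply_chat_template`; exact output depends on that repo's tokenizer configuration.

```python
# Sketch: render a ChatML prompt with the HF tokenizer's chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TRAC-MTRY/traclm-v3-7b-instruct")

messages = [
    {"role": "system", "content": "Provide some context and/or instructions to the model."},
    {"role": "user", "content": "The user's message goes here"},
]

# add_generation_prompt=True appends the opening assistant turn,
# so the model knows to respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

The rendered string should match the ChatML layout shown above and can be passed directly to the model for generation.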

## Model Card Contact

MAJ Daniel C. Ruiz ([email protected])