---
base_model: unsloth/Meta-Llama-3.2-1B-Instruct
language:
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- q4_k_m
- 4bit
- sharegpt
- pretraining
- finetuning
- Q5_K_M
- Q8_0
- uss
- Perú
- Lambayeque
- Chiclayo
datasets:
- ussipan/sipangpt
pipeline_tag: text2text-generation
new_version: ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF
---

# SipánGPT 0.2 Llama 3.2 1B GGUF

- Pre-trained model for answering questions about the Universidad Señor de Sipán in Lambayeque, Peru (see the loading sketch below).
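
The tags above list GGUF quantizations (q4_k_m, Q5_K_M, Q8_0), so the model can be run locally through llama.cpp bindings. Below is a minimal inference sketch using llama-cpp-python; the repo id and the GGUF filename pattern are assumptions based on this card's title, so verify them against the repository's file list.

```python
# Minimal inference sketch with llama-cpp-python
# (pip install llama-cpp-python huggingface-hub).
# NOTE: repo_id and filename pattern are assumptions -- check the repo's
# "Files and versions" tab for the actual GGUF file name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ussipan/SipanGPT-0.2-Llama-3.2-1B-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",  # glob pattern selecting the Q4_K_M quant
    n_ctx=2048,               # context window for the session
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "¿Qué carreras ofrece la Universidad Señor de Sipán?"}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```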

## Testing the model

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644474219174daa2f6919d31/N05EuzTSicz8586lX7MaF.png)

- Because it was trained on only 5,400 conversations, the model produces frequent hallucinations (a mitigation sketch follows below).
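
A common mitigation, offered here as a general suggestion rather than something from the original card, is to decode conservatively and instruct the model to admit ignorance. A sketch, reusing the `llm` object from the loading example above:

```python
# Conservative sampling sketch: a low temperature and a grounding system
# prompt reduce (but do not eliminate) hallucinations from a small fine-tune.
# Reuses `llm` from the loading example above.
response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Responde solo sobre la Universidad Señor de Sipán. "
                    "Si no conoces la respuesta, dilo."},
        {"role": "user", "content": "¿Dónde queda el campus?"},
    ],
    temperature=0.2,  # near-greedy decoding
    top_p=0.9,
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```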

# Uploaded model

- **Developed by:** jhangmez
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.2-1B-Instruct

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
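
For readers who want to reproduce a similar setup, the sketch below follows the typical Unsloth + TRL fine-tuning recipe. All hyperparameters and the `dataset_text_field` name are illustrative assumptions, not the actual SipánGPT training configuration.

```python
# Illustrative Unsloth + TRL fine-tuning sketch -- hyperparameters are
# assumptions, not the actual SipánGPT training configuration.
# Uses the TRL <= 0.8-style SFTTrainer API that Unsloth notebooks follow.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the 4bit tag on this card
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

dataset = load_dataset("ussipan/sipangpt", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name in the dataset
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```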

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

---

## SipánGPT 0.2 Llama 3.2 1B GGUF

<div style="display: flex; align-items: center; height: fit-content;">
  <img src="https://avatars.githubusercontent.com/u/60937214?v=4" width="40" style="margin-right: 10px;"/>
  <span>Made with ❤️ by Jhan Gómez P.</span>
</div>