---
base_model: unsloth/Meta-Llama-3.2-1B-Instruct
language:
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- q4_k_m
- 4bit
- sharegpt
- pretraining
- finetuning
- Q5_K_M
- Q8_0
- uss
- Perú
- Lambayeque
- Chiclayo
datasets:
- ussipan/sipangpt
pipeline_tag: text2text-generation
new_version: ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF
---
# SipánGPT 0.2 Llama 3.2 1B GGUF
- Pre-trained model for answering questions about the Universidad Señor de Sipán in Lambayeque, Peru.
## Testing the model
![image/png](https://cdn-uploads.huggingface.co/production/uploads/644474219174daa2f6919d31/N05EuzTSicz8586lX7MaF.png)
- Because it was trained on a relatively small set of 5,400 conversations, the model produces a significant number of hallucinations.
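
You can try the GGUF quants locally with llama-cpp-python. The following is a minimal sketch only: the repo id and the Q4_K_M filename pattern are assumptions (this card does not list the exact file names), so check the repository's Files tab and adjust them.

```python
# Minimal inference sketch with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
# Assumptions: the repo id and the "*Q4_K_M.gguf" filename pattern below are
# illustrative guesses, not confirmed by this card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ussipan/SipanGPT-0.2-Llama-3.2-1B-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",                           # glob for the 4-bit quant
    n_ctx=2048,
)

messages = [
    {"role": "system", "content": "Eres un asistente de la Universidad Señor de Sipán."},
    {"role": "user", "content": "¿Qué carreras ofrece la universidad?"},
]

# A low temperature helps keep answers closer to the training data,
# which matters given the hallucination caveat above.
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.3)
print(out["choices"][0]["message"]["content"])
```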
# Uploaded model
- **Developed by:** jhangmez
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.2-1B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
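
The exact training script is not included in this card. The snippet below is only a minimal sketch of the general Unsloth + TRL SFT recipe for the stated base model and dataset; every hyperparameter, the `text` column name, and the GGUF export step are illustrative assumptions.

```python
# Minimal Unsloth + TRL SFT sketch (not the exact script used for SipánGPT).
# Base model and dataset come from this card; all hyperparameters and the
# "text" column name are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("ussipan/sipangpt", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column with chat-formatted text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Export to one of the GGUF quants mentioned in the tags (q4_k_m, q5_k_m, q8_0).
model.save_pretrained_gguf("sipangpt-gguf", tokenizer, quantization_method="q4_k_m")
```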
---
## SipánGPT 0.2 Llama 3.2 1B GGUF
<div style="display: flex; align-items: center; height: fit-content;">
<img src="https://avatars.githubusercontent.com/u/60937214?v=4" width="40" style="margin-right: 10px;"/>
  <span>Made with ❤️ by Jhan Gómez P.</span>
</div>