Uploaded model

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Agnuxo/Tinytron-Qwen2-0.5B

This model was fine-tuned using Unsloth and Hugging Face's TRL library.

Benchmark Results

This model has been fine-tuned for various tasks and evaluated on the following benchmark:

GLUE_SST-2

Accuracy: 0.5080


Model Size: 494,034,560 parameters
Required Memory: 1.84 GB
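As a quick sanity check, the required-memory figure is consistent with storing every parameter in float32 (4 bytes each), as the `32bit` suffix in the model name suggests:

```python
# Memory estimate for full-precision (float32) weights.
params = 494_034_560        # parameter count from the card
bytes_per_param = 4         # float32 = 4 bytes per parameter
mem_gib = params * bytes_per_param / 2**30
print(f"{mem_gib:.2f} GiB")  # ≈ 1.84
```

This covers the weights only; actual inference needs additional memory for activations and the KV cache.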

For more details, visit my GitHub.

Thanks for your interest in this model!


Model tree for Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_32bit

  • Base model: Qwen/Qwen2-0.5B
  • Adapter: this model
