Uploaded model

  • Developed by: Agnuxo (https://github.com/Agnuxo1)
  • License: apache-2.0
  • Finetuned from model: Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
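For readers unfamiliar with that workflow, the sketch below shows the general shape of an Unsloth + TRL fine-tuning run. It is a minimal illustration only: the base checkpoint name is taken from this card, but the dataset file, sequence length, LoRA settings, and training hyperparameters are placeholders, not the actual training configuration, and depending on your TRL version some arguments may need to move onto an SFTConfig object.

```python
# Illustrative Unsloth + TRL fine-tuning sketch; all hyperparameters and the dataset are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base checkpoint named in this card with Unsloth's patched, memory-efficient loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",
    max_seq_length=2048,   # placeholder
    load_in_4bit=True,     # QLoRA-style loading to fit modest GPUs
)

# Attach small LoRA adapters (the uploaded weights total only ~4M parameters).
model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16, lora_dropout=0.0, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset file; the actual training data is not specified here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding the raw training text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```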

Benchmark Results

This model has been fine-tuned for various tasks and evaluated on the following benchmarks:

  • Accuracy: Not Available
  • BERTScore: Not Available
  • GLUE: Not Available
  • Perplexity: Not Available
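Since no scores are reported above, a metric such as perplexity can be measured locally with Hugging Face's `evaluate` library. The snippet below is a minimal sketch: it assumes the repository loads as a standard causal language model (if it only contains adapter weights, merge them into the base model first), and the test sentences are placeholders.

```python
# Minimal perplexity check with the `evaluate` library; the input sentences are placeholders.
import evaluate

perplexity = evaluate.load("perplexity", module_type="metric")
results = perplexity.compute(
    model_id="Agnuxo/tiny-llama_Spanish_English_raspberry_pi5_16bit",  # this repository
    predictions=[
        "Hola, ¿cómo estás?",         # Spanish sample
        "Hello, how are you today?",  # English sample
    ],
)
print(results["mean_perplexity"])
```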

Model Size: 4,124,864 parameters
Required Memory: 0.02 GB
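Both figures can be verified directly from the downloaded weights. A minimal sketch, assuming the repository ships a single safetensors file (the file name below is a placeholder):

```python
# Count parameters and estimate the in-memory footprint from a safetensors checkpoint.
from safetensors import safe_open

total_params = 0
total_bytes = 0
with safe_open("model.safetensors", framework="pt") as f:  # placeholder file name
    for name in f.keys():
        tensor = f.get_tensor(name)
        total_params += tensor.numel()
        total_bytes += tensor.numel() * tensor.element_size()

print(f"{total_params:,} parameters")              # ~4,124,864 as reported above
print(f"{total_bytes / 1024**3:.2f} GB in memory")  # compare with the figure reported above
```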

For more details, visit my GitHub: https://github.com/Agnuxo1.

Thanks for your interest in this model!

