Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • _load_in_8bit: False
  • _load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: float16
  • bnb_4bit_quant_storage: uint8
  • load_in_4bit: True
  • load_in_8bit: False
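The settings above describe a standard 4-bit NF4 QLoRA load. A minimal sketch of reproducing this config with transformers is below; the base-model checkpoint name is an assumption (the card does not state it), so substitute the actual base model this adapter was trained from:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    bnb_4bit_quant_type="nf4",            # NF4 4-bit quantization
    bnb_4bit_use_double_quant=True,       # double-quantize the quantization constants
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

# Assumed base checkpoint -- replace with the model this adapter was trained on.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```

Loading requires a CUDA-capable GPU with bitsandbytes installed, since the 4-bit kernels are GPU-only.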

Framework versions

  • PEFT 0.5.0
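Since the training produced a PEFT (QLoRA) adapter rather than full model weights, inference requires attaching the adapter to the base model. A hedged sketch, assuming the adapter weights live in this repository and the base checkpoint named below (not stated in the card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumed base checkpoint; use the model this adapter was actually trained from.
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat",
    trust_remote_code=True,
)

# Attach the LoRA adapter trained with the quantization config above.
model = PeftModel.from_pretrained(
    base_model,
    "recogna-nlp/qwenbode_1_8b_chat_ultraalpaca_qlora",
)
model.eval()
```

In practice the base model would be loaded with the same 4-bit quantization config used during training before attaching the adapter.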

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard

Metric                          Value
-------------------------------------
Average                         33.68
ENEM Challenge (No Images)      31.21
BLUEX (No Images)               26.01
OAB Exams                       26.20
Assin2 RTE                      40.52
Assin2 STS                       4.64
FaQuAD NLI                      32.15
HateBR Binary                   60.10
PT Hate Speech Binary           54.14
tweetSentBR                     28.18