About
An adapter trained with QLoRA for LLaMA 2 7B Chat. This adapter gives LLaMA 2 the ability to speak other languages fully, fluently, and without interruption.
Training procedure
The following bitsandbytes quantization config was used during training (a sketch of the equivalent code follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
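
The fields above map directly onto the `BitsAndBytesConfig` class in transformers. The snippet below is a minimal sketch of that mapping, not the exact training script; it assumes a recent transformers and bitsandbytes install.

```python
# Sketch: the quantization config listed above, expressed as a
# transformers BitsAndBytesConfig (illustrative, not the original script).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # load_in_4bit: True
    bnb_4bit_quant_type="nf4",               # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,         # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,    # bnb_4bit_compute_dtype: float16
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
)
```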
Framework versions
- PEFT 0.5.0.dev0
Model tree for lusstta/LLaMa2-Qlora-AiresAi
- Base model: TinyPixel/Llama-2-7B-bf16-sharded
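
A minimal inference sketch, assuming the adapter repo (lusstta/LLaMa2-Qlora-AiresAi) and base model (TinyPixel/Llama-2-7B-bf16-sharded) shown above, with transformers, peft, and bitsandbytes installed. This is an illustration of loading a QLoRA adapter on a 4-bit base model, not an official usage recipe from the author.

```python
# Sketch: load the base model in 4-bit NF4 and attach this QLoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "lusstta/LLaMa2-Qlora-AiresAi"

# 4-bit NF4 loading, matching the quantization settings used during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Hola, ¿cómo estás?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```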