---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---

## Model Details

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- quant_method: bitsandbytes
- load_in_4bit: True
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True

### Framework versions

- PEFT 0.6.3.dev0
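Expressed in code, the quantization settings listed under "Training procedure" correspond to a `transformers` `BitsAndBytesConfig` along the lines of the sketch below. Note that `bnb_4bit_compute_dtype` is not recorded in this card; the bfloat16 value shown here is an assumption.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, as listed under "Training procedure".
# bnb_4bit_compute_dtype is NOT recorded in this card; bfloat16 is an assumed, common choice.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```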
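For the "How to Get Started with the Model" section above, which is still marked [More Information Needed], the following is a minimal loading sketch rather than a verified recipe. It assumes this repository hosts a causal-LM PEFT adapter for `mistralai/Mistral-7B-v0.1`, that the base model is loaded with the same 4-bit settings listed under "Training procedure", and that the placeholder `<this-repo-id>` is replaced with the actual adapter repository name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "<this-repo-id>"  # placeholder: replace with the repo that hosts this adapter

# Same 4-bit settings as listed under "Training procedure";
# the compute dtype is an assumption (not recorded in this card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

PEFT also provides `AutoPeftModelForCausalLM`, which can load the base model and adapter in a single call; the explicit two-step load is shown here because the base model id and quantization settings come directly from this card.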