# llama_ft

This model is a fine-tuned version of Llama-2-7B-bf16-sharded on a grocery cart dataset.

## Intended uses & limitations

The model predicts which grocery category a given item or list of items belongs to.
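A minimal inference sketch is shown below. The repo id `username/llama_ft` and the prompt format are placeholders for illustration, not details taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "username/llama_ft"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed prompt format; adjust to match how the model was fine-tuned.
prompt = "What type of grocery do the following items belong to: milk, cheddar, yogurt?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```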

## Training procedure

Fine-tuning techniques such as QLoRA and PEFT were used to train the model on the dataset on a single GPU; the trained LoRA adapters were then merged back into the base model.

The base model was loaded with the following 4-bit quantization configuration:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
```
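As a sketch of the adapter-merge step described above (the adapter directory and the exact base repo id are placeholders, not given in this card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Reload the base model, attach the trained adapters, and fold them in.
base_model = AutoModelForCausalLM.from_pretrained(
    "Llama-2-7B-bf16-sharded",  # placeholder: full base repo id not given here
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base_model, "./llama_ft-adapters")  # hypothetical adapter dir
merged = model.merge_and_unload()  # merges the LoRA weights into the base weights
merged.save_pretrained("./llama_ft")
```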

## Training hyperparameters

The following LoRA configuration was used:

```python
from peft import LoraConfig

lora_alpha = 16
lora_dropout = 0.1
lora_r = 64

peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
```
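A brief sketch of how this config is typically attached to the quantized base model; `model` refers to the 4-bit model loaded earlier, and the `prepare_model_for_kbit_training` step is a common QLoRA convention assumed here, not confirmed by the card:

```python
from peft import get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # assumed standard QLoRA prep step
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the q_proj/v_proj LoRA adapters are trainable
```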

The following training configuration was used:

```python
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 10
logging_steps = 1
learning_rate = 2e-4
max_grad_norm = 0.3
max_steps = 120
warmup_ratio = 0.03
lr_scheduler_type = "constant"
```
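For context, values like these are typically passed to a `TrainingArguments` object and a trainer. The use of trl's `SFTTrainer`, the output directory, and the `train_dataset` variable below are assumptions for illustration, not details from this card:

```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="./results",  # hypothetical output directory
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    lr_scheduler_type=lr_scheduler_type,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,  # the grocery cart dataset; variable name assumed
    peft_config=peft_config,
    args=training_args,
)
trainer.train()
```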
The merged model is published as FP16 safetensors with 6.74B parameters.