Mistral-7B-Instruct-v0.3 LoRA

This model is a LoRA fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the bkai-foundation-models/vi-alpaca dataset (a short dataset-loading sketch follows the results below). It achieves the following results on the evaluation set:

  • eval_loss: 0.4744
  • eval_runtime: 241.8465
  • eval_samples_per_second: 31.016
  • eval_steps_per_second: 3.878
  • epoch: 1.0
  • step: 10627
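
The exact evaluation split behind these numbers is not specified on this card. As a minimal sketch, loading bkai-foundation-models/vi-alpaca with the datasets library and carving out a held-out slice could look like the following; the split name, test_size, and seed are illustrative assumptions, not the author's settings.

from datasets import load_dataset

# Assumption: the dataset ships a single "train" split; the held-out fraction is illustrative.
raw = load_dataset("bkai-foundation-models/vi-alpaca", split="train")
splits = raw.train_test_split(test_size=0.1, seed=42)
train_set, eval_set = splits["train"], splits["test"]
print(len(train_set), len(eval_set))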

Usage

# !pip install accelerate bitsandbytes peft
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer
import torch

model_name = "mistralai/Mistral-7B-Instruct-v0.3"
peft_model_id = "date3k2/Mistral-7B-Instruct-vi-alpaca"

# Load the base model in 8-bit (bitsandbytes) to reduce GPU memory usage
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)


# Attach the LoRA adapter on top of the quantized base model
model.load_adapter(peft_model_id)
device = "cuda"  # inputs are moved to the GPU before generation

# The instruction below asks, in Vietnamese: "Write a recipe for cooking a delicious beef dish."
messages = [
    {
        "role": "user",
        "content": """You are a helpful Vietnamese AI chatbot. Below is an instruction that describes a task. Write a response that appropriately completes the request. Your response should be in Vietnamese.
    Instruction:
    Viết công thức để nấu một món ngon từ thịt bò.""",
    },
]

# Build the prompt with the model's chat template and move it to the GPU
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)

# Sample up to 500 new tokens and decode the full sequence
generated_ids = model.generate(model_inputs, max_new_tokens=500, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
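
If you prefer a standalone checkpoint that can be served without peft at inference time, the LoRA weights can also be merged into the base model. The snippet below is a sketch using peft's PeftModel.merge_and_unload(); it loads the base model in float16 rather than 8-bit, since merging is normally done on an unquantized copy, and the output directory name is an arbitrary placeholder.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load an unquantized (fp16) copy of the base model for merging
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "date3k2/Mistral-7B-Instruct-vi-alpaca")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model.save_pretrained("Mistral-7B-Instruct-vi-alpaca-merged")      # placeholder path
tokenizer.save_pretrained("Mistral-7B-Instruct-vi-alpaca-merged")

The merged checkpoint can then be loaded with a plain AutoModelForCausalLM.from_pretrained call, with no adapter loading required.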

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 4
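
These settings map onto a transformers TrainingArguments configuration roughly as follows. The LoRA-specific values (rank, alpha, target modules) are not listed on this card, so the LoraConfig shown is purely illustrative, and output_dir is a placeholder.

from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-vi-alpaca-lora",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=4,
)

# Illustrative only: the actual LoRA rank/alpha/target modules are not given on this card.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
)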

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
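
To approximate the environment above, the listed packages can be pinned at install time; accelerate and bitsandbytes (needed for the 8-bit usage example) are left unpinned since their versions are not listed here, and the CUDA build of PyTorch may differ by platform:

# !pip install peft==0.11.1 transformers==4.41.1 torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1 accelerate bitsandbytes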