Uploaded model

  • Developed by: beyoru
  • License: apache-2.0
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "beyoru/MCQ-qv-8"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    # System prompt (Vietnamese): "You are an intelligent assistant that can
    # create multiple-choice questions in any context."
    {"role": "system", "content": "Bạn là một trợ lý thông minh có thể tạo câu hỏi trắc nghiệm trong mọi ngữ cảnh"},
    {"role": "user", "content": "<YOUR CONTEXT>"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    max_new_tokens=512  # without this, generation stops at the short default length
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
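The slicing step above removes the prompt tokens from each generated sequence, since `generate()` returns the prompt followed by the completion. A minimal standalone illustration with toy token-id lists:

```python
# Toy prompt and generate() output: the output begins with the prompt ids.
input_ids = [[101, 102, 103]]           # hypothetical prompt token ids
generated = [[101, 102, 103, 7, 8, 9]]  # prompt + newly generated tokens

# Drop the first len(prompt) tokens from each sequence, as in the snippet above.
new_tokens = [out[len(inp):] for inp, out in zip(input_ids, generated)]
print(new_tokens)  # → [[7, 8, 9]]
```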

Notes:

  • For a small, narrow-domain dataset on which the base model already performs well, and where you want the model to retain that knowledge, it is enough to adapt only the q and v projections, following the LoRA paper.
  • Fine-tuned with LoRA rank = 8 and alpha = 16, for 1 epoch with a linear learning-rate schedule.
  • DoRA enabled.
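The notes above map onto a PEFT-style adapter configuration. A minimal sketch, assuming the `peft` library's `LoraConfig` argument names (`r`, `lora_alpha`, `target_modules`, `use_dora`); verify these against your installed `peft` version:

```python
# Hypothetical PEFT settings matching the notes: rank 8, alpha 16,
# attention query/value projections only, DoRA enabled.
lora_kwargs = dict(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # scaling factor: alpha / r = 2
    target_modules=["q_proj", "v_proj"],  # adapt q and v projections only
    use_dora=True,                        # weight-decomposed LoRA (DoRA)
)

# from peft import LoraConfig, get_peft_model
# model = get_peft_model(model, LoraConfig(**lora_kwargs))
```

Restricting `target_modules` to the q and v projections keeps the trainable parameter count small, which suits a single-epoch run on a narrow dataset.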



Model tree for beyoru/MCQ-qv-8

  • Base model: Qwen/Qwen2.5-3B
