# {full_repo_name} Merged (Optimized)
This is a merged and optimized version of a fine-tuned Qwen2.5 Coder model. It combines:

- **Base model:** {base_model_name}
- **Fine-tuned adapter:** {adapter_path}

The merged weights are stored in float16 precision and serialized for efficient loading.
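For context, float16 storage halves the memory footprint of the weights relative to float32. A quick back-of-the-envelope sketch (the parameter count below is illustrative, not this model's actual size):

```python
import numpy as np

# Illustrative 7B-parameter model: bytes needed in float32 vs float16
params = 7_000_000_000
bytes_fp32 = params * np.dtype(np.float32).itemsize  # 4 bytes per weight
bytes_fp16 = params * np.dtype(np.float16).itemsize  # 2 bytes per weight

print(f"float32: {bytes_fp32 / 1e9:.0f} GB, float16: {bytes_fp16 / 1e9:.0f} GB")
```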
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "{full_repo_name}",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # use float16 for efficiency
)
tokenizer = AutoTokenizer.from_pretrained("{full_repo_name}", trust_remote_code=True)

# Generate a code completion
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```