Fine-tuning for function calling

Base model: Qwen/Qwen2.5-1.5B-Instruct

config = {
  "rank": 16,
  "alpha": 512,
  "learning_rate": 2e-5,
  "target_modules": ["attn", "mlps"]
}
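The config above reads as a LoRA adapter configuration: `rank` and `alpha` set the low-rank update's size and scaling, and `target_modules` names the layers adapted. As a minimal sketch of what those two numbers mean (assuming the standard LoRA formulation, where the update is scaled by alpha/rank; a real fine-tune would use a library such as PEFT, and the toy adapter below is rank-1 purely for illustration):

```python
def matmul(X, Y):
    # Plain-Python matrix multiply, just for this toy example.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

rank, alpha = 16, 512
scaling = alpha / rank  # LoRA scales the low-rank update by alpha/rank = 32.0

# Frozen base weight W plus a trained low-rank pair A (d x r) and B (r x k).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[0.5],
     [0.25]]
B = [[2.0, 4.0]]

# Merged weight after fine-tuning: W' = W + (alpha/rank) * (A @ B)
delta = matmul(A, B)
W_merged = [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

With these settings the effective scaling factor is 32, so the adapter's contribution is amplified strongly relative to its raw magnitude; that interaction between `rank` and `alpha` is the main knob this config exposes.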

Part of a collection that includes beyoru/S_funcCalling.