---
license: other
---
This is a LLaMA LoRA fine-tuned on top of WizardLM-7B with this dataset: https://huggingface.co/datasets/paolorechia/medium-size-generated-tasks
It's meant mostly as a proof of concept to see how fine-tuning may improve the performance of coding agents that rely on the LangChain framework.

To use this LoRA, you can use my repo as a starting point: https://github.com/paolorechia/learn-langchain
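
If you just want to load the adapter directly, here is a minimal sketch using `transformers` and `peft`. The model and adapter identifiers below are placeholders, not values taken from this card; substitute the WizardLM-7B base checkpoint you have access to and this repository's Hub ID (or a local path to the adapter weights).

```python
# Minimal sketch (assumptions noted below), not the canonical usage from the
# learn-langchain repo: load a LLaMA-based WizardLM-7B model and apply this LoRA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholders -- replace with your actual base checkpoint and this adapter's ID/path.
base_model_id = "path-or-id-of-WizardLM-7B"
lora_id = "path-or-id-of-this-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# device_map="auto" requires `accelerate`; drop it to load on CPU.
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, lora_id)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```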