MeetPEFT: Parameter Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7B model, extending its context length from 4k to 16k tokens.

The model is fine-tuned on the MeetingBank and QMSum meeting-summarization datasets.
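The 4k-to-16k context extension builds on RoPE position interpolation: token positions are rescaled so that the extended window maps back into the position range the base model was pre-trained on. A minimal sketch of that rescaling (the function name and values are illustrative, not part of the released code):

```python
def interpolate_positions(positions, trained_len=4096, target_len=16384):
    """Rescale token positions so a model pre-trained on trained_len
    positions can address target_len positions (RoPE position
    interpolation). Illustrative sketch; trained_len/target_len match
    the 4k -> 16k extension described above."""
    factor = target_len / trained_len  # 16384 / 4096 = 4.0
    return [p / factor for p in positions]

# Position 16383 (end of the 16k window) maps to 4095.75,
# inside the original 4k range.
print(interpolate_positions([0, 4096, 16383]))  # [0.0, 1024.0, 4095.75]
```

Fine-tuning with LoRA adapters then lets the model adapt to these interpolated positions at a fraction of the cost of full fine-tuning.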


Datasets used to train MeetPEFT/MeetPEFT-7B-16K: MeetingBank, QMSum