MeetPEFT: Parameter Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7B model, extending its context length from 4k to 16k tokens.

MeetPEFT-7B-16K is fine-tuned on the MeetingBank and QMSum datasets.
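A minimal usage sketch with Hugging Face `transformers`, assuming the checkpoint is published under the `MeetPEFT/MeetPEFT-7B-16K` repo id from this card (whether that repo hosts merged weights or a standalone LoRA adapter is an assumption):

```python
# Hypothetical loading sketch; repo id taken from this card, and the
# prompt format is illustrative rather than the model's trained template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MeetPEFT/MeetPEFT-7B-16K"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

# Summarize a long meeting transcript (up to ~16k tokens after fine-tuning).
transcript = "..."  # meeting transcript text
prompt = f"Summarize the following meeting:\n{transcript}\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

Loading requires downloading the 7B checkpoint, so a GPU with roughly 14 GB of memory is needed at fp16.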

