ChatML format. The dataset contains roughly 1,400 entries ranging from 8k to 16k tokens, split three ways between long-context multi-turn chat, long-context summarization, and writing analysis. Full fine-tune using a linear RoPE scale factor of 2.0, trained for five epochs with a learning rate of 1e-5.
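
Below is a minimal usage sketch with `transformers`, assuming the checkpoint's config already stores the linear RoPE scaling (passing `rope_scaling` explicitly, as shown, only matters if it does not) and that the standard ChatML `<|im_start|>`/`<|im_end|>` tokens are in the vocabulary. The prompt string is built by hand rather than via `apply_chat_template`, in case no chat template ships with the tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openerotica/Llama-3-lima-nsfw-16k-test"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Linear RoPE scaling with factor 2.0 stretches Llama 3's 8k base
# context window to ~16k, matching the dataset's 8k-16k entry lengths.
# Only needed as an override if the config doesn't already carry it.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "linear", "factor": 2.0},
    device_map="auto",
)

# ChatML wraps each turn in <|im_start|>{role}\n...<|im_end|>.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the story above in one paragraph."},
]
prompt = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
) + "<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Stop on <|im_end|>; this assumes the token exists in the vocab
# (convert_tokens_to_ids falls back to the unk id otherwise).
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
outputs = model.generate(**inputs, max_new_tokens=512, eos_token_id=im_end_id)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```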

