This is an extended-context (16K) version of Llama 3 8B (the base model, not the instruct variant). It was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.
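For reference, the training data can be pulled straight from the Hugging Face Hub. A minimal sketch using the `datasets` library; the `train` split name is an assumption about the dataset's layout:

```python
from datasets import load_dataset

# Long-context instruction data used for the fine-tune (see above).
# The "train" split name is an assumption about this dataset's layout.
dataset = load_dataset("Yukang/LongAlpaca-16k-length", split="train")

print(dataset)     # column names and row count
print(dataset[0])  # one instruction/response record
```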

To support the longer context window, rope_theta was raised to 1000000.0 (from Llama 3's default of 500000.0). Training was done with Axolotl.
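The model loads with the standard `transformers` API, and the raised RoPE base is visible in the model config. A minimal sketch, not an official usage example: the `max_position_embeddings` value shown is an assumption based on the 16K claim, `torch.bfloat16` assumes a reasonably recent GPU, and `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mattshumer/Llama-3-8B-16K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes an Ampere-or-newer GPU
    device_map="auto",           # assumes accelerate is installed
)

# The extended context comes from the scaled RoPE base frequency;
# both values live in the model config.
print(model.config.rope_theta)               # 1000000.0
print(model.config.max_position_embeddings)  # expected 16384 (assumption)

# Plain completion, since this is the base model rather than the instruct variant.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```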
