Model Details

Model Architecture

urLLM-KO_EN-2.7B is an auto-regressive language model built on the optimized transformer architecture of princeton-nlp/Sheared-LLaMA-2.7B.
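
As a minimal usage sketch, the model can be loaded like any other causal language model in the Hugging Face transformers library. This assumes the URP/urllm-ko_en-2.7b repository id shown on this card; the prompt and generation settings below are illustrative, not recommendations from the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this model card.
model_id = "URP/urllm-ko_en-2.7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision is usually enough for a 2.7B model
    device_map="auto",          # requires the accelerate package
)

# Auto-regressive generation: the model predicts one token at a time.
prompt = "대한민국의 수도는"  # "The capital of South Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```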

Training Corpus

The model was trained on selected datasets from the Modu Corpus, Korean Wikipedia, and Kaggle English News (approximately 36 GB in total).

Vocab Expansion

The expanded vocabulary size is 51,385.
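
The expanded vocabulary can be verified directly from the released tokenizer. A short sketch, again assuming the repository id from this card; the 32,000-token figure in the comment refers to the standard LLaMA tokenizer that the Sheared-LLaMA base model inherits:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("URP/urllm-ko_en-2.7b")

# Should print 51385, versus 32000 for the original LLaMA tokenizer.
print(len(tokenizer))
```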

Model Card Contact

For errors or additional questions about details in this model card, contact [email protected].
