
Experimental Development Models

These development models are intended strictly for experimentation and testing.

They were trained using our pre-trained BPE tokenizer, which has a vocabulary size of 61,440.
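
As a quick sanity check, the tokenizer can be loaded through transformers and its vocabulary size inspected. A minimal sketch, assuming the repository id OuteAI/Oute-Dev-1B-Checkpoint-40B (listed on this card) and a standard transformers tokenizer layout:

```python
from transformers import AutoTokenizer

# Repository id as listed on this card; swap in the 0.7B checkpoint if needed.
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Oute-Dev-1B-Checkpoint-40B")

# The card states a BPE vocabulary of 61,440 tokens.
print(tokenizer.vocab_size)
```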

Model Details:

These models were developed for internal testing and did not undergo extensive training. Output quality is not suitable for production use or serious applications; expect inconsistent and potentially low-quality outputs.
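
For a quick look at what these checkpoints produce, here is a minimal generation sketch. It assumes the checkpoints load through standard llama support in transformers (the repository is tagged llama/Safetensors); the prompt and sampling parameters are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OuteAI/Oute-Dev-1B-Checkpoint-40B"  # or the 0.7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float32)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```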

Benchmark Performance:

| Benchmark          | Oute-Dev-0.7B-Checkpoint-40B | Oute-Dev-1B-Checkpoint-40B |
|--------------------|------------------------------|----------------------------|
| ARC-C (0-shot)     | 28.24                        | 26.19                      |
| ARC-E (0-shot)     | 55.13                        | 57.32                      |
| HellaSwag (0-shot) | 41.20                        | 43.70                      |
| PIQA (0-shot)      | 68.39                        | 69.59                      |
| Winogrande (0-shot)| 54.14                        | 50.51                      |
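
The card does not state how these scores were produced. As a sketch of one way to re-run a comparable zero-shot evaluation, here is an example using EleutherAI's lm-evaluation-harness (an assumption on our part, not the card's documented procedure):

```python
# Sketch only: assumes lm-eval (>= 0.4) is installed and that the checkpoint
# loads as a standard Hugging Face causal LM.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OuteAI/Oute-Dev-1B-Checkpoint-40B,dtype=float32",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```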

Disclaimer

By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.

Oute-Dev-1B-Checkpoint-40B: 1.11B params, F32 tensors, Safetensors format.
