Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
- Finetuned on meta-llama/Meta-Llama-3.1-8B-Instruct for best available general performance
- Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning
Version
This is the 2024-09-16 release of Shining Valiant 2 for Llama 3.1 8b.
We've improved and open-sourced our new baseline science-instruct dataset. This release features improvements in physics, chemistry, biology, and computer science.
Future upgrades will continue to expand Shining Valiant's technical knowledge base.
Help us and recommend Shining Valiant 2 to your friends!
Prompting Guide
Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
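For reference, the Llama 3.1 Instruct prompt format that the pipeline renders from `messages` can be sketched in plain Python as below. This is an illustrative helper of ours, not part of transformers; in practice the pipeline (or `tokenizer.apply_chat_template`) handles this for you:

```python
def render_llama31_prompt(messages):
    """Sketch of the Llama 3.1 Instruct chat format.

    Each message becomes a role header followed by its content and an
    end-of-turn token; the prompt ends with an open assistant header
    for the model to complete.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Generation prompt: assistant header left open for the model's reply
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```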
The Model
Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.
The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia and general chat capability using sequelbox/Supernova.
Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! (As a general note: we're hoping to replace and open-source this part of Shining Valiant's dataset with synthetic data soon!)
Shining Valiant 2 is created by Valiant Labs.
Follow us on X for updates on our models!
We care about open source, for everyone to use.
We encourage others to finetune further from our models.
Evaluation results
| Benchmark (5-shot) | acc (self-reported) |
|---|---|
| Winogrande | 77.35 |
| MMLU College Biology | 76.39 |
| MMLU High School Biology | 79.03 |
| MMLU College Chemistry | 50.00 |
| MMLU High School Chemistry | 53.20 |
| MMLU College Physics | 43.14 |
| MMLU High School Physics | 42.38 |
| MMLU College Computer Science | 55.00 |
| MMLU High School Computer Science | 66.00 |
| MMLU STEM | 55.57 |
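As a quick summary of the subject-level numbers above, the eight per-subject MMLU accuracies can be averaged (values copied from the self-reported results; this unweighted mean is only illustrative and is not the same as the official MMLU STEM aggregate, which weights subjects by question count):

```python
# Self-reported 5-shot accuracies from the evaluation results above
mmlu_subjects = {
    "College Biology": 76.39,
    "High School Biology": 79.03,
    "College Chemistry": 50.00,
    "High School Chemistry": 53.20,
    "College Physics": 43.14,
    "High School Physics": 42.38,
    "College Computer Science": 55.00,
    "High School Computer Science": 66.00,
}

# Unweighted mean over the eight listed subjects
mean_acc = sum(mmlu_subjects.values()) / len(mmlu_subjects)
print(f"{mean_acc:.2f}")  # -> 58.14
```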