---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-70b
- llama-3
- llama-3-instruct
- llama-3-instruct-70b
- 70b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- logic
- rationality
- advanced
- expert
- technical
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
datasets:
- sequelbox/Celestia
- sequelbox/Spurline
- sequelbox/Supernova
model_type: llama
model-index:
- name: Llama3.1-70B-ShiningValiant2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.93
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Biology (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 93.75
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Biology (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 91.94
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU Conceptual Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.7
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.78
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.91
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Chemistry (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 55
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Chemistry (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.86
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU Astronomy (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 89.47
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Computer Science (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 66
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 53.55
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 52.39
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 27.19
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.02
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.48
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.37
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
license: llama3.1
---
Shining Valiant 2 is a chat model built on Llama 3.1 70b, finetuned on our data for friendship, insight, knowledge, and enthusiasm.
- Finetuned from meta-llama/Meta-Llama-3.1-70B-Instruct for the best available general performance
- Trained on a variety of high-quality, open-source data focused on science, engineering, technical knowledge, and structured reasoning
- Also available for Llama 3.1 8b and Llama 3.2 3b!
## Version
This is the 2024-10-30 release of Shining Valiant 2 for Llama 3.1 70b.
This release uses our newest datasets, open-sourced for everyone's use, including our expanded science-instruct dataset. It features improvements in logical thinking and structured reasoning, as well as in physics, chemistry, biology, astronomy, Earth science, computer science, and information theory.
Future upgrades will continue to expand Shining Valiant's technical knowledge base.
Help us and recommend Shining Valiant 2 to your friends!
## Prompting Guide
Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-70B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "What is the role of lysosomal enzymes in the regulation of cellular processes?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
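For reference, the Llama 3.1 Instruct chat format that the pipeline applies under the hood (via the tokenizer's chat template) can be rendered by hand. The special tokens below are the standard Llama 3.1 ones; `format_llama31_prompt` is a hypothetical helper written for illustration, not part of this repo:

```python
def format_llama31_prompt(messages):
    """Render chat messages in the Llama 3.1 Instruct format.

    Each turn is wrapped in <|start_header_id|>role<|end_header_id|>,
    two newlines, the content, then <|eot_id|>. The string ends with an
    open assistant header so the model generates the reply.
    """
    parts = ["<|begin_of_text|>"]
    for message in messages:
        parts.append(
            f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
            f"{message['content']}<|eot_id|>"
        )
    # Cue the model to produce the assistant turn
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = format_llama31_prompt(
    [
        {"role": "system", "content": "You are an AI assistant."},
        {"role": "user", "content": "What do lysosomal enzymes do?"},
    ]
)
print(prompt)
```

In practice you should rely on `tokenizer.apply_chat_template` (or the pipeline, as above) rather than hand-building prompts; the sketch is only to show what the format looks like.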
## The Model
Shining Valiant 2 is built on top of Llama 3.1 70b Instruct.
The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia, complex reasoning using sequelbox/Spurline, and general chat capability using sequelbox/Supernova.
We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.
Shining Valiant 2 is created by Valiant Labs.
Follow us on X for updates on our models!
We care about open source. For everyone to use.
We encourage others to finetune further from our models.