Reverb-7b
Part of the Reverb collection: Ozone's most advanced series of AI language models yet.
Reverb-7b is a 7-billion-parameter causal language model developed by Ozone AI, designed for text generation and various downstream tasks. It is the third model release from Ozone AI.
Reverb-7b is intended for research and chat use in natural language processing; potential use cases include general-purpose text generation and conversational assistants.
Limitations: as with other language models of this size, Reverb-7b can produce incorrect or biased output, and its responses should be verified before being relied upon.
Example usage with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
model_name = "ozone-ai/Reverb-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and generate a continuation
prompt = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(input_ids, max_length=50)

# Decode and print the generated text
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```
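Since Reverb-7b is built with Qwen and intended for chat, its tokenizer most likely ships a chat template. The sketch below is an assumption rather than part of the original card: it shows how a conversation could be formatted with `tokenizer.apply_chat_template` and sampled; the message contents and sampling parameters are illustrative, not official recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ozone-ai/Reverb-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical chat usage: assumes the tokenizer provides a (Qwen-style) chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a causal language model is in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker before generating
    return_tensors="pt",
)

# Illustrative sampling parameters, not recommendations from the model card
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens (skip the prompt portion)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```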
The following table shows the performance of Reverb-7b on the MMLU Pro benchmark:
| Benchmark | Category | Accuracy |
|---|---|---|
| MMLU Pro | Average (all categories) | 0.4006 |
| MMLU Pro | Biology | 0.6904 |
| MMLU Pro | Business | 0.3143 |
| MMLU Pro | Chemistry | 0.2314 |
| MMLU Pro | Computer Science | 0.4000 |
| MMLU Pro | Economics | 0.5758 |
| MMLU Pro | Engineering | 0.3148 |
| MMLU Pro | Health | 0.5183 |
| MMLU Pro | History | 0.4934 |
| MMLU Pro | Law | 0.3315 |
| MMLU Pro | Math | 0.2983 |
| MMLU Pro | Other | 0.4372 |
| MMLU Pro | Philosophy | 0.4409 |
| MMLU Pro | Physics | 0.2910 |
| MMLU Pro | Psychology | 0.5990 |
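As a quick sanity check on the table above, the snippet below recomputes the unweighted (macro) mean of the per-category scores. It comes out around 0.424, slightly above the reported 0.4006, which suggests the reported average is likely weighted by the number of questions per MMLU Pro category; that interpretation is an assumption, not something stated in the card.

```python
# Per-category MMLU Pro accuracies copied from the table above
scores = {
    "Biology": 0.6904, "Business": 0.3143, "Chemistry": 0.2314,
    "Computer Science": 0.4000, "Economics": 0.5758, "Engineering": 0.3148,
    "Health": 0.5183, "History": 0.4934, "Law": 0.3315, "Math": 0.2983,
    "Other": 0.4372, "Philosophy": 0.4409, "Physics": 0.2910, "Psychology": 0.5990,
}

# Unweighted (macro) mean across the 14 categories
macro_avg = sum(scores.values()) / len(scores)
print(f"Macro average: {macro_avg:.4f}")  # ~0.4240 vs. the reported 0.4006
```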
For questions or feedback, please contact us at [email protected] or visit https://ozone-ai.com.
Built with Qwen. Users of this model must agree to the Qwen license agreement.
Vneq - CEO @ Ozone AI
Tristan - CEO @ ShuttleAI, CTO @ Ozone AI