# Neuronx model for Zephyr 7B β

This repository contains AWS Inferentia2 and `neuronx` compatible checkpoints for [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). You can find detailed information about the base model on its Model Card.

This model has been exported to the `neuron` format using the specific `input_shapes` and compiler parameters detailed in the sections below. Please refer to the 🤗 optimum-neuron documentation for an explanation of these parameters.
## Usage on Amazon SageMaker

_Coming soon._
## Usage with 🤗 optimum-neuron

```python
from optimum.neuron import pipeline

pipe = pipeline('text-generation', 'aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2')

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
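To illustrate what `apply_chat_template` does here, the sketch below approximates the Zephyr-style prompt layout in plain Python, without loading the tokenizer. This is an assumption for illustration only: the authoritative template is the one stored in the tokenizer config, and `format_chat` is a hypothetical helper, not part of any library.

```python
# Hypothetical sketch of Zephyr-style chat formatting; the real template
# is defined in the tokenizer config and may differ in detail.
def format_chat(messages, add_generation_prompt=True):
    prompt = ""
    for m in messages:
        # Each turn gets a role header and is closed with the EOS marker.
        prompt += f"<|{m['role']}|>\n{m['content']}</s>\n"
    if add_generation_prompt:
        # Leave an open assistant header so the model continues from there.
        prompt += "<|assistant|>\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(format_chat(messages))
```

The important detail is the trailing open `<|assistant|>` header: with `add_generation_prompt=True`, the prompt ends where the model's reply should begin.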
This repository contains tags specific to versions of `neuronx`. When using it with 🤗 `optimum-neuron`, load the repository revision that matches the version of `neuronx` you are running, so that the correct serialized checkpoints are loaded.
## Arguments passed during export

**input_shapes**

```json
{
  "batch_size": 4,
  "sequence_length": 2048
}
```

**compiler_args**

```json
{
  "auto_cast_type": "fp16",
  "num_cores": 2
}
```
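For reference, a checkpoint with these shapes and compiler settings could be reproduced with the `optimum-cli export neuron` command. This is a sketch, not the exact command used for this repository: the output directory name is only an example, and the export itself requires an Inferentia2/Trainium instance with the Neuron SDK installed.

```shell
# Hypothetical re-export of the base model with the parameters listed above
# (requires a Neuron-enabled instance, e.g. inf2, with the Neuron SDK installed).
optimum-cli export neuron \
  --model HuggingFaceH4/zephyr-7b-beta \
  --batch_size 4 \
  --sequence_length 2048 \
  --num_cores 2 \
  --auto_cast_type fp16 \
  zephyr-7b-beta-neuron/
```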
## Model tree

Base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Evaluation results

| Benchmark | Metric | Split | Score | Source |
|---|---|---|---|---|
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 62.031 | Open LLM Leaderboard |
| HellaSwag (10-shot) | normalized accuracy | validation | 84.356 | Open LLM Leaderboard |
| DROP (3-shot) | F1 score | validation | 9.662 | Open LLM Leaderboard |
| TruthfulQA (0-shot) | mc2 | validation | 57.449 | Open LLM Leaderboard |
| GSM8K (5-shot) | accuracy | test | 12.737 | Open LLM Leaderboard |
| MMLU (5-shot) | accuracy | test | 61.070 | Open LLM Leaderboard |
| Winogrande (5-shot) | accuracy | validation | 77.743 | Open LLM Leaderboard |
| AlpacaEval | win rate | — | 0.906 | source |
| MT-Bench | score | — | 7.340 | source |