# Rubra Llama-3 70B AWQ

Original model: rubra-ai/Meta-Llama-3-70B-Instruct

AWQ quant config:

```python
{ "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }
```

## Model description

This model is the result of further post-training of meta-llama/Meta-Llama-3-70B. It is designed for strong performance on instruction-following tasks and complex interactions, including multi-turn function calling and detailed conversations.
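To make the multi-turn function-calling flow concrete, here is an illustrative sketch of the shape such an exchange takes. The role names and tool-call fields below are assumptions for illustration; the exact format this model emits is defined by its chat template, so see the usage docs linked under "How to use" for the canonical format.

```python
# Illustrative multi-turn function-calling exchange. The role/field names are
# assumptions -- the model's chat template defines the real format.
import json

conversation = [
    {"role": "user", "content": "What's the weather in Boston right now?"},
    # Turn 1: instead of answering directly, the model requests a tool call.
    {"role": "assistant", "tool_calls": [{
        "type": "function",
        "function": {"name": "get_weather", "arguments": {"city": "Boston"}},
    }]},
    # The runtime executes the tool and feeds the result back to the model.
    {"role": "tool", "name": "get_weather", "content": "72F and sunny"},
    # Turn 2: the model composes a final answer from the tool output.
    {"role": "assistant", "content": "It's currently 72F and sunny in Boston."},
]

print(json.dumps(conversation, indent=2))
```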

## Training Data

The model underwent additional training on a proprietary dataset encompassing diverse instruction-following, chat, and function calling data. This post-training process enhances the model's ability to integrate tools and manage complex interaction scenarios effectively.

## How to use

Refer to https://docs.rubra.ai/inference/transformers for usage instructions. Feel free to ask questions or open issues in our GitHub repo: https://github.com/rubra-ai/rubra
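As a starting point, here is a minimal loading-and-generation sketch with transformers, which dispatches to AWQ kernels when the autoawq package is installed. The repo id is a placeholder assumption; substitute the actual AWQ repository and prefer the commands in the docs above.

```python
# Minimal sketch: load the AWQ checkpoint and run one chat turn.
# The repo id is a placeholder assumption; note that a 70B 4-bit model
# still needs roughly 40 GB of GPU memory across available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rubra-ai/Meta-Llama-3-70B-Instruct-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # compute in FP16
    device_map="auto",          # shard across available GPUs
)

messages = [{"role": "user", "content": "Write one sentence about llamas."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```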

## Limitations and Bias

While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.

## Ethical Considerations

Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.

## Acknowledgements

We would like to thank Meta for releasing the Llama 3 base model.

## Contact Information

For questions or comments about the model, please reach out to the Rubra team.

## Citation

If you use this work, please cite it as:

```bibtex
@misc{rubra_ai_2024,
    author       = { Sanjay Nadhavajhala and Yingbei Tong },
    title        = { Rubra-Meta-Llama-3-70B-Instruct },
    year         = 2024,
    url          = { https://huggingface.co/rubra-ai/Meta-Llama-3-70B-Instruct },
    doi          = { 10.57967/hf/2643 },
    publisher    = { Hugging Face }
}
```