
This repository contains a pruned and orthogonalized version of the Llama 3 8B model. The model was created by applying the pruning method described in the PruneGPT repository to remove unimportant layers from the original Llama 3 8B model. Additionally, the model components were subjected to Orthogonal Activation Steering (OAS), also known as "abliteration", to mitigate refusals and improve versatility across a range of scenarios.
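As a rough illustration of layer pruning in that spirit (this is not the actual PruneGPT code, which selects layers using its own importance metric; the layer indices below are hypothetical):

import torch
from transformers import AutoModelForCausalLM

# Load a Llama-style model; Meta-Llama-3-8B is a gated repository
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.float16
)

# Hypothetical indices of decoder layers judged unimportant by a pruning metric
layers_to_drop = {24, 25, 26, 27}

# Keep only the remaining decoder layers and update the config to match
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in layers_to_drop
)
model.config.num_hidden_layers = len(model.model.layers)

Dropping contiguous late-middle layers is a common heuristic in layer-pruning work, but the actual layers removed here were chosen by the PruneGPT method, not hard-coded.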

Model Description

The pruned and orthogonalized Llama 3 8B model uses grimjim/Llama-3-Oasis-v1-OAS-8B as its base. That base model is itself a merge of pre-trained language models that had already been subjected to Orthogonal Activation Steering (OAS) to mitigate refusals and improve versatility across scenarios.
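Conceptually, OAS ("abliteration") identifies a direction in activation space associated with refusals and removes it from the model's weight matrices, so activations can no longer be steered along it. A minimal sketch of that projection, assuming a weight matrix W and a hypothetical refusal direction r that has already been extracted (e.g., by contrasting activations on harmful and harmless prompts):

import torch

def orthogonalize(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    # Remove the component of W's outputs along the refusal direction r:
    # W <- W - r r^T W, with r normalized to unit length
    r = r / r.norm()
    return W - torch.outer(r, r) @ W

In practice the projection is applied to selected matrices (for example attention output and MLP down projections) across layers; extracting r is the substantive step and is not shown here.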

The merge was performed using the task arithmetic merge method, with mlabonne/NeuralDaredevil-8B-abliterated serving as the merge base. On top of this merged model, we applied the pruning method described in the PruneGPT repository to remove unimportant layers, yielding a more efficient and compact model of roughly 6.5B parameters (down from the original 8B), stored as FP16 safetensors. The final model is versatile and suitable for both positive and negative roleplay scenarios as well as storytelling. However, please exercise caution when using this model.
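For reference, task arithmetic merges models by adding weighted parameter deltas relative to a shared base, i.e., merged = base + Σ_i w_i · (model_i − base). A minimal sketch over state dicts follows; the donor models and weights are placeholders, not the actual merge recipe, which would normally be run with a dedicated merging toolkit:

import torch
from transformers import AutoModelForCausalLM

def task_arithmetic_merge(base_id, donor_ids, weights):
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
    base_sd = base.state_dict()
    merged = {k: v.clone() for k, v in base_sd.items()}
    for donor_id, w in zip(donor_ids, weights):
        donor_sd = AutoModelForCausalLM.from_pretrained(
            donor_id, torch_dtype=torch.float16
        ).state_dict()
        for k in merged:
            # merged = base + sum_i w_i * (donor_i - base), per tensor
            merged[k] += w * (donor_sd[k] - base_sd[k])
    base.load_state_dict(merged)
    return base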

The model is built upon the Meta Llama 3 architecture.

Usage

You can load this model with the HuggingFace Transformers library in Python. Here's an example:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "GazTrab/Pruned-Llama-3-Oasis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load in FP16, matching the checkpoint's stored precision
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Generate text from a prompt (sampling settings here are illustrative)
inputs = tokenizer("Tell me a short story about a lighthouse.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Please refer to the HuggingFace Transformers documentation for more details on how to use the model for various tasks.

Acknowledgements

We would like to acknowledge the following resources and repositories that were used in the creation of this model:

- The PruneGPT repository, for the layer-pruning method
- grimjim/Llama-3-Oasis-v1-OAS-8B, the OAS-treated merged base model
- mlabonne/NeuralDaredevil-8B-abliterated, the base model of that merge
- Meta's Llama 3, for the underlying architecture and weights
- The HuggingFace Transformers library

License

This model is released under the Apache License 2.0.
