---
license: apache-2.0
---

# Opus-Samantha-Llama-3-8B

Opus-Samantha-Llama-3-8B is an SFT model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc).

## Process

- Original Model: [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b)
- Dataset: [macadeliccc/opus_samantha](https://huggingface.co/datasets/macadeliccc/opus_samantha)

Training hyperparameters (a training sketch follows this list):

- Learning Rate: 2e-05
- Steps: 2772
- Warmup Steps: 277
- Per Device Train Batch Size: 2
- Gradient Accumulation Steps: 1
- Optimizer: paged_adamw_8bit
- Max Sequence Length: 4096
- Max Prompt Length: 2048
- Max Length: 2048
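
For reference, these hyperparameters correspond roughly to a TRL `SFTTrainer` setup like the one below. This is a minimal sketch, not the actual AutoSloth training script: the `dataset_text_field` value is an assumption about the dataset's column name, and newer `trl` releases move `max_seq_length` and `dataset_text_field` onto `SFTConfig`.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Fine-tuning dataset listed above
dataset = load_dataset("macadeliccc/opus_samantha", split="train")

args = TrainingArguments(
    output_dir="opus-samantha-llama-3-8b",
    learning_rate=2e-5,
    max_steps=2772,
    warmup_steps=277,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    optim="paged_adamw_8bit",  # paged 8-bit AdamW via bitsandbytes
)

trainer = SFTTrainer(
    model="unsloth/llama-3-8b",  # base model listed above
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption: replace with the dataset's actual text column
    max_seq_length=4096,
)
trainer.train()
```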

## 💻 Usage

```python
# Install the dependency first (in a notebook cell or shell):
#   pip install -qU transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "macadeliccc/Opus-Samantha-Llama-3-8B"

# Load the tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a text-generation pipeline from the loaded model and tokenizer
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
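
On a GPU, it is usually worth loading the weights in half precision and letting 🤗 Transformers place them automatically. A minimal variant of the loading step, assuming `torch` and `accelerate` are installed:

```python
import torch
from transformers import AutoModelForCausalLM

# bfloat16 halves memory vs. float32; device_map="auto" requires accelerate
model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/Opus-Samantha-Llama-3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```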

<div align="center">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" height="50" align="center" />
</div>