---
license: mit
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- sgd
model-index:
- name: mayo
results: []
datasets:
- nroggendorff/mayo
language:
- en
---
# Mayonnaise LLM
Mayo is a language model fine-tuned on the [Mayo dataset](https://huggingface.co/datasets/nroggendorff/mayo) with supervised fine-tuning (SFT) using the [TRL](https://github.com/huggingface/trl) (Transformer Reinforcement Learning) library. It is based on the [TinyLlama/TinyLlama-1.1B-Chat-v1.0 model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
## Features
- Fine-tuned with SFT via the TRL library (see the training sketch below)
- Supports the English language
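The exact training configuration is not published in this card. As a rough orientation only, a minimal SFT run with TRL's `SFTTrainer` might look like the sketch below, assuming a recent version of TRL; the hyperparameters and output directory are illustrative assumptions, not the actual recipe:
```python
# Hypothetical SFT sketch with TRL's SFTTrainer; the hyperparameters are
# illustrative assumptions, not the configuration actually used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and base model named in this card.
dataset = load_dataset("nroggendorff/mayo", split="train")

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    args=SFTConfig(output_dir="mayo", num_train_epochs=1),
)
trainer.train()
```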
## Usage
To use Mayo, load the model with the Hugging Face Transformers `pipeline`:
```python
from transformers import pipeline

# Load the model as a chat-capable text-generation pipeline.
pipe = pipeline("text-generation", model="nroggendorff/mayo")

question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

# The pipeline returns the whole conversation; the reply is the last message.
response = pipe(conv, max_new_tokens=32)[0]["generated_text"][-1]["content"]
print(response)
```
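Passing a list of chat messages makes the pipeline apply the model's chat template automatically and return the full conversation, which is why the snippet reads the `content` of the last message.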
To load the model with 4-bit quantization via bitsandbytes:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with double quantization and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id = "nroggendorff/mayo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# TinyLlama chat format; the trailing <|assistant|> tag cues the model to answer.
prompt = "<|user|>\nWhat color is the sky?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
```
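Note that 4-bit loading typically requires a CUDA-capable GPU with the `bitsandbytes` and `accelerate` packages installed.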
## License
This project is licensed under the MIT License.