---
license: mit
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- sgd
model-index:
- name: mayo
  results: []
datasets:
- nroggendorff/mayo
language:
- en
---

# Mayonnaise LLM

Mayo is a language model fine-tuned on the [Mayo dataset](https://huggingface.co/datasets/nroggendorff/mayo) with Supervised Fine-Tuning (SFT) using the [TRL](https://github.com/huggingface/trl) (Transformer Reinforcement Learning) library. It is based on the [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model.
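
For context, a fine-tune of this shape can be reproduced with TRL's `SFTTrainer`. The sketch below is illustrative only and assumes a recent TRL version; the dataset split, output directory, and all hyperparameters are assumptions, not the published training script.

```python
# Illustrative SFT run with TRL; settings here are assumptions,
# not the exact configuration used to train Mayo.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("nroggendorff/mayo", split="train")

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    args=SFTConfig(output_dir="mayo"),
)
trainer.train()
```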

## Features

- Fine-tuned from TinyLlama-1.1B-Chat with TRL's supervised fine-tuning (SFT) trainer
- English-language chat model

## Usage

To use Mayo, load it with the Hugging Face Transformers `pipeline` API. Passing a list of chat messages lets the pipeline apply the model's chat template automatically:

```python
from transformers import pipeline

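# Load the fine-tuned model as a text-generation pipeline.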
pipe = pipeline("text-generation", model="nroggendorff/mayo")

question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

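# The pipeline returns the full conversation; the last message is the model's reply.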
response = pipe(conv, max_new_tokens=32)[0]['generated_text'][-1]['content']
print(response)
```

To load the model with 4-bit quantization (lower memory use at a small cost in output quality):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

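# Quantize weights to 4-bit NF4 with double quantization; compute in bfloat16.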
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id = "nroggendorff/mayo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

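# Prompt written in the model's chat format ("</s>" closes the user turn).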
prompt = "<|user|>\nWhat color is the sky?</s>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32)

generated_text = tokenizer.batch_decode(outputs)[0]
print(generated_text)
```
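
Rather than hand-writing the prompt, you can let the tokenizer build it from its chat template. This continues from the quantized example above and assumes the fine-tuned tokenizer keeps TinyLlama's chat template:

```python
# Build the prompt from the tokenizer's chat template instead of by hand.
messages = [{"role": "user", "content": "What color is the sky?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```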

## License

This project is licensed under the MIT License.