nroggendorff committed on
Commit
599980a
1 Parent(s): 2e32158

Update README.md

Files changed (1)
  1. README.md +48 -32
README.md CHANGED
@@ -1,54 +1,70 @@
  ---
- license: apache-2.0
  base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  tags:
- - trl
- - sft
- - generated_from_trainer
  model-index:
- - name: mayo
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # mayo

- This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 4
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - training_steps: 4600

- ### Training results

- ### Framework versions

- - Transformers 4.39.3
- - Pytorch 2.1.2
- - Datasets 2.18.0
- - Tokenizers 0.15.2
  ---
+ license: mit
  base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  tags:
+ - trl
+ - sft
  model-index:
+ - name: mayo
+   results: []
+ datasets:
+ - nroggendorff/mayo
+ language:
+ - en
  ---

+ # Mayonnaise LLM

+ Mayo is a language model fine-tuned on the [Mayo dataset](https://huggingface.co/datasets/nroggendorff/mayo) with Supervised Fine-Tuning (SFT) via the [TRL](https://github.com/huggingface/trl) (Transformer Reinforcement Learning) library. It is based on the [TinyLlama/TinyLlama-1.1B-Chat-v1.0 model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
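
+ For reference, a minimal sketch of how such an SFT run might be reproduced with TRL's `SFTTrainer`. The hyperparameters are taken from the previous revision of this card (learning rate 1e-4, train batch size 4, eval batch size 16, 4600 steps, linear schedule, seed 42); the exact training script is not shown in this card, and the dataset's text column name is an assumption:

+ ```python
+ from datasets import load_dataset
+ from transformers import TrainingArguments
+ from trl import SFTTrainer

+ dataset = load_dataset("nroggendorff/mayo", split="train")

+ # Hyperparameters from the previous revision of this card
+ args = TrainingArguments(
+     output_dir="mayo",
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=16,
+     learning_rate=1e-4,
+     lr_scheduler_type="linear",
+     max_steps=4600,
+     seed=42,
+ )

+ trainer = SFTTrainer(
+     model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # base model; loaded internally via from_pretrained
+     args=args,
+     train_dataset=dataset,
+     dataset_text_field="text",  # assumed column name in the dataset
+ )
+ trainer.train()
+ ```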

+ ## Features

+ - Fine-tuned with supervised fine-tuning (SFT) via the [TRL](https://github.com/huggingface/trl) library
+ - Supports English

+ ## Usage

+ To use Mayo, load the model with the Hugging Face Transformers pipeline API:

+ ```python
+ from transformers import pipeline

+ # Chat-style text generation; the pipeline applies the model's chat template.
+ pipe = pipeline("text-generation", model="nroggendorff/mayo")

+ question = "What color is the sky?"
+ conv = [
+     {"role": "system", "content": "You are a very bored real human named Noa Roggendorff."},
+     {"role": "user", "content": question},
+ ]

+ # The pipeline returns the whole conversation; the last message is the assistant's reply.
+ response = pipe(conv, max_new_tokens=2048)[0]["generated_text"][-1]["content"]
+ print(response)
+ ```

+ To run the model with 4-bit quantization (requires a CUDA GPU and the `bitsandbytes` package):

+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

+ # 4-bit NF4 quantization with nested (double) quantization and bfloat16 compute
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )

+ model_id = "nroggendorff/mayo"

+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

+ # TinyLlama-style chat format (assumed preserved by the fine-tune);
+ # the trailing <|assistant|> tag cues the model to answer.
+ prompt = "<|user|>\nWhat color is the sky?</s>\n<|assistant|>\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

+ outputs = model.generate(**inputs, max_new_tokens=10)

+ generated_text = tokenizer.batch_decode(outputs)[0]
+ print(generated_text)
+ ```
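
+ Continuing from the snippet above: if the fine-tune kept TinyLlama's chat template (an assumption), the prompt string can also be built with the tokenizer's `apply_chat_template` instead of writing the special tokens by hand:

+ ```python
+ conv = [{"role": "user", "content": "What color is the sky?"}]
+ prompt = tokenizer.apply_chat_template(conv, tokenize=False, add_generation_prompt=True)
+ ```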

+ ## License

+ This project is licensed under the MIT License.