nroggendorff committed
Commit a108913
1 Parent(s): 2d17abb

End of training

README.md CHANGED
@@ -1,44 +1,54 @@
  ---
- license: mit
  base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  tags:
- - trl
- - sft
- - generated_from_trainer
  model-index:
- - name: mayo
-   results: []
- datasets:
- - nroggendorff/mayo
- language:
- - en
  ---

- # Mayonnaise LLM

- Mayo is a language model fine-tuned on the [Mayo dataset](https://huggingface.co/datasets/nroggendorff/mayo) with the supervised fine-tuning (SFT) trainer from Hugging Face's TRL (Transformer Reinforcement Learning) library. It is based on the [TinyLlama/TinyLlama-1.1B-Chat-v1.0 model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).

- ## Features

- - Trained with supervised fine-tuning (SFT) via the TRL library
- - Supports English

- ## Usage

- To use Mayo, load the model with the Hugging Face Transformers library:

- ```python
- from transformers import pipeline

- pipe = pipeline("text-generation", model="nroggendorff/mayo")

- question = "What color is the sky?"
- conv = [
-     {"role": "system", "content": "You are a very bored real human named Noa Roggendorff."},
-     {"role": "user", "content": question},
- ]

- # The pipeline returns the full conversation; the last message is the reply.
- response = pipe(conv, max_new_tokens=2048)[0]['generated_text'][-1]['content']
- print(response)
- ```

- ## License

- This project is licensed under the MIT License.
  ---
+ license: apache-2.0
  base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  tags:
+ - trl
+ - sft
+ - generated_from_trainer
  model-index:
+ - name: mayo
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # mayo

+ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - training_steps: 112

+ ### Training results

+ ### Framework versions

+ - Transformers 4.39.3
+ - Pytorch 2.1.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
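For readers who want to reproduce a comparable run, the hyperparameters listed above can be mapped onto TRL's `SFTTrainer`. This is an illustrative sketch, not the script that produced this commit: the dataset name is taken from the earlier revision of the card, and the `text` column name is an assumption.

```python
# Hypothetical reconstruction of a run with the hyperparameters in the card
# above; NOT the author's actual training script.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Dataset name from the previous card revision (assumed to still apply).
dataset = load_dataset("nroggendorff/mayo", split="train")

args = TrainingArguments(
    output_dir="mayo",
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    max_steps=112,                   # training_steps: 112
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # base_model from the card
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",       # assumed column name in the dataset
)
trainer.train()
```

The default optimizer in `TrainingArguments` is AdamW with exactly the betas and epsilon the card reports, so only the scheduler, step count, and batch sizes need to be set explicitly.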
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4bf88e0326a6c17f1fe79d7eff7ce439ac55f29a28a42f35ba08a2a591bfe13b
+ oid sha256:5278f46e22c27a62d6c4ee7f329a16bdab2ceff2a7e63bb2f2c6fd1543317bcb
  size 4400216536
runs/May30_22-37-22_3b387eaf06d3/events.out.tfevents.1717108651.3b387eaf06d3.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:507ccd859c03f251195c22c4bb490eb544c490739cde9b0df831a5cbc446d70a
+ size 4987
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d51c1d91515fbc096cd698b5097f96e4b11d449623b4d3328e8c35434a7949c4
+ oid sha256:bf4a078541af5d6fe7b61a31b42e7811aa2944ce63cdf1004eaaaacc3fb23f58
  size 4920
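The `model.safetensors` and `training_args.bin` entries in this diff are Git LFS pointer files, not the binaries themselves: each pointer records the SHA-256 (`oid`) and byte `size` of the real blob. A minimal sketch of checking a downloaded file against such a pointer (the helper names are illustrative, not part of any LFS tooling):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def matches_pointer(payload: bytes, pointer: dict) -> bool:
    """True if the blob's byte size and SHA-256 both match the pointer."""
    return (len(payload) == pointer["size"]
            and hashlib.sha256(payload).hexdigest() == pointer["oid"])

# Pointer contents taken from the training_args.bin hunk above.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:bf4a078541af5d6fe7b61a31b42e7811aa2944ce63cdf1004eaaaacc3fb23f58\n"
    "size 4920\n"
)
print(pointer["oid"][:8], pointer["size"])  # bf4a0785 4920
```

Checking the size first is deliberate: it is free, while hashing a multi-gigabyte `model.safetensors` is not.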