crumb committed
Commit c7efecc · 1 Parent(s): 5fe3610

Create README.md

Files changed (1):
  1. README.md +48 -0

README.md ADDED
@@ -0,0 +1,48 @@
---
license: mit
datasets:
- crumb/flan-ul2-tinystories
language:
- en
---
# Tinystories-30m-UL2

*GPT-4 generated model card*

## Model Details

- **Model Name**: [crumb/opentinystories-30m-base](https://huggingface.co/crumb/opentinystories-30m-base)
- **Model Type**: GPTNeoXForCausalLM
- **Model Training Details**: The model is trained using [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories), which contains around a quarter of a million examples generated from Flan-UL2 (20b) with the prompt "Write a short story using the vocabulary of a first-grader."

## Model Description

This model is trained with the specific purpose of generating short narratives using a vocabulary limited to the level of a first-grader. In terms of complexity and language use, it is designed to produce simple, easily comprehensible text.

Because it learns from text generated by Flan-UL2 (20b), the model adopts a simple storyline layout and a minimal vocabulary, both of which are easier for a small model to learn and replicate.

## Training

The model is trained for four epochs on the [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories) dataset (inspired by [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)), created with the help of Flan-UL2 (20b) rather than the GPT-3.5/4 used for the original TinyStories. The data follows the format of a simple, first-grader-level narrative, which helps the model learn simple vocabulary and sentence structure.

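The dataset can be pulled directly from the Hub with the `datasets` library. The snippet below is only a rough sketch, assuming the usual single `train` split; it inspects the first example rather than hard-coding any column names, since those are not documented here.

```python
from datasets import load_dataset

# Download the Flan-UL2 TinyStories data from the Hugging Face Hub.
dataset = load_dataset("crumb/flan-ul2-tinystories")

# Inspect the splits, column names, and a single example.
print(dataset)
print(dataset["train"][0])
```
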
Training arguments:

```
per_device_train_batch_size=8,
gradient_accumulation_steps=16,
warmup_steps=128,
num_train_epochs=4,
learning_rate=2e-4,
eval_steps=64,
optim="adamw_torch",
```

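These appear to be keyword arguments for `transformers.TrainingArguments`. A minimal sketch of how they might map onto that object is shown below; the `output_dir` is a placeholder, and the evaluation-strategy setting implied by `eval_steps` is an assumption rather than a documented choice.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the arguments listed above.
training_args = TrainingArguments(
    output_dir="tinystories-30m-ul2",  # placeholder, not the original path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,
    warmup_steps=128,
    num_train_epochs=4,
    learning_rate=2e-4,
    eval_steps=64,  # periodic evaluation also requires a "steps" evaluation strategy
    optim="adamw_torch",
)
```

Such an object would then be passed to a `transformers.Trainer` along with the model, the tokenized dataset, and a causal-LM data collator.
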
## Usage

This model serves as a research tool for exploring the learning tendencies of smaller language models and their ability to grasp simplified language constructs. Its training set illustrates the idea that a constrained vocabulary and simple story structures are inherently easier to learn.

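As a minimal usage sketch, the checkpoint can be loaded with `transformers` like any other causal language model; the prompt and sampling settings below are arbitrary examples, not recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("crumb/opentinystories-30m-base")
model = AutoModelForCausalLM.from_pretrained("crumb/opentinystories-30m-base")

# Arbitrary example prompt in the spirit of the training data.
prompt = "Once upon a time, there was a little dog named"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
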
## Validation and Performance

The model's performance was evaluated on a held-out validation set comprising 1% of the original dataset. During evaluation, the model achieved a loss of N; during training, it achieved a loss of N.

![](https://cdn.discordapp.com/attachments/1074346695191711875/1126796435577393213/image.png)
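
A 1% held-out split of this kind can be produced with the `datasets` library; the snippet below is a sketch under the assumption of a single `train` split, and the seed is an arbitrary choice rather than the one used for the reported numbers.

```python
from datasets import load_dataset

dataset = load_dataset("crumb/flan-ul2-tinystories")

# Hold out 1% of the examples for validation.
split = dataset["train"].train_test_split(test_size=0.01, seed=42)
train_data, val_data = split["train"], split["test"]
print(len(train_data), len(val_data))
```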