---
license: mit
datasets:
  - crumb/flan-ul2-tinystories
language:
  - en
---

# Tinystories-30m-UL2

*GPT-4 generated model card*

## Model Details

- **Model Name:** GPTNeoX/flan-ul2-tinystories
- **Model Type:** GPTNeoXForCausalLM (Language Modeling)
- **Model Training Details:** The model is trained on the `crumb/flan-ul2-tinystories` dataset, which contains roughly a quarter of a million examples generated by Flan-UL2 (20B) with the prompt "Write a short story using the vocabulary of a first-grader."

## Model Description

This model is trained with the specific purpose of generating short narratives using a vocabulary limited to the level of a first-grader. In both complexity and language use, the model is designed to produce simple, easily comprehensible text.

Because it learns from text generated by Flan-UL2 (20B), the model adopts a simple storyline structure and a minimal vocabulary, both of which are easier for a small model to learn and replicate.

## Training Data

The model is trained on the `crumb/flan-ul2-tinystories` dataset, created with the help of Flan-UL2 (20B). The data follows the format of a simple, first-grader-level narrative, which helps the model learn basic vocabulary and sentence structure.
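
For reference, the dataset can be inspected with the `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset card if it differs:

```python
from datasets import load_dataset

# ~250k short stories generated by Flan-UL2 (20B).
# The "train" split name is an assumption; see the dataset card.
dataset = load_dataset("crumb/flan-ul2-tinystories", split="train")
print(dataset[0])  # inspect one generated example
```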

## Usage

This model serves as a research tool for exploring the learning tendencies of smaller language models and their ability to grasp simplified language constructs. Its training set is built around the idea that a constrained vocabulary and simple story layouts are inherently easier to learn.
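
Below is a minimal generation sketch using the `transformers` library. The repo id `crumb/Tinystories-30m-UL2` is an assumption inferred from the model name above (substitute the actual checkpoint path), and the sampling parameters are illustrative defaults rather than recommended settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the model name above; replace with the real checkpoint.
model_id = "crumb/Tinystories-30m-UL2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A first-grader-level story prompt, matching the training distribution.
prompt = "Once upon a time, there was a little dog named"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```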