TinyStories: How Small Can Language Models Be and Still Speak Coherent English?
https://arxiv.org/abs/2305.07759
cross-posted from: https://programming.dev/post/133153
Quote:
In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-old usually understands, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories of several paragraphs that are diverse, have almost perfect grammar, and demonstrate reasoning capabilities.
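For a concrete sense of scale, here is a minimal sketch of a one-block transformer in that size class, using the Hugging Face `transformers` GPT-Neo implementation (the architecture family the released checkpoints use). The hyperparameter values below are illustrative assumptions, not the paper's exact settings:

```python
# Sketch of a single-block causal LM well under 10M parameters.
# All sizes here are illustrative assumptions, not the paper's hyperparameters.
from transformers import GPTNeoConfig, GPTNeoForCausalLM

config = GPTNeoConfig(
    vocab_size=50257,                    # GPT-2/GPT-Neo BPE vocabulary
    max_position_embeddings=2048,
    hidden_size=64,                      # tiny embedding dimension
    num_layers=1,                        # a single transformer block
    num_heads=16,
    attention_types=[[["global"], 1]],   # one global-attention layer
)
model = GPTNeoForCausalLM(config)
print(f"{model.num_parameters():,} parameters")  # a few million, below 10M
```

Most of the parameters in a model this small sit in the token embedding table, which is why shrinking the hidden size matters more than cutting layers once you are down to one block.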
Related:
- Models (you can try them online): see the generation sketch after this list
- An interview with the authors (highly recommended): The Tiny Model Revolution with Ronen Eldan and Yuanzhi Li of Microsoft Research
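A quick way to try the released models locally, assuming the `transformers` library and one of the author-published checkpoints on the Hugging Face Hub. The model id `roneneldan/TinyStories-33M` and the GPT-Neo-125M tokenizer choice are assumptions based on the authors' Hugging Face page, so verify them there:

```python
# Hedged sketch: sample a story from a released TinyStories checkpoint.
# Model id and tokenizer source are assumptions; check the authors' HF page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "roneneldan/TinyStories-33M"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")  # per model card
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Once upon a time there was a little robot who"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=120, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```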