TinyStories: How Small Can Language Models Be and Still Speak Coherent English?

arxiv.org/abs/2305.07759

cross-posted from: https://programming.dev/post/133153

Quote:

In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-old usually understands, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or that have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories several paragraphs long that are diverse, have almost perfect grammar, and demonstrate reasoning capabilities.
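A back-of-the-envelope calculation shows how a one-block transformer stays under the 10-million-parameter mark the abstract mentions. The vocabulary size and hidden dimension below are illustrative assumptions, not the exact values from the paper:

```python
# Rough parameter count for a decoder with a single transformer block.
# vocab_size and d_model are assumed example values, not the paper's.

def transformer_params(vocab_size: int, d_model: int, n_layers: int = 1,
                       mlp_ratio: int = 4, tied_embeddings: bool = True) -> int:
    embed = vocab_size * d_model             # token embedding table
    attn = 4 * d_model * d_model             # Q, K, V, and output projections
    mlp = 2 * mlp_ratio * d_model * d_model  # MLP up- and down-projections
    per_block = attn + mlp
    head = 0 if tied_embeddings else vocab_size * d_model
    return embed + n_layers * per_block + head

total = transformer_params(vocab_size=10_000, d_model=256)
print(total)                 # -> 3346432, well below 10 million
```

Even with the embedding table dominating the count, a single block at a modest hidden size lands in the low millions of parameters, consistent with the sub-10M regime the paper explores.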
