MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion
Abstract
Despite the remarkable capabilities of large language models across various tasks, their continued scaling faces a critical challenge: the scarcity of high-quality pretraining data. While model architectures continue to evolve, the supply of natural language data struggles to scale up with them. To tackle this bottleneck, we propose the MAssive Genre-Audience (MAGA) reformulation method, which systematically synthesizes diverse, contextually rich pretraining data from existing corpora. This work makes three main contributions: (1) We propose the MAGA reformulation method, a lightweight and scalable approach to pretraining corpus expansion, and use it to build MAGACorpus, a 770B-token dataset. (2) We evaluate MAGACorpus under different data-budget scaling strategies, demonstrating consistent improvements across model sizes (134M-13B) and establishing the necessity of large-scale synthetic data for next-generation pretraining. (3) Through comprehensive analysis, we investigate the impact of prompt engineering on synthetic training collapse and reveal the limitations of conventional collapse-detection metrics based on validation loss. Our work shows that MAGA can substantially expand training datasets while maintaining quality, offering a reliable pathway for scaling models beyond data limitations.
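As a rough illustration of the genre-audience reformulation idea described above, the sketch below expands a corpus by rewriting each document for several (genre, audience) pairs. The genre/audience pools, the prompt wording, and the function names (`build_prompt`, `reformulate_corpus`) are illustrative assumptions rather than the paper's actual implementation, and the `generate` callable stands in for whatever LLM backend performs the rewriting.

```python
# Minimal sketch of genre-audience reformulation for corpus expansion.
# All pools, prompts, and names here are hypothetical, not the authors' code.

from itertools import product
from typing import Callable, Iterable, Iterator

# Hypothetical genre/audience pools; the paper's actual pools may differ.
GENRES = ["textbook chapter", "news explainer", "Q&A thread", "lecture notes"]
AUDIENCES = ["middle-school students", "domain experts", "casual readers"]


def build_prompt(document: str, genre: str, audience: str) -> str:
    """Compose a reformulation prompt for one (genre, audience) pair."""
    return (
        f"Rewrite the following text as a {genre} aimed at {audience}. "
        "Preserve the factual content but adapt the style, structure, and "
        f"level of detail.\n\n---\n{document}\n---"
    )


def reformulate_corpus(
    documents: Iterable[str],
    generate: Callable[[str], str],
    pairs_per_doc: int = 2,
) -> Iterator[str]:
    """Expand a corpus by emitting several reformulated variants per document."""
    all_pairs = list(product(GENRES, AUDIENCES))
    for i, doc in enumerate(documents):
        # Rotate through (genre, audience) pairs so variants differ per document.
        for j in range(pairs_per_doc):
            genre, audience = all_pairs[(i + j) % len(all_pairs)]
            yield generate(build_prompt(doc, genre, audience))


if __name__ == "__main__":
    # Stand-in "LLM" that just echoes the prompt, keeping the sketch runnable.
    fake_llm = lambda prompt: prompt[:80] + "..."
    corpus = ["Large language models are trained on web-scale text corpora."]
    for variant in reformulate_corpus(corpus, fake_llm):
        print(variant)
```

With `pairs_per_doc` variants per source document, the expanded corpus scales roughly linearly in the number of reformulations, which is the mechanism the abstract relies on to grow the token budget without new source data.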
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for LLM Training (2025)
- Analyzing Similarity Metrics for Data Selection for Language Model Pretraining (2025)
- LLMic: Romanian Foundation Language Model (2025)
- PISCO: Pretty Simple Compression for Retrieval-Augmented Generation (2025)
- GEXIA: Granularity Expansion and Iterative Approximation for Scalable Multi-grained Video-language Learning (2024)
- Scalable In-Context Learning on Tabular Data via Retrieval-Augmented Large Language Models (2025)
- ChocoLlama: Lessons Learned From Teaching Llamas Dutch (2024)