---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Primus-FineWeb
tags:
- cybersecurity
- pretraining
- FineWeb
size_categories:
- 1M
---

⭐ Please download the dataset from [here](https://huggingface.co/datasets/trendmicro-ailab/Primus-FineWeb).

# PRIMUS: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training

## 🤗 Primus-FineWeb

The **Primus-FineWeb** dataset is constructed by filtering cybersecurity-related text from FineWeb, a refined version of Common Crawl. We began by leveraging _Primus-Seed_, a high-quality dataset of manually curated cybersecurity text, as positive samples. We then sampled ten times that amount of data from FineWeb as negative samples and trained a **binary cybersecurity classifier** based on TinyBERT. Using this classifier, we assigned each text in FineWeb a score between **0 and 1** and retained texts with a score greater than **0.003**, creating Primus-FineWeb with 15.3 billion tokens. However, after discovering a significant amount of duplicate content, we performed deduplication, reducing the final dataset to **🔥 2.57 billion tokens of cybersecurity corpus**.

🚀🚀 For more details, see our paper: [https://arxiv.org/abs/2502.11191](https://arxiv.org/abs/2502.11191)

---

## Why was the threshold set at 0.003?

We divided the score range (0-1) into several bins and randomly sampled 50 examples from each bin. These samples were then scored by GPT-4o to determine the proportion of text that was "_truly_" cybersecurity-related. We found that when the score fell below 0.003, the proportion of cybersecurity text dropped below 50%.

*(Figure: threshold selection — proportion of truly cybersecurity-related text per score bin.)*

## FineWeb: Cybersecurity Score vs. Token Count

*(Figure: cybersecurity score vs. token count across FineWeb.)*

---

## License

This dataset is released under the **ODC-By** license. However, you must still comply with the **FineWeb license** and the **Common Crawl Terms of Use**.
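
---

## Appendix: Filtering sketch

To make the score-and-filter step concrete, here is a minimal Python sketch. It is an illustration, not the released pipeline: the classifier checkpoint name is a hypothetical placeholder, the FineWeb config shown is just an example, and only the 0.003 threshold comes from this card.

```python
# Minimal sketch of scoring FineWeb texts with a binary cybersecurity
# classifier and keeping those above the threshold described in this card.
# NOTE: "path/to/cybersecurity-classifier" is a hypothetical checkpoint name.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

THRESHOLD = 0.003  # texts scoring above this are kept (see card)

tokenizer = AutoTokenizer.from_pretrained("path/to/cybersecurity-classifier")
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/cybersecurity-classifier"
)
model.eval()

def cyber_score(texts):
    """Return P(cybersecurity) in [0, 1] for a batch of texts."""
    inputs = tokenizer(
        texts, truncation=True, padding=True, max_length=512, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 is assumed to be the positive ("cybersecurity") class.
    return torch.softmax(logits, dim=-1)[:, 1].tolist()

# Stream a FineWeb shard and keep only texts above the threshold.
# The "sample-10BT" config is an example; the full filtering ran over FineWeb.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)
kept = (ex for ex in fineweb if cyber_score([ex["text"]])[0] > THRESHOLD)
```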
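The threshold itself was chosen by the binning audit described above. The sketch below outlines that procedure; the bin edges are illustrative (only the 50-samples-per-bin figure comes from this card), and the GPT-4o judging step is left as an offline call.

```python
# Sketch of the calibration step: bucket scored texts by classifier score
# and sample 50 per bin for manual/GPT-4o auditing. Bin edges are illustrative.
import random

BIN_EDGES = [0.0, 0.001, 0.003, 0.01, 0.1, 0.5, 1.0]  # hypothetical edges

def assign_bin(score):
    """Map a score in [0, 1] to its (lo, hi) bin."""
    for lo, hi in zip(BIN_EDGES, BIN_EDGES[1:]):
        if lo <= score < hi:
            return (lo, hi)
    return (BIN_EDGES[-2], BIN_EDGES[-1])  # score == 1.0

def sample_per_bin(scored_texts, k=50, seed=0):
    """scored_texts: iterable of (text, score). Returns {bin: [texts]}."""
    rng = random.Random(seed)
    bins = {}
    for text, score in scored_texts:
        bins.setdefault(assign_bin(score), []).append(text)
    return {b: rng.sample(ts, min(k, len(ts))) for b, ts in bins.items()}
```

Samples from each bin would then be judged by GPT-4o; per the card, 0.003 was chosen because below that score the proportion of truly cybersecurity-related text fell under 50%.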