---
language:
- mk
tags:
- macedonian
- text
- corpus
- cleaned
- deduplicated
datasets:
- LVSTCK/macedonian-corpus-cleaned-deduplicated
license: cc-by-4.0
---

# Macedonian Corpus - Cleaned and Deduplicated

## 🌟 Key Highlights

- **Size**: 16.78 GB, **Word Count**: 1.47 billion
- Deduplicated using **MinHash** to remove redundant documents.

## 📋 Overview

Macedonian is widely recognized as a low-resource language in the field of NLP. Publicly available resources in Macedonian are extremely limited, and as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitalized books and documents in Macedonia. The country lags behind in this regard, with many books and documents existing only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in the Macedonian language.

To address these challenges, we created this **Macedonian Corpus**. It consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources. This version of the corpus is both **cleaned** and **deduplicated**, processed to ensure high-quality text.

Other versions: [raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw), [cleaned with less aggressive deduplication](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-cleaned)

The filtering was done using [datatrove](https://github.com/huggingface/datatrove), largely motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), and consists of the following steps (illustrative sketches of several steps follow the list):

**1. C4-like Filtering.** Removes irrelevant or low-quality lines based on content (e.g., "javascript", "lorem ipsum") and structural rules (e.g., minimum word count, terminal punctuation).
- Removed lines containing irrelevant content such as "javascript" and lines with any word exceeding 1000 characters.
- Excluded placeholder content like "lorem ipsum" and policy-related phrases such as "privacy policy" or "terms of use."
- Filtered out lines with fewer than 3 words.
- Excluded lines lacking terminal punctuation (e.g., ., ?, !).

**2. Gopher-like Filtering.** Filters out documents with excessive bullet points or repetitive ellipses to ensure completeness.
- Limited the presence of bullet points by rejecting documents where more than 90% of lines started with bullet-like characters (e.g., -, •, *).
- Filtered out documents where more than 30% of lines ended with ellipses (...) to avoid overly repetitive or incomplete content.

**3. Language Filtering.** Retains only high-confidence Macedonian text.
- Applied the FT176LID model to detect and retain only high-confidence Macedonian text.
- Excluded non-Macedonian content (language confidence score below 0.65).

**4. Sentence Deduplication.** Removes duplicate sentences to improve dataset quality and reduce over-representation.
- Splits documents into sentences.
- Identifies duplicates using unique sentence signatures.
- Removes flagged duplicates.

**5. PII Filtering.** Removes personally identifiable information.
- Removed all PII, including email addresses, IP addresses, and phone numbers.

**6. Text Chunking and Cleaning.** Breaks texts into manageable chunks of at most 4000 characters; applied only to data sourced from MMORE. This step also corrects common errors identified during qualitative evaluation and deletes specific unwanted text patterns.

**7. MinHash Deduplication.** Uses MinHash for efficient document-level deduplication.

The full implementation of all filtering steps can be found on [GitHub](https://github.com/LVSTCK/macedonian-corpus/blob/main/filtering).
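
To make steps 1 and 2 concrete, here is a minimal, self-contained Python sketch of comparable line- and document-level heuristics. The thresholds mirror the values listed above, but the function names and the banned-phrase list are illustrative stand-ins, not the actual datatrove filters used for the corpus.

```python
# Sketch of C4-like (line-level) and Gopher-like (document-level) heuristics.
# Thresholds mirror the values described above; the real pipeline uses
# datatrove's built-in filters.

BANNED_SUBSTRINGS = ("javascript", "lorem ipsum", "privacy policy", "terms of use")
TERMINAL_PUNCTUATION = (".", "?", "!")
BULLET_CHARS = ("-", "•", "*")

def keep_line(line: str) -> bool:
    """C4-like line filter: content and structural rules."""
    lowered = line.lower()
    if any(s in lowered for s in BANNED_SUBSTRINGS):
        return False
    words = line.split()
    if len(words) < 3:                      # too few words
        return False
    if any(len(w) > 1000 for w in words):   # pathological "words"
        return False
    if not line.rstrip().endswith(TERMINAL_PUNCTUATION):  # no terminal punctuation
        return False
    return True

def keep_document(text: str) -> bool:
    """Gopher-like document filter: bullet and ellipsis ratios."""
    lines = [l for l in text.splitlines() if l.strip()]
    if not lines:
        return False
    bullet_ratio = sum(l.lstrip().startswith(BULLET_CHARS) for l in lines) / len(lines)
    ellipsis_ratio = sum(l.rstrip().endswith("...") for l in lines) / len(lines)
    return bullet_ratio <= 0.9 and ellipsis_ratio <= 0.3

def clean_document(text: str) -> str | None:
    """Apply line filters first, then reject the document if ratios are off."""
    kept = [l for l in text.splitlines() if keep_line(l)]
    cleaned = "\n".join(kept)
    return cleaned if keep_document(cleaned) else None
```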
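
For step 3, language identification can be sketched with the `fasttext` package, assuming FT176LID corresponds to fastText's 176-language identification model (`lid.176.bin`). The helper and example sentence below are ours, and the model file is assumed to be downloaded locally.

```python
# Sketch of the language filter (step 3) with fastText's 176-language model,
# available at https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
import fasttext

MODEL_PATH = "lid.176.bin"   # local path to the downloaded model (assumption)
MIN_CONFIDENCE = 0.65        # threshold used for this corpus

model = fasttext.load_model(MODEL_PATH)

def is_macedonian(text: str) -> bool:
    # fastText expects a single line of input, so collapse newlines first.
    labels, scores = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__mk" and scores[0] >= MIN_CONFIDENCE

print(is_macedonian("Ова е реченица на македонски јазик."))  # expect: True
```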
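
Step 4 can be approximated in a single process with exact sentence signatures: normalize each sentence, hash it, and drop any sentence whose signature has been seen before. The real pipeline runs in distributed stages with a proper sentence splitter; the regex splitter here is a deliberate simplification.

```python
# Naive single-process sketch of sentence deduplication (step 4).
import hashlib
import re

seen_signatures: set[bytes] = set()

def sentence_signature(sentence: str) -> bytes:
    # Normalize whitespace and case so trivial variants collapse together.
    normalized = re.sub(r"\s+", " ", sentence).strip().lower()
    return hashlib.sha1(normalized.encode("utf-8")).digest()

def deduplicate_sentences(text: str) -> str:
    kept = []
    # Crude split on terminal punctuation; real pipelines use a proper splitter.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        sig = sentence_signature(sentence)
        if sig not in seen_signatures:
            seen_signatures.add(sig)
            kept.append(sentence)
    return " ".join(kept)
```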
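
Step 5 is pattern-based scrubbing. The regexes below are simplified stand-ins for production PII rules (phone-number formats in particular vary widely); they only illustrate the three categories removed from this corpus.

```python
# Simplified sketch of PII removal (step 5): replace emails, IPv4 addresses,
# and phone-like number sequences with placeholder tokens.
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<email>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<ip>"),
    (re.compile(r"\+?\d[\d /().-]{7,}\d"), "<phone>"),
]

def scrub_pii(text: str) -> str:
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub_pii("Контакт: ime.prezime@example.com, +389 70 123 456"))
# -> "Контакт: <email>, <phone>"
```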
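
Step 6's chunking can be sketched as a whitespace-aware splitter that caps chunks at 4000 characters. The error-correction part of that step is corpus-specific and not reproduced here.

```python
# Sketch of text chunking (step 6): split a document into chunks of at most
# 4000 characters, breaking on whitespace so words stay intact. Assumes no
# single word exceeds the limit (the C4-like filter above already removes
# words over 1000 characters).
MAX_CHUNK_CHARS = 4000

def chunk_text(text: str) -> list[str]:
    chunks, current, length = [], [], 0
    for word in text.split():
        # +1 accounts for the joining space.
        if current and length + len(word) + 1 > MAX_CHUNK_CHARS:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + (1 if length else 0)
    if current:
        chunks.append(" ".join(current))
    return chunks
```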
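
Finally, step 7 can be illustrated with the `datasketch` library: build a MinHash signature per document from word shingles and use locality-sensitive hashing to drop near-duplicates. The threshold and shingle size below are illustrative choices; the corpus itself was deduplicated with datatrove's MinHash stages.

```python
# Sketch of document-level MinHash deduplication (step 7) using datasketch
# (pip install datasketch). The production pipeline uses datatrove instead.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128        # hash permutations per signature
THRESHOLD = 0.8       # approximate Jaccard similarity cut-off (illustrative)

def minhash_of(text: str) -> MinHash:
    m = MinHash(num_perm=NUM_PERM)
    # 5-gram word shingles; shingle size is a tunable choice.
    words = text.split()
    for i in range(max(len(words) - 4, 1)):
        shingle = " ".join(words[i:i + 5])
        m.update(shingle.encode("utf-8"))
    return m

def deduplicate(docs: list[str]) -> list[str]:
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for idx, doc in enumerate(docs):
        sig = minhash_of(doc)
        if not lsh.query(sig):          # no near-duplicate seen so far
            lsh.insert(str(idx), sig)
            kept.append(doc)
    return kept
```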

### 📚 Dataset Splits

| Origin | Size (GB) | Words (B) | Percentage |
|-------------------------|-----------|-----------|------------|
| HuggingFace (fineweb-2) | 7.85 | 0.73 | 49.55% |
| HPLT-2 | 5.80 | 0.54 | 36.87% |
| CLARIN (MaCoCu-mk 2.0) | 1.94 | 0.18 | 12.39% |
| Wikipedia (mkwiki) | 0.13 | 0.01 | 0.83% |
| Other (MMORE) | 0.04 | 0.004 | 0.25% |
| Common Voice | 0.02 | 0.002 | 0.12% |
| **Total** | **16.78** | **1.47** | **100%** |

## ⚙️ Usage

The corpus is provided as a JSONL file, where each line contains two fields:

- `text`: The raw textual data.
- `source`: The source of the text.

```json
{"text": "Пример текст.", "source": "fineweb-2"}
```

## 📬 Contact

For inquiries, feedback, or contributions, please feel free to reach out to the core team:

- [Stefan Krsteski](https://www.linkedin.com/in/stefan-krsteski-136abb235/) [📧](mailto:stefan.krsteski@gmail.com)
- [Borjan Sazdov](https://www.linkedin.com/in/borjan-sazdov-4b2187211/) [📧](mailto:borjansazdov@yahoo.com)
- [Matea Tashkovska](https://www.linkedin.com/in/matea-tashkovska-774603198/) [📧](mailto:matea_tas@yahoo.com)

### 🎉 Special Thanks

A big thank you to the following individuals:

- [Said Gürbüz](https://www.linkedin.com/in/saidgurbuz/?originalSubdomain=tr)
- [Vinko Sabolcec](https://huggingface.co/vsabolcec)

## ⚖️ Legal

### Notice and Takedown Policy

We adhere strictly to copyright and data ownership laws. If you identify any material within the corpus that infringes on your rights, please contact us so it can be reviewed and, if warranted, removed.

### License

Creative Commons Attribution 4.0 (CC BY 4.0)