Advice on replicating selection for other languages
Great to see your educational domain selection from FineWeb 2 for Japanese!
I was wondering if you had any advice on how you trained the classifier for data selection, as a lot of the documentation seems to be in Japanese :) I am working on multilingual BabyLM with collaborators, and we are aiming to carry out similar domain selection for a range of other languages, including Dutch, Hungarian, Japanese, Indonesian, Korean, and more.
Any help would be much appreciated!
Marcell Fekete
@Varmer Thank you for your interest in my FineWeb 2 Japanese educational domain selection project, Marcell!
Your multilingual BabyLM project sounds excellent. I'm happy to share some thoughts on classifier training for domain selection:
For the classifier, mMiniLMv2-L6-H384 should be sufficient - it's compact and fast. Training a single multilingual classifier that covers all of your target languages would likely be the most straightforward approach.
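As a rough illustration, here is a minimal sketch of fine-tuning such a model as a score regressor, in the style of the fineweb-edu classifier. The checkpoint name and the tiny inline dataset are placeholders, not my exact setup:

```python
# Minimal sketch: fine-tune a compact multilingual encoder to regress
# LLM-annotated educational scores. Checkpoint and data are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

MODEL = "nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
# num_labels=1 with a regression problem type gives an MSE loss on one score.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=1, problem_type="regression"
)

# Documents from all target languages mixed together, each with an
# LLM-annotated educational score in [0, 5].
data = {"text": ["An example lesson on fractions ..."], "score": [3.0]}

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = [float(s) for s in batch["score"]]
    return enc

ds = Dataset.from_dict(data).map(
    tokenize, batched=True, remove_columns=["text", "score"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="multilingual-edu-classifier", num_train_epochs=3),
    train_dataset=ds,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```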
For scoring, I recommend the additive scoring method from fineweb-edu, where the judge model accumulates one point per educational criterion met, for a score from 0 to 5. It has proven effective.
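Concretely, the judge is asked to add points criterion by criterion and to end with a parseable score line. The template below is a hedged paraphrase, not the exact published fineweb-edu prompt; you would want to tune the criteria wording for your languages:

```python
import re

# Rough paraphrase of an additive educational-value prompt (the published
# fineweb-edu wording differs); points accumulate from 0 to 5.
PROMPT_TEMPLATE = """Below is an extract from a web page. Judge its educational value
using an additive 5-point system: add 1 point if the text contains some information
relevant to education, a 2nd if it touches on topics taught in school, a 3rd if it is
coherent and usable for study, a 4th if it is clearly written for learners, and a 5th
if it is outstanding teaching material.

Extract: {text}

End your answer with: "Educational score: <points>"
"""

def parse_score(completion: str) -> int | None:
    """Extract the integer score from the judge model's completion."""
    match = re.search(r"Educational score:\s*([0-5])", completion)
    return int(match.group(1)) if match else None
```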
High-performance multilingual models with permissive licenses, such as the MIT-licensed DeepSeek-R1, are now available. These would be excellent for generating educational scores across multiple languages.
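As a sketch of the annotation loop, here is how you might query such a model through an OpenAI-compatible endpoint (for example a local vLLM server). The base URL, API key, and model id are placeholders, and it reuses the `PROMPT_TEMPLATE` and `parse_score` from the sketch above:

```python
# Placeholder endpoint and model id; reuses PROMPT_TEMPLATE / parse_score
# from the previous sketch.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def annotate(text: str) -> int | None:
    """Ask the judge model for an additive educational score for one document."""
    response = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1",  # placeholder model id
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=text)}],
        temperature=0.0,
    )
    return parse_score(response.choices[0].message.content)
```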
You might also want to look at https://huggingface.co/datasets/epfml/FineWeb2-HQ, which uses embedding-based score filtering. This could provide an alternative approach to your selection process.
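For a flavor of the embedding route, here is a much-simplified sketch that scores each candidate document by cosine similarity to a centroid of trusted high-quality seed documents. To be clear, FineWeb2-HQ trains a classifier on top of multilingual embeddings rather than using a raw centroid, and the model name below is just one common multilingual choice, not necessarily theirs:

```python
# Simplified stand-in for embedding-based quality filtering: rank candidates
# by cosine similarity to a centroid of trusted seed documents.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

seed_docs = ["A well-written encyclopedia article ..."]  # trusted high-quality texts
candidates = ["A web document to score ..."]             # documents to filter

centroid = model.encode(seed_docs, normalize_embeddings=True).mean(axis=0)
centroid /= np.linalg.norm(centroid)

emb = model.encode(candidates, normalize_embeddings=True)
scores = emb @ centroid  # cosine similarity to the quality centroid

# Threshold (or top-k fraction) should be tuned on held-out data.
keep = [doc for doc, s in zip(candidates, scores) if s > 0.5]
```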
Best of luck with your multilingual BabyLM project!