Finetuning datasets used?
Your model is performing extraordinarily well on ScandEval, far better than your AI-Sweden-Models/Llama-3-8B base model, so I'm a bit concerned that there may have been some data leakage. It's hard to check since you haven't stated which datasets you used for finetuning, but perhaps you know whether any of the ScandEval datasets were used?
The datasets in question are (at least) the following:
- 🇸🇪 SUC 3.0
- 🇸🇪 ScaLA-sv
- 🇸🇪 MMLU-sv
- 🇸🇪 HellaSwag-sv
- 🇩🇰 DANSK
- 🇩🇰 Angry Tweets
- 🇩🇰 ScaLA-da
- 🇩🇰 HellaSwag-da
I've also emailed 42labs with the same question, to see if they can shed some light on it.
@saattrupdan It's only been trained on Swedish, so the great scores in Norwegian and Danish would be transferred; it's impossible for there to be a leak. https://huggingface.co/four-two-labs/lynx-micro#training-data (it would be the same dataset). Did you get any more information from 42? Did you remove this model from ScandEval?
Hey @timpal0l .
I removed it from ScandEval temporarily while the question about the dataset was pending. I've added it back in now that it's clarified :)
Heard back from Ali as well, and he also reassured me that there was no leak, but he could not say much about the dataset itself due to its proprietary nature.