Finetuning datasets used?

#6
by saattrupdan - opened
AI Sweden Model Hub org

Your model is performing extraordinarily well on ScandEval, way better than your AI-Sweden-Models/Llama-3-8B base model, so I'm a bit concerned that there's been some data leakage. It's hard to check since you haven't stated which datasets you used for finetuning, but perhaps you know if any of the ScandEval datasets were used?

The datasets in question are (at least) the following:

  • πŸ‡ΈπŸ‡ͺ SUC 3.0
  • πŸ‡ΈπŸ‡ͺ ScaLA-sv
  • πŸ‡ΈπŸ‡ͺ MMLU-sv
  • πŸ‡ΈπŸ‡ͺ HellaSwag-sv
  • πŸ‡©πŸ‡° DANSK
  • πŸ‡©πŸ‡° Angry Tweets
  • πŸ‡©πŸ‡° ScaLA-da
  • πŸ‡©πŸ‡° HellaSwag-da

I've also emailed 42labs about the same question, to see if they can shed some light on it.
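
For reference, this is roughly the kind of contamination check I'd run if the finetuning corpus were available: flag test examples whose word 8-grams mostly appear in the training text. It's only a sketch; the file paths, the JSONL layout with a "text" field, and the 0.5 threshold are all assumptions, not anything from ScandEval itself.

```python
# Minimal n-gram contamination check: flags evaluation examples whose
# word 8-grams largely appear in the finetuning corpus.
# File paths and the JSONL "text" field are hypothetical placeholders.
import json
import re
from pathlib import Path

NGRAM = 8  # 8-gram overlap is a common, conservative contamination signal


def ngrams(text: str, n: int = NGRAM) -> set[tuple[str, ...]]:
    """Lowercased word n-grams of a string."""
    tokens = re.findall(r"\w+", text.lower())
    return {tuple(tokens[i : i + n]) for i in range(len(tokens) - n + 1)}


def main() -> None:
    # Hypothetical inputs: the finetuning corpus as one big text file,
    # and an evaluation test split dumped to JSONL with a "text" field.
    train_ngrams = ngrams(Path("finetuning_corpus.txt").read_text(encoding="utf-8"))

    flagged = 0
    with open("eval_test_split.jsonl", encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            example = json.loads(line)
            example_ngrams = ngrams(example["text"])
            if not example_ngrams:
                continue
            overlap = len(example_ngrams & train_ngrams) / len(example_ngrams)
            if overlap > 0.5:  # arbitrary threshold; tune as needed
                flagged += 1
                print(f"possible leak at line {line_no}: {overlap:.0%} 8-gram overlap")

    print(f"{flagged} potentially contaminated examples")


if __name__ == "__main__":
    main()
```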

AI Sweden Model Hub org
β€’
edited 17 days ago

@saattrupdan It's only been trained on Swedish, so the strong scores in Norwegian and Danish would come from transfer; it can't be a leak. https://huggingface.co/four-two-labs/lynx-micro#training-data (it would be the same dataset). Did you get any more information from 42? Did you remove this model from ScandEval?

AI Sweden Model Hub org

Hey @timpal0l.

I removed it from ScandEval temporarily while the question about the dataset was pending. I've added it back now that it's been clarified :)

Heard back from Ali as well, and he also reassured me that there was no leak, but he couldn't say much about the dataset itself due to its proprietary nature.

saattrupdan changed discussion status to closed