Avelina Hadji-Kyriacou

Avelina

AI & ML interests

Trying to squeeze the most performance out of small language models to bring AI inference to the user and keep personal data out of the cloud.

Recent Activity

liked a dataset 6 days ago: argilla/magpie-ultra-v1.0
liked a dataset 6 days ago: AI-MO/NuminaMath-CoT
liked a dataset 6 days ago: HuggingFaceTB/finemath

Organizations

Social Post Explorers

Avelina's activity

New activity in PleIAs/common_corpus 27 days ago

How to filter for language?

#2 opened about 1 month ago by Rijgersberg
New activity in PleIAs/common_corpus about 1 month ago

Is Common Corpus Pre-Shuffled?

#4 opened about 1 month ago by Avelina
New activity in Avelina/UltraSteer-v0-flat 2 months ago
replied to their post 4 months ago

Each attribute should lie in the range zero to four. However, the included labels are given as-is by the reward model, which means some values may fall slightly outside this range, so it is recommended that you clamp all attributes between zero and four.

We included the unclamped versions because you may want the exact outputs of the reward model for some specific reason; if we had clamped these values in the dataset, you would be unable to recover them.
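The clamping described above can be sketched in a few lines of Python. The attribute names below are hypothetical placeholders for illustration, not the dataset's actual schema:

```python
# Minimal sketch: clamp reward-model attribute scores to the range [0, 4].
# The attribute names here ("helpfulness", "coherence", "verbosity") are
# placeholders, not the actual columns of Avelina/UltraSteer-v0-flat.

def clamp_attributes(example, attribute_keys):
    """Clamp each listed attribute of a dataset row to [0, 4] in place."""
    for key in attribute_keys:
        example[key] = max(0.0, min(4.0, example[key]))
    return example

row = {"helpfulness": 4.13, "coherence": -0.02, "verbosity": 2.5}
clamped = clamp_attributes(row, ["helpfulness", "coherence", "verbosity"])
# clamped == {"helpfulness": 4.0, "coherence": 0.0, "verbosity": 2.5}
```

A function of this shape can also be passed to a mapping utility (for example, the `map` method of a Hugging Face `datasets.Dataset`) to clamp an entire dataset in one pass.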