alvarobartt committed
Commit cbb9f7b · 1 Parent(s): 823a0b7

Update README.md

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -56,7 +56,21 @@ We used a VM with 8 x A100 40GB hosted in GCP.
 
 ### Training Data
 
- We used a a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/argilla/ultrafeedback-binarized-preferences).
+ We used a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).
+
+ TL;DR
+
+ After visually browsing around some examples using the sort and filter features of Argilla (sorting by highest rating for chosen responses), we noticed a strong mismatch between the `overall_score` in the original UF dataset (and the Zephyr train_prefs dataset) and the quality of the chosen response.
+
+ By adding the critique rationale to our Argilla dataset, we confirmed that the critique rationale was highly negative, whereas the rating was very high (in fact, the highest: `10`).
+
+ See the screenshot below for one example of this issue.
+
+ After some quick investigation, we identified hundreds of examples with the same issue, reported a bug on the UltraFeedback repo, and informed the H4 team.
+
+ While we work on fixing the original dataset (we have already narrowed down ~2K problematic examples), we decided to leverage the multi-preference ratings, leading to Notus!
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/M9qCKyAB_G1MbVBAPeitd.png)
 
 ## Prompt template
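
For reference, a minimal sketch of loading the curated dataset referenced in this commit with the 🤗 `datasets` library; the `train` split name and the column inspection below are assumptions for illustration, not details stated in the commit.

```python
# Minimal sketch (assumptions: the `datasets` library is installed and the
# dataset exposes a "train" split; column layout is not taken from this commit).
from datasets import load_dataset

ds = load_dataset("argilla/ultrafeedback-binarized-preferences", split="train")

print(ds)               # dataset size and features
print(ds.column_names)  # inspect the available columns
print(ds[0])            # look at one preference pair (assumed chosen/rejected layout)
```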