andrewbai/distilabel-intel-orca-dpo-pairs_filtered_pref-skywork-8B Viewer • Updated about 18 hours ago • 6.42k • 13
R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model Paper • 2503.05132 • Published 6 days ago • 42
Defending LLMs against Jailbreaking Attacks via Backtranslation Paper • 2402.16459 • Published Feb 26, 2024 • 4
An Efficient Rehearsal Scheme for Catastrophic Forgetting Mitigation during Multi-stage Fine-tuning Paper • 2402.08096 • Published Feb 12, 2024
On the Loss of Context-awareness in General Instruction Fine-tuning Paper • 2411.02688 • Published Nov 5, 2024
andrewbai/ultrafeedback-binarized-preferences_ultrachat_sft-format_pml256_pref-skywork-8B_reject Viewer • Updated 26 days ago • 8.35k • 63
andrewbai/ultrafeedback-binarized-preferences_ultrachat_sft-format_pml256_pref-skywork-8B_chosen Viewer • Updated 26 days ago • 8.35k • 51
andrewbai/ultrafeedback-binarized-preferences_ultrachat_alpaca-format_pml256_pref-urm-8B Viewer • Updated Jan 12 • 8.35k • 66
andrewbai/ultrafeedback-binarized-preferences_ultrachat_alpaca-format_pml256_pref-skywork-8B Viewer • Updated Jan 8 • 8.35k • 472
andrewbai/ultrafeedback-binarized-preferences_ultrachat_sft-format_pml256_v2 Viewer • Updated Dec 13, 2024 • 8.35k • 55