peterpeter8585/Llama-3-Open-Ko-8B-Instruct-preview-Q5_K_M-GGUF • Text Generation
peterpeter8585/Llama-3-Open-Ko-8B-Instruct-preview-Q8_0-GGUF • Text Generation
Zephyr ORPO Collection • Models and datasets to align LLMs with Odds Ratio Preference Optimisation (ORPO). Recipes here: https://github.com/huggingface/alignment-handbook • 3 items • Updated Apr 12, 2024
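For orientation, below is a minimal sketch of ORPO preference fine-tuning with TRL's `ORPOTrainer`, in the spirit of the alignment-handbook recipes linked above. The base model (`gpt2` as a tiny stand-in), the inline toy dataset, and the hyperparameters are placeholders, not the actual Zephyr ORPO recipe settings, which live in the handbook repository.

```python
# Minimal ORPO sketch with TRL. Model, data and hyperparameters are
# illustrative placeholders, not the alignment-handbook recipe.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "gpt2"  # tiny stand-in for a real base model such as a Mistral/Llama checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# ORPO expects preference data with "prompt", "chosen" and "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt":   ["What does ORPO do?"],
    "chosen":   ["It adds an odds-ratio preference term to the SFT loss."],
    "rejected": ["No idea."],
})

config = ORPOConfig(
    output_dir="orpo-sketch",
    beta=0.1,                        # weight of the odds-ratio term (lambda in the ORPO paper)
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # recent TRL; older releases take tokenizer= instead
)
trainer.train()
```

The full recipes (base models, datasets, and tuned hyperparameters used for the Zephyr ORPO models in this collection) are in the alignment-handbook repository linked above.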