Set the temperature within the 0.5-0.7 range (0.6 is recommended) to prevent endless repetition or incoherent output.
huangbin (bin110)
AI & ML interests: none yet
Recent Activity: replied to csabakecskemeti's post 3 days ago
I've run the Open LLM Leaderboard evaluations plus HellaSwag on https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B and compared the results to https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct, and at first glance R1 does not beat Llama overall.
If anyone wants to double-check, the results are posted here:
https://github.com/csabakecskemeti/lm_eval_results
Did I make a mistake somewhere, or is this distilled version (at least) really not better than the competition?
I'll run the same on the Qwen 7B distilled version too.
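The linked repo holds result files from EleutherAI's lm-evaluation-harness; a small sketch for diffing two such runs per task (the metric key follows the harness's typical output layout, and the scores below are illustrative placeholders, NOT the actual numbers from the repo):

```python
# Illustrative placeholder results (NOT the real scores); actual files
# come from `lm_eval ... --output_path results.json`.
r1_results = {"results": {"hellaswag": {"acc_norm,none": 0.70}}}
llama_results = {"results": {"hellaswag": {"acc_norm,none": 0.72}}}

def diff_results(a: dict, b: dict, metric: str = "acc_norm,none") -> dict:
    """Per-task score difference (a minus b) for tasks present in both runs."""
    shared = a["results"].keys() & b["results"].keys()
    return {
        task: round(a["results"][task][metric] - b["results"][task][metric], 4)
        for task in shared
    }

print(diff_results(r1_results, llama_results))  # {'hellaswag': -0.02}
```

A negative entry means the first model (here, the R1 distill) scored below the second on that task, which makes eyeballing "does R1 beat Llama?" across many tasks quick.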
Organizations: none yet