Pritam Kumar Ravi
PritamcodesAGI
AI & ML interests
LLMs for security, Computational Neuroscience.
Recent Activity
reacted to hesamation's post with ❤️ 12 days ago
this paper lists ways to make reasoning LLMs more efficient:
> enforce token limits per reasoning step
> route tasks to different models (small/large)
> compress reasoning chains during SFT
> reward based on reasoning length
> parallel search at test-time
and more...
@Xiaoye08 @yaful @Warrieryes
https://huggingface.co/papers/2503.21614
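The first technique in the list, enforcing a token budget per reasoning step, can be sketched in a few lines. This is a minimal illustration assuming whitespace tokenization and an already-split list of steps; the function name and parameters are hypothetical, and a real setup would use the model's own tokenizer inside the generation loop.

```python
def truncate_steps(steps, max_tokens_per_step=8):
    """Trim each reasoning step to at most max_tokens_per_step tokens.

    Sketch only: "token" here means a whitespace-separated word, not a
    subword token from a real tokenizer.
    """
    trimmed = []
    for step in steps:
        tokens = step.split()
        # Drop everything past the per-step budget.
        trimmed.append(" ".join(tokens[:max_tokens_per_step]))
    return trimmed

# Example: a verbose step gets cut to the budget, a short one is untouched.
steps = ["first we expand the product term by term carefully",
         "then simplify"]
print(truncate_steps(steps, max_tokens_per_step=5))
```

The same budget could instead be enforced during decoding by stopping generation once a step delimiter or the token limit is reached, which avoids discarding already-generated text.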
liked a model 12 days ago
manycore-research/SpatialLM-Llama-1B
liked a model 13 days ago
rasbt/llama-3.2-from-scratch