Lewis Tunstall PRO

lewtun

AI & ML interests

LLMs, LLMs, LLMs

Recent Activity

Articles

Organizations

Hugging Face, AutoNLP, Natural Language Processing with Transformers, BigScience Workshop, Ought, Hugging Face Internal Testing Organization, Testing Benchmarks on the Hub, Hugging Face Course, NLP en ES, GEM benchmark, SetFit, Benchmarks Hosting, GEM benchmark submissions, ALPS test, Evaluation datasets, Deep Learning for Particle Physicists, fast.ai community, DreamBooth Hackathon, trl internal testing, SomosNLP, HF Course Demos, Marsyas (Music Analysis, Retrieval and Synthesis for Audio Signals), ONNXConfig for all, How to teach Hugging Face?, Jet Universe, Evaluation on the Hub, The ML Landscape of Top Taggers, HuggingFaceM4, HF Canonical Model Maintainers, TRL, BigCode, Hugging Face H4, Inference Endpoints, Hugging Face OSS Metrics, BigCode Data, Reading Group, Hugging Face H4 Community, Hugging Face TB Research, Hugging Face Smol Cluster, Open LLM Leaderboard, EPFL LLM Team, H4 Alignment Handbook, ZeroGPU Explorers, h4-argilla-collab, Project-Numina, ORPO Explorers, Kato, Distillation Hugs, Hugging Face Discord Community, Data Agents, nltpt, IOPO Experiments, Hugging Face FineVideo, Reliable Agents, Hugging Face Science, HF CMU Collab

lewtun's activity

reacted to prithivMLmods's post with 🚀 6 days ago
Reasoning SmolLM2 🚀

🎯 Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.

🔥 Blog : https://huggingface.co/blog/prithivMLmods/smollm2-ft

🔼 Models :
+ SmolLM2-CoT-360M : prithivMLmods/SmolLM2-CoT-360M
+ Reasoning-SmolLM2-135M : prithivMLmods/Reasoning-SmolLM2-135M
+ SmolLM2-CoT-360M-GGUF : prithivMLmods/SmolLM2-CoT-360M-GGUF

🤠 Other Details :
+ Demo : prithivMLmods/SmolLM2-CoT-360M
+ Fine-tuning notebook : prithivMLmods/SmolLM2-CoT-360M
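
To make the fine-tuning idea concrete, here is a minimal SFT sketch with trl; the base checkpoint, dataset name, and hyperparameters below are illustrative placeholders rather than the exact recipe from the blog post.

```python
# Minimal SFT sketch (illustrative): fine-tune a SmolLM2 checkpoint on a reasoning dataset with trl.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: expected to contain chat-style "messages" (question + chain-of-thought answer)
dataset = load_dataset("your-username/synthetic-reasoning-dataset", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M-Instruct",  # assumed base checkpoint
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="smollm2-cot",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
)
trainer.train()
```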




posted an update 6 days ago
I was initially pretty sceptical about Meta's Coconut paper [1] because the largest perf gains were reported on toy linguistic problems. However, these results on machine translation are pretty impressive!

https://x.com/casper_hansen_/status/1875872309996855343

Together with the recent PRIME method [2] for scaling RL, reasoning for open models is looking pretty exciting for 2025!

[1] Training Large Language Models to Reason in a Continuous Latent Space (2412.06769)
[2] https://huggingface.co/blog/ganqu/prime
posted an update 13 days ago
This paper (HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs (2412.18925)) has a really interesting recipe for inducing o1-like behaviour in Llama models:

* Iteratively sample CoTs from the model, using a mix of different search strategies. This gives you something like Stream of Search via prompting.
* Verify correctness of each CoT using GPT-4o (needed because exact match doesn't work well in medicine where there are lots of aliases)
* Use GPT-4o to reformat the concatenated CoTs into a single stream that includes smooth transitions like "hmm, wait", etc. that one sees in o1
* Use the resulting data for SFT & RL
* Use sparse rewards from GPT-4o to guide RL training. They find that SFT on this data already gives a strong improvement, and that RL adds an average ~3-point boost across medical benchmarks.

Applying this strategy to other domains could be quite promising, provided the training data can be formulated with verifiable problems!
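
As a rough illustration (not the paper's code), here is what the sample-then-verify loop could look like, assuming a transformers pipeline for the policy model and the OpenAI client for the GPT-4o verifier; the model names, prompts, and strategy hints are placeholders.

```python
# Rough sketch of the sample-then-verify loop described above (illustrative placeholders throughout).
from openai import OpenAI
from transformers import pipeline

judge = OpenAI()  # GPT-4o acts as the verifier
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")  # assumed policy model

def sample_cot(problem: str, strategy_hint: str) -> str:
    """Sample one chain of thought, nudged by a search-strategy hint (e.g. backtrack, explore a new path)."""
    prompt = f"{problem}\n\nThink step by step. {strategy_hint}"
    return generator(prompt, max_new_tokens=512, do_sample=True, temperature=0.8)[0]["generated_text"]

def is_correct(problem: str, reference: str, cot: str) -> bool:
    """Ask GPT-4o whether the CoT reaches the reference answer (exact match fails on medical aliases)."""
    reply = judge.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Problem: {problem}\nReference answer: {reference}\nCandidate reasoning: {cot}\n"
                "Does the reasoning reach the reference answer? Reply yes or no."
            ),
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

def build_trace(problem: str, reference: str,
                strategies=("try directly", "backtrack and retry", "explore a new path")):
    """Keep sampling with different strategies until a verified CoT is found, then concatenate the attempts."""
    attempts = []
    for hint in strategies:
        cot = sample_cot(problem, hint)
        attempts.append(cot)
        if is_correct(problem, reference, cot):
            # The concatenated attempts would then be reformatted by GPT-4o into one smooth o1-style stream
            return "\n".join(attempts)
    return None  # discard problems where no correct CoT was found
```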
posted an update 26 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
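
For intuition, here is a toy best-of-N sketch of the verifier-guided idea: sample several candidate solutions from a small model and let a reward model pick the winner. The model names are placeholders; the compute-optimal and tree-search variants live in the search-and-learn repo.

```python
# Toy best-of-N sketch of verifier-guided test-time scaling (placeholder model names).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")  # assumed small policy model

# Assumed reward model: any scalar-head (process or outcome) reward model would do for this sketch
rm_name = "your-org/math-reward-model"
rm_tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(rm_name, num_labels=1)

def best_of_n(problem: str, n: int = 8) -> str:
    """Sample n candidate solutions, score each with the reward model, and return the highest-scoring one."""
    candidates = [
        out["generated_text"]
        for out in generator(problem, max_new_tokens=512, do_sample=True,
                             temperature=0.8, num_return_sequences=n)
    ]
    scores = []
    for cand in candidates:
        inputs = rm_tokenizer(problem, cand, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores.append(reward_model(**inputs).logits[0, 0].item())
    return candidates[max(range(n), key=lambda i: scores[i])]
```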
reacted to julien-c's post with 🤗❤️🔥 about 1 month ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free and, barring blatant abuse, unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
replied to dvilasuero's post 7 months ago

Welcome to the team @dvilasuero and Argilla! It's been really nice collaborating with you on various projects around LLM alignment and I'm excited to see what we'll build next together!

reacted to dvilasuero's post with 🤝❤️🤗🚀🔥 7 months ago
Today is a huge day in Argilla's history. We couldn't be more excited to share this with the community: we're joining Hugging Face!

We're embracing a larger mission, becoming part of a brilliant and kind team and a shared vision about the future of AI.

Over the past year, we've been collaborating with Hugging Face on countless projects: launching the Docker Spaces partnership, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr's learnings, the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we're now the same team.

To those of you who've been following us, this won't be a huge surprise, but it will be a big deal in the coming months. This acquisition means we'll double down on empowering the community to build and collaborate on high quality datasets, we'll bring full support for multimodal datasets, and we'll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger, a larger team with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!
replied to BramVanroy's post 7 months ago

I am not aware of any public ablations which validate this, but I suspect it has become less important for chat models, where one cares more about performance on human evaluation than on academic benchmarks like MMLU (which are OK for selecting base models, but less so for chat/instruct ones).

reacted to JustinLin610's post with 🚀🔥 9 months ago
Finally, Qwen1.5-110B is out! With weights and demo!

Blog: https://qwenlm.github.io/blog/qwen1.5-110b/
Demo: Qwen/Qwen1.5-110B-Chat-demo
Base: Qwen/Qwen1.5-110B
Chat: Qwen/Qwen1.5-110B-Chat

This model has some specific features:
* GQA
* 32K token context length
* Multilingual support

We feel good about its performance on benchmarks, including those for base models and chat models, but we still need more of your testing and feedback to help us understand its capabilities and limitations!

Additionally, the base model has not learned the chatml tokens, so if you use the chatml format, you need to be careful about it!

Enjoy and stay tuned for Qwen2!
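
For reference, here is a minimal generation sketch with the chat variant via transformers; apply_chat_template takes care of the chatml markers, and the generation settings are just illustrative.

```python
# Minimal chat sketch: use the chat model, whose tokenizer inserts the chatml markers for you.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-110B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
# apply_chat_template wraps each turn in <|im_start|> ... <|im_end|> (chatml) before tokenizing
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```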



reacted to Sentdex's post with 👍 9 months ago
Benchmarks!

I have lately been diving deep into the main benchmarks we all use to evaluate and compare models.

If you've never actually looked under the hood for how benchmarks work, check out the LM eval harness from EleutherAI: https://github.com/EleutherAI/lm-evaluation-harness

+ check out the benchmark datasets: you can find the ones for the LLM leaderboard on the About tab here: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, then click a dataset and actually peek at the data that comprises these benchmarks.

It feels to me like benchmarks only represent a tiny portion of what we actually use and want LLMs for, and I doubt I'm alone in that sentiment.

Beyond this, the actual evaluations of model responses are extremely strict and often rely on rudimentary NLP techniques when, at this point, we have LLMs that are more than capable of evaluating and scoring responses.

It feels like we've made great strides in the quality of LLMs themselves, but almost no change in the quality of how we benchmark.

If you have any ideas for how benchmarks could be a better assessment of an LLM, or know of good research papers that tackle this challenge, please share!
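
If you want to look under the hood programmatically, here is a small sketch assuming the lm-evaluation-harness (v0.4+) Python API; the model and task are just examples.

```python
# Sketch: run a single benchmark with the LM eval harness and inspect the metrics (assumes lm-eval >= 0.4).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HuggingFaceTB/SmolLM2-360M-Instruct",  # any small HF model works for a quick look
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["hellaswag"])  # per-task metrics, e.g. acc / acc_norm
```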
reacted to VictorSanh's post with ❤️🚀🔥 9 months ago
Glad to see Idefics2 making its way into the awesome OpenVLM Leaderboard which ranks VLMs. 🏆
2nd in its category (<10B parameters and open weights)!

While InternLM-XComposer2 uses proprietary data, Idefics2 is built solely using openly available data.

Leaderboard: opencompass/open_vlm_leaderboard
Model: HuggingFaceM4/idefics2-8b