
Garreth Lee PRO

garrethlee

AI & ML interests

None yet

Recent Activity

upvoted a paper 5 days ago: Qwen2.5 Technical Report
updated a Space 11 days ago: huggingface/number-tokenization-blog
liked a dataset 15 days ago: HuggingFaceFW/fineweb-2

Organizations

Hugging Face, Hugging Face TB Research, HuggingFaceFW, HuggingFaceFW-Dev, Hugging Face Science, AI Starter Pack

garrethlee's activity

updated a Space 19 days ago
posted an update 19 days ago
The latest o1 model from OpenAI is still unable to correctly answer whether 9.11 > 9.9 🤔

A possible explanation? Tokenization - and our latest work investigates how it affects a model's ability to do math!

In this blog post, we discuss:
🔢 The different ways numbers are tokenized in modern LLMs
🧪 Our detailed approach to comparing these methods fairly
🥪 How we got a free boost in arithmetic performance by adding a few lines of code to the base Llama 3 tokenizer (see the sketch below)
👑 and a definitive, best tokenization method for math in LLMs!

Check out our work here: huggingface/number-tokenization-blog
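For illustration, here is one way such a digit-splitting tweak could look on top of an existing fast tokenizer, using the tokenizers library's Digits pre-tokenizer. This is a minimal sketch of the general idea, not necessarily the exact change described in the blog, and the checkpoint name is just a placeholder (Llama 3 is gated, so swap in any fast tokenizer you have access to):

```python
from transformers import AutoTokenizer
from tokenizers import pre_tokenizers

# Load any Hugging Face fast tokenizer; the Llama 3 checkpoint requires access.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Prepend a Digits pre-tokenizer so every digit is split into its own token,
# leaving the rest of the original pre-tokenization pipeline untouched.
tok.backend_tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.Digits(individual_digits=True),
    tok.backend_tokenizer.pre_tokenizer,
])

print(tok.tokenize("9.11 > 9.9"))  # digits now appear one at a time
```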
reacted to burtenshaw's post with ❤️ 22 days ago
For anyone looking to boost their LLM fine-tuning and alignment skills this December: we're running a free and open course called smol course. It's not big like Li Yin and @mlabonne, it's just smol.

👷 It focuses on practical use cases, so if you’re working on something, bring it along.

👯‍♀️ It’s peer reviewed and open so you can discuss and get feedback.

🤘 If you’re already a smol pro, feel free to drop a star or issue.

> > Part 1 starts now, and it’s on instruction tuning!

https://github.com/huggingface/smol-course
posted an update 27 days ago
Does tokenizing numbers into single digits outperform three-digit or BPE tokenization for arithmetic tasks? We explore various tokenization methods in our upcoming blog (releasing next week 👀)!

🔹 Bringing objectivity to comparisons

Existing comparisons of number tokenization methods often ignore differences in the models' compute budgets: larger tokenizer vocabularies naturally lead to more parameters, which makes comparisons of model performance less objective because more of the "learning" is done by these bigger models.

We addressed this by keeping architectures consistent but adjusting the number of hidden layers to produce roughly equal parameter counts.
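A back-of-the-envelope sketch of why vocabulary size skews parameter counts, with made-up dimensions (the hidden size, layer count, and the 12*d^2 per-layer estimate are illustrative assumptions, not the configurations used in this work):

```python
# Rough parameter count for a Transformer with tied embeddings.
def approx_params(vocab: int, d: int = 2048, layers: int = 24) -> int:
    embeddings = vocab * d        # token embeddings (assume tied with the LM head)
    per_layer = 12 * d * d        # attention + MLP weights, ignoring norms/biases
    return embeddings + layers * per_layer

small_vocab = approx_params(vocab=32_000)
big_vocab = approx_params(vocab=128_000)
print(f"larger vocab adds ~{(big_vocab - small_vocab) / 1e6:.0f}M params")

# Dropping a few layers from the big-vocab model evens the budgets back out:
print(f"{approx_params(vocab=128_000, layers=20) / 1e6:.0f}M vs {small_vocab / 1e6:.0f}M")
```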

🔹 Key results

We trained models on the same data mix and evaluated their performance on various arithmetic tasks (digits, operations, floats vs. ints):

- When splitting evals based on operators, single-digit tokenization consistently outperformed other methods.
- Right-to-left tokenization (which I covered in a previous post) matched or exceeded left-to-right approaches in all tasks.

All in all, single-digit tokenization performs best, and, echoing our previous post's finding, R2L works better than L2R tokenization, although that gap is not as significant as the one between single-digit tokenization and the rest!
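To make concrete what right-to-left versus left-to-right grouping does to a raw digit string, here is a standalone sketch (illustrative only, not the training or evaluation code used here):

```python
def group_digits(number: str, size: int = 3, right_to_left: bool = True) -> list[str]:
    """Split a digit string into chunks of `size`, mimicking what a
    three-digit (or single-digit) tokenizer would see."""
    if right_to_left:
        # Walk from the right so only the leftmost chunk can be shorter.
        chunks = [number[max(0, i - size):i] for i in range(len(number), 0, -size)]
        return list(reversed(chunks))
    return [number[i:i + size] for i in range(0, len(number), size)]

print(group_digits("1234567"))                       # ['1', '234', '567']  (R2L)
print(group_digits("1234567", right_to_left=False))  # ['123', '456', '7']  (L2R)
print(group_digits("1234567", size=1))               # single-digit tokens
```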

The wait is almost over 🤗, the full report is coming next week - stay tuned!
reacted to jsulz's post with 🔥 about 1 month ago
When the XetHub crew joined Hugging Face this fall, @erinys and I started brainstorming how to share our work to replace Git LFS on the Hub. Uploading and downloading large models and datasets takes precious time. That’s where our chunk-based approach comes in.

Instead of versioning files (like Git and Git LFS), we version variable-sized chunks of data. For the Hugging Face community, this means:

⏩ Only upload the chunks that changed.
🚀 Download just the updates, not the whole file.
🧠 We store your file as deduplicated chunks.

In our benchmarks, we found that using CDC to store iterative model and dataset versions led to transfer speedups of ~2x, but this isn't just a performance boost. It's a rethinking of how we manage models and datasets on the Hub.
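For intuition, a toy sketch of content-defined chunking; the rolling fingerprint and size thresholds below are stand-ins, not the actual Xet implementation:

```python
import hashlib

def chunk(data: bytes, mask: int = 0x0FFF, min_size: int = 512, max_size: int = 8192):
    """Toy content-defined chunking: cut whenever a cheap rolling fingerprint
    matches a target pattern, so chunk boundaries follow the content rather
    than fixed byte offsets."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF      # toy rolling fingerprint
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

# Deduplication: identical chunks hash to the same key and are stored once,
# so editing one region of a file only produces a few new chunks to upload.
data = b"hello world, this is a big repetitive file. " * 5_000
unique = {hashlib.sha256(c).hexdigest(): c for c in chunk(data)}
print(len(chunk(data)), "chunks,", len(unique), "unique")
```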

We're planning to bring our new storage backend to the Hub in early 2025 - check out our blog to dive deeper, and let us know: how could this improve your workflows?

https://huggingface.co/blog/from-files-to-chunks

Convert dataset to Parquet

#5 opened about 1 month ago by garrethlee