
youssef boulaouane

byoussef

AI & ML interests

None yet

Recent Activity

liked a Space about 23 hours ago
osanseviero/InstantCoder
liked a Space 5 days ago
ByteDance/InfiniteYou-FLUX
liked a Space 10 days ago
Remade-AI/remade-effects

Organizations

Social Post Explorers, Hugging Face Discord Community

byoussef's activity

reacted to tianchez's post with 🚀 about 1 month ago
Introducing VLM-R1!

GRPO helped DeepSeek R1 learn reasoning. Can it also help VLMs perform better on general computer vision tasks?

The answer is YES, and it generalizes better than SFT. We trained Qwen 2.5 VL 3B on RefCOCO (a visual grounding task) and evaluated on RefCOCO Val and RefGTA (an out-of-distribution task).

https://github.com/om-ai-lab/VLM-R1
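
For a rough idea of what this kind of training looks like in code, here is a minimal sketch using TRL's GRPOTrainer. The dataset file, reward function, and hyperparameters are illustrative assumptions rather than the VLM-R1 recipe (the repo above has the actual training code), and VLM-specific image handling is glossed over.

```python
# Hypothetical sketch: GRPO fine-tuning of Qwen 2.5 VL with TRL's GRPOTrainer.
# Dataset file, reward function, and hyperparameters are placeholders,
# not the VLM-R1 recipe -- see the om-ai-lab/VLM-R1 repo for the real code.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def grounding_reward(completions, **kwargs):
    """Toy reward: 1.0 if the completion contains a bounding-box tag, else 0.0.
    A real visual-grounding reward would compute IoU against the ground-truth box."""
    return [1.0 if "<box>" in c else 0.0 for c in completions]

# Placeholder dataset with a "prompt" column, as expected by GRPOTrainer.
dataset = load_dataset("json", data_files="refcoco_prompts.json", split="train")

config = GRPOConfig(
    output_dir="qwen2.5-vl-3b-grpo",
    per_device_train_batch_size=2,
    num_generations=4,           # completions sampled per prompt for group-relative advantages
    max_completion_length=128,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # assumed model ID; image inputs omitted in this sketch
    reward_funcs=grounding_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```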
reacted to andrewrreed's post with 🔥 3 months ago
🚀 Supercharge your LLM apps with Langfuse on Hugging Face Spaces!

Langfuse brings end-to-end observability and tooling to accelerate your dev workflow from experiments through production

Now available as a Docker Space directly on the HF Hub! 🤗

🔍 Trace everything: monitor LLM calls, retrieval, and agent actions with popular frameworks
1️⃣ One-click deployment: on Spaces with persistent storage and integrated OAuth
🛠 Simple Prompt Management: Version, edit, and update without redeployment
✅ Intuitive Evals: Collect user feedback, run model/prompt evaluations, and improve quality
📊 Dataset Creation: Build datasets directly from production data to enhance future performance

Kudos to the Langfuse team for this collab and the awesome, open-first product they’re building! 👏 @marcklingen @Clemo @MJannik

🔗 Space: langfuse/langfuse-template-space
🔗 Docs: https://huggingface.co/docs/hub/spaces-sdks-docker-langfuse
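
For context, here is a minimal sketch of how an app could send traces to a Langfuse instance running in such a Space, using the Langfuse Python SDK's @observe decorator and its OpenAI drop-in wrapper. The Space URL, API keys, and model name are placeholders, and the exact calls may differ between SDK versions, so treat this as an assumption rather than the official setup (the docs link above has the real instructions).

```python
# Minimal sketch: tracing LLM calls against a Langfuse deployment on a HF Space.
# Host URL, API keys, and model name are placeholders; create real keys in the
# project settings of your deployed Langfuse Space.
import os

os.environ.setdefault("LANGFUSE_HOST", "https://your-username-langfuse.hf.space")  # placeholder Space URL
# os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."  # from your Langfuse project settings
# os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

from langfuse.decorators import observe, langfuse_context  # v2-style decorator API (assumed)
from langfuse.openai import OpenAI  # drop-in wrapper that logs generations to Langfuse

client = OpenAI()

@observe()  # wraps this function in a Langfuse trace
def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer("What does Langfuse trace?"))
langfuse_context.flush()  # make sure traces are sent before the script exits
```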
reacted to merve's post with 🚀 4 months ago
small but mighty 🔥
you can fine-tune SmolVLM on an L4 with a batch size of 4 and it will only take 16.4 GB VRAM 🫰🏻 with gradient accumulation, the simulated batch size goes up to 16 ✨
I made a notebook that includes all the goodies: QLoRA, gradient accumulation, and gradient checkpointing, with explanations of how they work: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
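
The notebook above has the full walkthrough; just to illustrate how those pieces fit together, here is a minimal sketch of QLoRA + gradient accumulation + gradient checkpointing with transformers and peft. The model ID, target modules, and hyperparameters are assumptions, not the notebook's exact settings.

```python
# Rough sketch of QLoRA + gradient accumulation + gradient checkpointing.
# Model ID, target modules, and hyperparameters are illustrative, not the notebook's exact values.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint

# 4-bit quantization (the "Q" in QLoRA): weights stored in NF4, compute in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Low-rank adapters on the attention projections; only these small matrices are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="smolvlm-ft",
    per_device_train_batch_size=4,   # the batch size that fits on a single L4 in this setup
    gradient_accumulation_steps=4,   # 4 x 4 = simulated batch size of 16
    gradient_checkpointing=True,     # recompute activations in the backward pass to save memory
    bf16=True,
    num_train_epochs=1,
)
# Pass `model`, `training_args`, your dataset, and a data collator to a Trainer to run the fine-tune.
```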