AI & ML interests

AI for the physical world, TinyML, Embedded Systems

Recent Activity

UsefulSensors's activity

Xenova posted an update 12 days ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? 🔥
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
✂️ Implement sentence splitting, allowing for streamed responses (see the sketch below)
🌍 Multilingual support (only phonemization left)

Who wants to help?
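To make the sentence-splitting idea concrete, here is a minimal sketch built on the kokoro-js API shown in the Kokoro.js post further down; the regex splitter and chunk file names are illustrative placeholders, not part of the library:

import { KokoroTTS } from "kokoro-js";

// Load the quantized model once, then synthesize sentence by sentence so the
// first chunk of audio is ready long before the full text has been processed.
const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" },
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
// Naive sentence splitter; a real implementation would handle abbreviations, etc.
const sentences = text.match(/[^.!?]+[.!?]+/g) ?? [text];

for (const [i, sentence] of sentences.entries()) {
  const audio = await tts.generate(sentence.trim(), { voice: "af_sky" });
  audio.save(`chunk-${i}.wav`); // or feed each chunk into an AudioContext for playback
}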

Add float TFLite files (#7), opened 13 days ago by petewarden
Xenova posted an update about 1 month ago
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by 🤗 Transformers.js. WebGPU support coming soon!
👉 npm i kokoro-js 👈

Try it out yourself: webml-community/kokoro-web
Link to models/samples: onnx-community/Kokoro-82M-ONNX

You can get started in just a few lines of code!
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // fp32, fp16, q8, q4, q4f16
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text,
  { voice: "af_sky" }, // See `tts.list_voices()`
);
audio.save("audio.wav");

Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all! 🤗

The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality! 🤯
Xenova posted an update about 2 months ago
First project of 2025: Vision Transformer Explorer

I built a web app to interactively explore the self-attention maps produced by ViTs. This explains what the model is focusing on when making predictions, and provides insights into its inner workings! 🤯

Try it out yourself! 👇
webml-community/attention-visualization

Source code: https://github.com/huggingface/transformers.js-examples/tree/main/attention-visualization
Xenova posted an update 2 months ago
Introducing Moonshine Web: real-time speech recognition running 100% locally in your browser!
🚀 Faster and more accurate than Whisper
🔒 Privacy-focused (no data leaves your device)
⚡️ WebGPU accelerated (w/ WASM fallback; see the device check below)
🔥 Powered by ONNX Runtime Web and Transformers.js

Demo: webml-community/moonshine-web
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/moonshine-web
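For reference, a minimal sketch of the kind of WebGPU-or-WASM device check the fallback implies; the helper below is illustrative and not taken from the demo's source:

// Prefer WebGPU when the browser exposes a usable adapter, otherwise fall back to WASM.
async function pickDevice() {
  if (!("gpu" in navigator)) return "wasm";
  const adapter = await navigator.gpu.requestAdapter();
  return adapter ? "webgpu" : "wasm";
}

const device = await pickDevice();
console.log(`Running Moonshine on ${device}`);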
Xenova posted an update 2 months ago
Introducing TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration! 🔥 High-quality and natural speech generation that runs 100% locally in your browser, powered by OuteTTS and Transformers.js. 🤗 Try it out yourself!

Demo: webml-community/text-to-speech-webgpu
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/text-to-speech-webgpu
Model: onnx-community/OuteTTS-0.2-500M (ONNX), OuteAI/OuteTTS-0.2-500M (PyTorch)
Xenova posted an update 3 months ago
We just released Transformers.js v3.1 and you're not going to believe what's now possible in the browser w/ WebGPU! 🤯 Let's take a look:
🔀 Janus from DeepSeek for unified multimodal understanding and generation (Text-to-Image and Image-Text-to-Text)
👁️ Qwen2-VL from Qwen for dynamic-resolution image understanding
🔢 JinaCLIP from Jina AI for general-purpose multilingual multimodal embeddings
🌋 LLaVA-OneVision from ByteDance for Image-Text-to-Text generation
🤸‍♀️ ViTPose for pose estimation
📄 MGP-STR for optical character recognition (OCR)
📈 PatchTST & PatchTSMixer for time series forecasting

That's right, everything running 100% locally in your browser (no data sent to a server)! 🔥 Huge for privacy!

Check out the release notes for more information. 👇
https://github.com/huggingface/transformers.js/releases/tag/3.1.0

Demo link (+ source code): webml-community/Janus-1.3B-WebGPU
Xenova posted an update 3 months ago
Have you tried out 🤗 Transformers.js v3? Here are the new features:
⚡ WebGPU support (up to 100x faster than WASM)
🔢 New quantization formats (dtypes)
🏛️ 120 supported architectures in total
📂 25 new example projects and templates
🤖 Over 1200 pre-converted models
🌐 Node.js (ESM + CJS), Deno, and Bun compatibility
🏡 A new home on GitHub and NPM

Get started with npm i @huggingface/transformers.

Learn more in our blog post: https://huggingface.co/blog/transformersjs-v3
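As a small example of the v3 API described above (the checkpoint name is just an illustrative Transformers.js-compatible model, not the only option):

import { pipeline } from "@huggingface/transformers";

// Feature extraction on WebGPU with 8-bit weights (one of the new dtypes).
const extractor = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2",
  { device: "webgpu", dtype: "q8" },
);

const embeddings = await extractor("Transformers.js v3 runs on WebGPU.", {
  pooling: "mean",
  normalize: true,
});
console.log(embeddings.dims); // e.g. [1, 384]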
Xenova posted an update 6 months ago
I can't believe this... Phi-3.5-mini (3.8B) running in-browser at ~90 tokens/second on WebGPU w/ Transformers.js and ONNX Runtime Web! 🤯 Since everything runs 100% locally, no messages are sent to a server: a huge win for privacy!
- 🤗 Demo: webml-community/phi-3.5-webgpu
- 🧑‍💻 Source code: https://github.com/huggingface/transformers.js-examples/tree/main/phi-3.5-webgpu
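For context, a hedged sketch of what in-browser chat looks like with the Transformers.js text-generation pipeline; the model id below is an assumption, so check the demo's source for the exact ONNX build it loads:

import { pipeline } from "@huggingface/transformers";

// Model id is a placeholder for the WebGPU-ready ONNX export used by the demo.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Phi-3.5-mini-instruct-onnx-web",
  { device: "webgpu" },
);

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain WebGPU in one sentence." },
];

const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);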
Xenova posted an update 6 months ago
I'm excited to announce that Transformers.js V3 is finally available on NPM! 🔥 State-of-the-art Machine Learning for the web, now with WebGPU support! 🤯⚡️

Install it from NPM with:
npm i @huggingface/transformers

or via CDN, for example: https://v2.scrimba.com/s0lmm0qh1q

Segment Anything demo: webml-community/segment-anything-webgpu
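For the CDN route, a minimal sketch (jsdelivr is shown as one assumed option; any CDN that serves the NPM package as an ES module works):

// Import straight from a CDN instead of installing from NPM.
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0";

const classifier = await pipeline("sentiment-analysis");
const result = await classifier("Transformers.js v3 with WebGPU is finally here!");
console.log(result); // e.g. [{ label: "POSITIVE", score: 0.99 }]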
Xenova posted an update 7 months ago
Introducing Whisper Diarization: Multilingual speech recognition with word-level timestamps and speaker segmentation, running 100% locally in your browser thanks to 🤗 Transformers.js!

Tested on this iconic Letterman interview w/ Grace Hopper from 1983!
- Demo: Xenova/whisper-speaker-diarization
- Source code: Xenova/whisper-speaker-diarization
Xenova posted an update 7 months ago
Introducing Whisper Timestamped: Multilingual speech recognition with word-level timestamps, running 100% locally in your browser thanks to 🤗 Transformers.js! Check it out!
👉 Xenova/whisper-word-level-timestamps 👈

This unlocks a world of possibilities for in-browser video editing! 🤯 What will you build?

Source code: https://github.com/xenova/transformers.js/tree/v3/examples/whisper-word-timestamps
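A minimal sketch of requesting word-level timestamps through the Transformers.js speech-recognition pipeline; the model id and audio URL are placeholders:

import { pipeline } from "@huggingface/transformers";

const transcriber = await pipeline(
  "automatic-speech-recognition",
  "Xenova/whisper-tiny.en",
);

// Ask for per-word timestamps instead of per-chunk ones.
const output = await transcriber("https://example.com/audio.wav", {
  return_timestamps: "word",
});
console.log(output.chunks); // e.g. [{ text: " Hello", timestamp: [0.0, 0.32] }, ...]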
Xenova posted an update 8 months ago
Florence-2, the new vision foundation model by Microsoft, can now run 100% locally in your browser on WebGPU, thanks to Transformers.js! 🤗🤯

It supports tasks like image captioning, optical character recognition, object detection, and many more! WOW!
- Demo: Xenova/florence2-webgpu
- Models: https://huggingface.co/models?library=transformers.js&other=florence2
- Source code: https://github.com/xenova/transformers.js/tree/v3/examples/florence2-webgpu
Xenova posted an update 8 months ago
Introducing Whisper WebGPU: blazingly fast ML-powered speech recognition directly in your browser! 🚀 It supports multilingual transcription and translation across 100 languages! 🤯

The model runs locally, meaning no data leaves your device!

Check it out! 👇
- Demo: Xenova/whisper-webgpu
- Source code: https://github.com/xenova/whisper-web/tree/experimental-webgpu
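To illustrate the multilingual side, a hedged sketch of transcribing and translating with the same pipeline; the model id, audio URL, and device option are assumptions:

import { pipeline } from "@huggingface/transformers";

const transcriber = await pipeline(
  "automatic-speech-recognition",
  "Xenova/whisper-base",
  { device: "webgpu" },
);

// Transcribe French speech and translate the result into English.
const result = await transcriber("https://example.com/french.wav", {
  language: "french",
  task: "translate",
});
console.log(result.text);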
Xenova posted an update 10 months ago
Introducing Phi-3 WebGPU, a private and powerful AI chatbot that runs 100% locally in your browser, powered by 🤗 Transformers.js and onnxruntime-web!

🔒 On-device inference: no data sent to a server
⚡️ WebGPU-accelerated (> 20 t/s)
📥 Model downloaded once and cached (see the loading sketch below)

Try it out: Xenova/experimental-phi3-webgpu
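A small sketch of tracking that one-time download with a progress callback (the model id is a placeholder; the demo's actual loading code may differ):

import { pipeline } from "@huggingface/transformers";

// The first run downloads the weights; later runs are served from the browser cache.
const generator = await pipeline("text-generation", "Xenova/Phi-3-mini-4k-instruct", {
  device: "webgpu",
  progress_callback: (p) => {
    if (p.status === "progress") {
      console.log(`${p.file}: ${p.progress.toFixed(1)}%`);
    }
  },
});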