I tried the huge Jina AI regex, but it failed on my (admittedly messy) documents, e.g. from EUR-LEX. Their free Segmenter API is really cool, but it unfortunately times out on my large docs (~100 pages): https://jina.ai/segmenter/
I also tried writing a Vanilla JS chunker with simple, adjustable hierarchical logic (inspired by the above). I think it does a decent job for the few lines of code: https://do-me.github.io/js-text-chunker/
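For reference, here is a minimal sketch of that hierarchical idea (illustrative only, not the actual code from the repo): split by paragraphs first, then fall back to sentences and finally to words whenever a piece is still larger than the target chunk size.

```javascript
// Minimal sketch of a hierarchical chunker (not the repo's actual implementation).
// Splits by paragraphs first, then sentences, then words, until every chunk
// is at most `maxChars` characters long. Separators are dropped for brevity.
function chunkText(text, maxChars = 500) {
  const separators = ["\n\n", ". ", " "]; // hierarchy: paragraphs -> sentences -> words

  function split(piece, level) {
    if (piece.length <= maxChars || level >= separators.length) return [piece];
    return piece
      .split(separators[level])
      .filter((p) => p.trim().length > 0)
      .flatMap((p) => split(p, level + 1));
  }

  return split(text, 0);
}

console.log(chunkText("First paragraph.\n\nA second, much longer paragraph. With several sentences.", 40));
```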
WebGPU harnesses the full power of your hardware instead of restricting you to the CPU. The speedup is significant (4-60x) across all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups, or Apple Silicon. Measure the difference for your device here: Xenova/webgpu-embedding-benchmark. Chrome currently works out of the box; Firefox requires some tweaking.
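If you want to try WebGPU yourself, here is a minimal sketch (assuming the v3 @huggingface/transformers package; the model ID and options are just example choices):

```javascript
// Minimal sketch: running a feature-extraction pipeline on WebGPU with transformers.js v3.
// The model ID is an example choice; requires a browser with WebGPU support.
import { pipeline } from "@huggingface/transformers";

const extractor = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2",
  { device: "webgpu" }
);

const output = await extractor("Semantic search in the browser is fast now.", {
  pooling: "mean",
  normalize: true,
});
console.log(output.dims, output.data.slice(0, 5));
```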
WebGPU + transformers.js allows you to build amazing applications and make them accessible to everyone. E.g., SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: do-me/SemanticFinder. Happy to hear your ideas!
Hey HuggingFace, love your open source attitude and particularly transformers.js for embedding models! Your current "Use this model" integration gives you the transformers.js code, but there is no quick way to actually test a model in one click.

SemanticFinder (do-me/SemanticFinder) offers such an integration for all compatible feature-extraction models! All you need to do is add a URL parameter with the model ID, like so: https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5. You can also switch between quantized and normal mode with https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5&quantized=false.

Maybe that would do for an HF integration? I know it's a small open source project, but I really believe it provides value for devs before they decide on one model or the other. It's also much easier than having to spin up a notebook, install dependencies, etc. Everything stays private, so you could even do some real-world evaluation on personal data without having to worry about third-party services' data policies. Happy to hear the community's thoughts!
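Since the whole "integration" is just a link, it can also be generated programmatically; a small sketch using only the two URL parameters mentioned above:

```javascript
// Sketch: building a SemanticFinder test link for any compatible model ID.
// Uses only the `model` and `quantized` URL parameters described above.
function semanticFinderLink(modelId, quantized = true) {
  const url = new URL("https://do-me.github.io/SemanticFinder/");
  url.searchParams.set("model", modelId);
  if (!quantized) url.searchParams.set("quantized", "false");
  return url.toString();
}

console.log(semanticFinderLink("Xenova/bge-small-en-v1.5", false));
// Note: searchParams encodes the "/" in the model ID as %2F, which should decode
// to the same model ID on the SemanticFinder side.
```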
Get daily/weekly/monthly notifications about the latest trending feature-extraction models compatible with transformers.js for semantic search! All open source, built on GitHub Actions and ntfy.sh.
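For the curious, the notification part boils down to a single HTTP POST to ntfy.sh; here is a minimal sketch from a Node.js script (the topic name is a placeholder, not the real one):

```javascript
// Minimal sketch: publishing a notification to an ntfy.sh topic (Node 18+, ESM).
// The topic name below is a placeholder; subscribers listening on the same topic
// receive the message on their devices.
const topic = "trending-transformersjs-models"; // placeholder topic

await fetch(`https://ntfy.sh/${topic}`, {
  method: "POST",
  headers: { Title: "New trending feature-extraction models" },
  body: "New transformers.js-compatible models are trending today.",
});
```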
I'm also providing daily updated tables (filterable and sortable, by ONNX model size too!) if you only want to have a look once in a while. Download whatever suits you best: CSV, XLSX, Parquet, JSON, HTML.
Would you like to monitor other models/tags? Feel free to open a PR :)
I noticed that when I use the HF model search with these tags:
- feature-extraction
- transformers.js
it does not show all models that are actually tagged.
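As a cross-check, the Hub REST API can be queried directly; a small sketch (I'm assuming here that repeating the filter parameter combines both tags; if not, the second tag can be filtered client-side via the tags field):

```javascript
// Sketch: cross-checking the search UI against the Hub REST API.
// Assumption: repeating `filter` ANDs the tags; otherwise filter client-side on `tags`.
const url =
  "https://huggingface.co/api/models?filter=feature-extraction&filter=transformers.js&limit=100";

const models = await (await fetch(url)).json();
console.log(models.length, "models returned");
for (const m of models.slice(0, 10)) console.log(m.id);
```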
Hey, I just added three useful advanced use cases to do-me/SemanticFinder. SemanticFinder includes a collection of embeddings for public documents and books. You can create your own index file from any text or PDF and save it without installing or downloading anything. Try it yourself: