## Description

This is a fine-tuned google/siglip-so400m-patch14-384 intended for quantizing its embeddings to binary. Only the first 1024 embedding dimensions are used, so if you keep all 1152 of them your results will be noticeably worse. A minimal usage sketch is shown below.
I updated the model on April 30th and the evals are much better than before. I'm continuing training, so performance should only improve from here.
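As a rough illustration of how the binary quantization is meant to be applied, here is a minimal sketch using the standard `transformers` SigLIP API. The model id, the `example.jpg` input, and the sign-based threshold of 0 are assumptions for illustration, not confirmed details of this repo's training or evaluation pipeline.

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Placeholder id: substitute this repo's fine-tuned checkpoint.
model_id = "google/siglip-so400m-patch14-384"
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    emb = model.get_image_features(**inputs)  # shape: (1, 1152)

emb_1024 = emb[:, :1024]                       # keep only the first 1024 dimensions
binary = (emb_1024 > 0).to(torch.uint8)        # binarize by sign (assumed threshold)
packed = np.packbits(binary.numpy(), axis=-1)  # 1024 bits -> 128 bytes per image
```

Retrieval against the packed codes can then be done with Hamming distance (e.g. XOR plus popcount), which is the usual motivation for binary embeddings.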
## Evals

Coming soon.