juliuslipp and Xenova committed on
Commit 26a8408
1 Parent(s): 671dde7

Add transformers.js tag and example code (#3)

- Add transformers.js tag and example code (1d6119f75512283ae79aef1138ff26ff53caa4ff)
- Update README.md (fc727a35ef69e27686c8c1f60780dbe8523b9140)

Co-authored-by: Joshua <[email protected]>

Files changed (1): README.md (+30 −0)
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 tags:
 - mteb
+- transformers.js
 model-index:
 - name: mxbai-embed-2d-large-v1
   results:
@@ -2693,6 +2694,35 @@ print('similarities:', similarities)
 
 You’ll be able to use the models through our API as well. The API is coming soon and will have some exciting features. Stay tuned!
 
+### Transformers.js
+
+If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
+```bash
+npm i @xenova/transformers
+```
+
+You can then use the model to compute embeddings as follows:
+
+```js
+import { pipeline, cos_sim } from '@xenova/transformers';
+
+// Create a feature-extraction pipeline
+const extractor = await pipeline('feature-extraction', 'mixedbread-ai/mxbai-embed-2d-large-v1', {
+    quantized: false, // (Optional) remove this line to use the 8-bit quantized model
+});
+
+// Compute sentence embeddings (with `cls` pooling)
+const sentences = ['Who is german and likes bread?', 'Everybody in Germany.' ];
+const output = await extractor(sentences, { pooling: 'cls' });
+
+// Set embedding size and truncate embeddings
+const new_embedding_size = 768;
+const truncated = output.slice(null, [0, new_embedding_size]);
+
+// Compute cosine similarity
+console.log(cos_sim(truncated[0].data, truncated[1].data)); // 0.6979532021425204
+```
+
 ## Evaluation
 
 Please find more information in our [blog post](https://mixedbread.ai/blog/mxbai-embed-2d-large-v1).
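
The added example truncates embeddings to a smaller dimension before comparing them with `cos_sim`. As a rough sketch of what that comparison computes — using plain numeric arrays and an illustrative `cosSim` helper, not the library's actual Tensor-based implementation — cosine similarity over truncated vectors looks like this:

```javascript
// Minimal sketch (assumption: plain number arrays stand in for embedding data).
// Cosine similarity: dot(a, b) / (||a|| * ||b||).
function cosSim(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keeping only the first n dimensions, analogous to the diff's
// `output.slice(null, [0, new_embedding_size])` on a Tensor:
const truncate = (vec, n) => vec.slice(0, n);

const a = truncate([1, 2, 3, 4], 2);
const b = truncate([2, 4, 9, 9], 2);
console.log(cosSim(a, b)); // ≈ 1 (the vectors are parallel after truncation)
```

Because only the leading dimensions are kept, the similarity is computed entirely on the truncated prefix; the discarded dimensions have no effect on the result.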