Xenova committed
Commit b4cb530
1 Parent(s): ffccee5

Update README.md (#1)


- Update README.md (4c85ed6ab76c5318f7050916bb5b743d01a9caa2)

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -1,21 +1,22 @@
 ---
 base_model: BAAI/bge-base-en-v1.5
 library_name: transformers.js
+license: mit
 ---
 
 https://huggingface.co/BAAI/bge-base-en-v1.5 with ONNX weights to be compatible with Transformers.js.
 
 ## Usage (Transformers.js)
 
-If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
+If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
 ```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
 ```
 
 You can then use the model to compute embeddings, as follows:
 
 ```js
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';
 
 // Create a feature-extraction pipeline
 const extractor = await pipeline('feature-extraction', 'Xenova/bge-base-en-v1.5');
@@ -40,7 +41,7 @@ console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
 
 You can also use the model for retrieval. For example:
 ```js
-import { pipeline, cos_sim } from '@xenova/transformers';
+import { pipeline, cos_sim } from '@huggingface/transformers';
 
 // Create a feature-extraction pipeline
 const extractor = await pipeline('feature-extraction', 'Xenova/bge-small-en-v1.5');
@@ -76,5 +77,6 @@ console.log(scores);
 // ]
 ```
 
+---
 
 Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
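
For reference, the only functional change in the usage snippets above is the package rename from `@xenova/transformers` to `@huggingface/transformers`. A minimal sketch of the full embedding flow with the updated import, assuming the standard Transformers.js feature-extraction options (`pooling: 'mean'`, `normalize: true`); the example sentences are illustrative, not taken from the diff:

```js
import { pipeline } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-base-en-v1.5');

// Compute sentence embeddings with mean pooling and L2 normalization,
// the usual setup for BGE-style embedding models
const sentences = ['Hello world.', 'Embeddings are useful for search.'];
const embeddings = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
```

With `normalize: true` the vectors come back unit-length, so a plain dot product between two of them already equals their cosine similarity.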
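
Likewise, a hedged sketch of the retrieval flow that the second import change feeds into, ranking a small corpus against a query with `cos_sim`; the query and documents below are hypothetical, made up for illustration:

```js
import { pipeline, cos_sim } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-small-en-v1.5');

// Hypothetical query and corpus
const query = 'What is the capital of France?';
const docs = ['Paris is the capital of France.', 'Berlin is the capital of Germany.'];

// Embed the query and documents in one batch, then rank by cosine similarity
const output = await extractor([query, ...docs], { pooling: 'mean', normalize: true });
const [queryEmbedding, ...docEmbeddings] = output.tolist();
const ranked = docEmbeddings
  .map((embedding, i) => ({ doc: docs[i], score: cos_sim(queryEmbedding, embedding) }))
  .sort((a, b) => b.score - a.score);
console.log(ranked);
```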