Xenova (HF staff) committed abaf785 (verified) · 1 parent: 640b329

Create README.md

Files changed (1): README.md (+57 −0)
README.md (added):

---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---

# YOLOv10: Real-Time End-to-End Object Detection

ONNX weights for https://github.com/THU-MIG/yolov10.

Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
![latency-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cXru_kY_pRt4n4mHERnFp.png) | ![size-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/8apBp9fEZW2gHVdwBN-nC.png)

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform object detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10b', {
    // quantized: false, // (Optional) Use unquantized version.
});

// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10b');

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);

// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
    if (score < threshold) continue;
    const bbox = [xmin, ymin, xmax, ymax].map(x => x.toFixed(2)).join(', ');
    console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Example output:
// Found "car" at [447.84, 378.56, 639.25, 478.67] with score 0.94.
// Found "car" at [176.77, 336.73, 398.79, 418.09] with score 0.94.
// Found "bicycle" at [351.96, 526.97, 463.51, 588.49] with score 0.93.
// Found "bicycle" at [449.39, 477.36, 555.63, 538.10] with score 0.91.
// Found "person" at [474.16, 429.78, 534.73, 534.29] with score 0.91.
// Found "bicycle" at [1.51, 517.97, 110.14, 584.14] with score 0.90.
// Found "person" at [31.05, 469.64, 79.11, 567.90] with score 0.90.
// Found "person" at [394.04, 479.20, 442.46, 587.36] with score 0.89.
// ...
```
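
The boxes above are expressed in the coordinate space of the preprocessed input tensor (`pixel_values`). If your image's size differs from the model's input resolution, you can map the boxes back to the original image. The snippet below is a minimal sketch building on the variables from the example, assuming the processor performs a plain resize without letterbox padding (check this checkpoint's `preprocessor_config.json` if unsure); `detections`, `xScale`, and `yScale` are illustrative names, not part of the library API.

```js
// Recover the model's input size from the pixel_values tensor: [batch, channels, height, width].
const [, , inputHeight, inputWidth] = pixel_values.dims;

// Scale factors from model-input space back to the original image (assumes a plain resize).
const xScale = image.width / inputWidth;
const yScale = image.height / inputHeight;

// Collect detections above the threshold as structured objects with rescaled boxes.
const detections = predictions
    .filter(([, , , , score]) => score >= threshold)
    .map(([xmin, ymin, xmax, ymax, score, id]) => ({
        label: model.config.id2label[id],
        score,
        box: {
            xmin: xmin * xScale,
            ymin: ymin * yScale,
            xmax: xmax * xScale,
            ymax: ymax * yScale,
        },
    }));

console.log(detections);
// e.g. [ { label: 'car', score: 0.94, box: { xmin: ..., ymin: ..., xmax: ..., ymax: ... } }, ... ]
```

When the image already matches the model's input resolution, the scale factors are 1 and the boxes are unchanged.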