BritishWerewolf committed on
Commit
905da59
·
1 Parent(s): 42784e5

Add ONNX model (fp32).

Files changed (4)
  1. README.md +53 -0
  2. config.json +14 -0
  3. onnx/model.onnx +3 -0
  4. preprocessor_config.json +27 -0
README.md CHANGED
@@ -1,3 +1,56 @@
  ---
+ library_name: transformers
+ pipeline_tag: image-segmentation
+ tags:
+ - isnet
+ - dis
+ - anime
+ - image-segmentation
+ - mask-generation
+ - transformers.js
  license: apache-2.0
+ language:
+ - en
  ---
+ # IS-Net-Anime
+
+ ## Model Description
+ IS-Net-Anime is a deep learning model for dichotomous image segmentation (DIS), tuned for anime-style images. Given an input image, it produces a single high-quality foreground mask, making it well suited to tasks such as background removal and character extraction.
+
+ ## Usage
+ Perform mask generation with `BritishWerewolf/IS-Net-Anime`.
+
+ ### Example
+ ```javascript
+ import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
+
+ const img_url = 'https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png';
+ const image = await RawImage.read(img_url);
+
+ const processor = await AutoProcessor.from_pretrained('BritishWerewolf/IS-Net-Anime');
+ const processed = await processor(image);
+
+ const model = await AutoModel.from_pretrained('BritishWerewolf/IS-Net-Anime', {
+     dtype: 'fp32',
+ });
+
+ const output = await model({ input: processed.pixel_values });
+ // {
+ //     mask: Tensor {
+ //         dims: [ 1, 1024, 1024 ],
+ //         type: 'uint8',
+ //         data: Uint8Array(1048576) [ ... ],
+ //         size: 1048576
+ //     }
+ // }
+ ```
+
+ ### Inference
+ To run inference, follow the example above. The `AutoProcessor` and `AutoModel` classes from Transformers.js (`@huggingface/transformers`) handle loading the processor and model.
+
+ ## Credits
+ * [`rembg`](https://github.com/danielgatis/rembg) for the ONNX model.
+ * The original anime-segmentation model by SkyTNT: https://github.com/SkyTNT/anime-segmentation.
+
+ ## Licence
+ This model is licensed under the Apache License 2.0, matching the original anime-segmentation model.
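The `mask` tensor shown in the README above is a single-channel `uint8` array (0 = background, 255 = foreground) covering the padded 1024×1024 canvas. A minimal, dependency-free sketch of how such a mask could be composited back onto RGBA pixel data; the `applyMask` helper is illustrative, not part of the repository's API:

```javascript
// Write a single-channel uint8 mask into the alpha channel of RGBA
// pixel data, so masked-out (background) pixels become transparent.
function applyMask(rgba, mask) {
    if (rgba.length !== mask.length * 4) {
        throw new Error('mask and image dimensions do not match');
    }
    const out = Uint8Array.from(rgba);
    for (let i = 0; i < mask.length; i++) {
        out[i * 4 + 3] = mask[i]; // alpha = mask value
    }
    return out;
}

// Tiny 2-pixel example: first pixel kept, second made transparent.
const rgba = Uint8Array.from([255, 0, 0, 255, 0, 255, 0, 255]);
const mask = Uint8Array.from([255, 0]);
console.log(Array.from(applyMask(rgba, mask)));
// → [ 255, 0, 0, 255, 0, 255, 0, 0 ]
```

In practice the real mask would come from `output.mask.data` and the RGBA buffer from the decoded source image, resized to the same dimensions.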
config.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "_name_or_path": "BritishWerewolf/IS-Net-Anime",
+     "model_type": "u2net",
+     "architectures": [
+         "U2NetModel"
+     ],
+     "input_name": "img",
+     "input_shape": [1, 3, 1024, 1024],
+     "output_composite": "mask",
+     "output_names": [
+         "mask"
+     ],
+     "output_shape": [1, 1024, 1024]
+ }
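The config above declares the model's expected tensor geometry: a batched NCHW input of `[1, 3, 1024, 1024]` under the name `img`, and a single `mask` output of `[1, 1024, 1024]`. A small sketch of how a caller might validate tensor dimensions against these declared shapes; `checkShape` is a hypothetical helper, not part of the repository:

```javascript
// Shapes as declared in config.json.
const config = {
    input_name: 'img',
    input_shape: [1, 3, 1024, 1024],
    output_names: ['mask'],
    output_shape: [1, 1024, 1024],
};

// Compare a tensor's dims against an expected shape, element by element.
function checkShape(dims, expected) {
    return dims.length === expected.length &&
        dims.every((d, i) => d === expected[i]);
}

console.log(checkShape([1, 3, 1024, 1024], config.input_shape)); // → true
console.log(checkShape([1, 512, 512], config.output_shape));     // → false
```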
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f15622d853e8260172812b657053460e20806f04b9e05147d49af7bed31a6e99
+ size 176069933
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
+ {
+     "processor_class": "U2NetProcessor",
+     "image_processor_type": "U2NetImageProcessor",
+     "do_convert_rgb": true,
+     "do_normalize": true,
+     "do_pad": true,
+     "do_rescale": true,
+     "do_resize": true,
+     "keep_aspect_ratio": true,
+     "image_mean": [
+         0.485,
+         0.456,
+         0.406
+     ],
+     "image_std": [
+         1.0,
+         1.0,
+         1.0
+     ],
+     "pad_size": {
+         "width": 1024,
+         "height": 1024
+     },
+     "size": {
+         "longest_edge": 1024
+     }
+ }
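The preprocessor config above implies a pipeline of: resize so the longest edge is 1024 while keeping aspect ratio, pad to a 1024×1024 canvas, rescale pixel values to [0, 1], then normalise per channel with the given mean and a std of 1.0. A dependency-free sketch of the arithmetic, assuming these semantics; both helper functions are illustrative, not part of the processor's API:

```javascript
// Resize geometry implied by "size": { "longest_edge": 1024 } with
// "keep_aspect_ratio": true — scale both edges by the same factor.
function resizeToLongestEdge(width, height, longestEdge = 1024) {
    const scale = longestEdge / Math.max(width, height);
    return {
        width: Math.round(width * scale),
        height: Math.round(height * scale),
    };
}

// Per-channel value transform implied by do_rescale + do_normalize:
// (pixel / 255 - mean) / std.
function normalizePixel(value, mean, std) {
    return (value / 255 - mean) / std;
}

console.log(resizeToLongestEdge(2048, 1024)); // → { width: 1024, height: 512 }
console.log(normalizePixel(255, 0.485, 1.0)); // ≈ 0.515
```

With `image_std` set to all ones, normalisation reduces to subtracting the ImageNet channel means from the rescaled pixel values.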