Update model card
README.md CHANGED

@@ -4,6 +4,7 @@ license: apache-2.0
 tags:
 - text-classfication
 - int8
+- neural-compressor
 - Intel® Neural Compressor
 - PostTrainingStatic
 datasets:

@@ -46,18 +47,20 @@ The same model is provided in two different formats: PyTorch and ONNX.
 | Primary intended users | Anyone |
 | Out-of-scope uses | This model is already fine-tuned and quantized to INT8. It is not suitable for further fine-tuning in this form. To fine-tune your own model, you can start with [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). |
 
-#### Load PyTorch model with Optimum
+#### Load the PyTorch model with Optimum Intel
 ```python
-from optimum.intel.neural_compressor
-
-
+from optimum.intel.neural_compressor import INCModelForSequenceClassification
+
+model_id = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
+int8_model = INCModelForSequenceClassification.from_pretrained(model_id)
 ```
 
-#### Load ONNX model:
+#### Load the ONNX model with Optimum:
 ```python
 from optimum.onnxruntime import ORTModelForSequenceClassification
-
-
+
+model_id = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
+int8_model = ORTModelForSequenceClassification.from_pretrained(model_id)
 ```
 
 | Factors | Description |

@@ -109,4 +112,4 @@ model = ORTModelForSequenceClassification.from_pretrained(
 year = {2022},
 url = {https://huggingface.co/Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static},
 }
-```
+```
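
For reference, a minimal sketch of running inference with the quantized checkpoint loaded as in the snippets above. The `AutoTokenizer` call, the `torch.no_grad()` wrapper, and the `id2label` lookup are standard `transformers` usage assumed here, not part of this model card; the ONNX path is analogous, swapping in `ORTModelForSequenceClassification` from `optimum.onnxruntime`.

```python
import torch
from transformers import AutoTokenizer
from optimum.intel.neural_compressor import INCModelForSequenceClassification

model_id = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"

# Load the tokenizer and the statically quantized INT8 model (assumed standard usage).
tokenizer = AutoTokenizer.from_pretrained(model_id)
int8_model = INCModelForSequenceClassification.from_pretrained(model_id)

# Run a single sentiment prediction.
inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(int8_model.config.id2label[predicted_id])  # "POSITIVE" or "NEGATIVE"
```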