---
license: mit
datasets:
- tsac
language:
- ar
---

This is a converted version of [InstaDeep's](https://huggingface.co/InstaDeepAI) [TunBERT](https://github.com/instadeepai/tunbert/), ported from NeMo to safetensors.

Make sure to read the original model [license](https://github.com/instadeepai/tunbert/blob/main/LICENSE).

<details>

<summary>architectural changes</summary>

## original model head

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/b-uXLwsi4n1Tc7-OtHe9b.png)

## this model head

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/xG-tOQscrvxb4wQm_2n-r.png)

</details>

# how to load the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
```

# how to use the model

```python
text = "[insert text here]"

inputs = tokenizer(text, return_tensors="pt")
output = model(**inputs)
```
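
Assuming the custom head returns a standard `output.logits` tensor of raw class scores, those scores can be turned into probabilities and a predicted label with a softmax and an argmax. The sketch below uses made-up logits (not real model output) so it stands alone; in practice you would start from `output.logits[0].tolist()`:

```python
import math

# hypothetical logits for two classes, e.g. from output.logits[0].tolist()
logits = [0.3, 1.2]

# softmax: convert raw scores into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# argmax: index of the most probable class
label = max(range(len(probs)), key=probs.__getitem__)
```

The mapping from label index to class name depends on the model's config (`model.config.id2label`), so check it rather than assuming an order.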

**IMPORTANT**:

* Make sure to pass `trust_remote_code=True` when loading, since the model relies on custom modeling code.

* Avoid using the `pipeline` method; call the tokenizer and model directly as shown above.