Update README.md
README.md CHANGED

@@ -98,7 +98,14 @@ widget:
 
 # BERT base trained on 500k Arabic NLI triplets
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02).
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02).
+It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search,
+paraphrase mining, text classification, clustering, and more.
+Most of the contents of this card are auto-generated. The dataset used was normalized before training, so it is recommended to normalize your queries and documents:
+```python
+from unicodedata import normalize
+query_n = normalize('NFKC', query)
+```
 
 ## Model Details
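For reference, a minimal end-to-end sketch of the workflow the added text describes (NFKC-normalize inputs, then encode with sentence-transformers). The repo id passed to `SentenceTransformer` is a placeholder, not the model's actual id, and the Arabic example strings are illustrative only:

```python
from unicodedata import normalize

from sentence_transformers import SentenceTransformer, util

# Placeholder repo id -- substitute the actual Hugging Face id of this model.
model = SentenceTransformer("username/arabert-nli-500k-triplets")

docs = ["الطقس جميل اليوم.", "القطار يصل في الساعة الثامنة."]
query = "كيف هو الطقس اليوم؟"

# Apply the same NFKC normalization that was applied to the training data.
docs_n = [normalize("NFKC", d) for d in docs]
query_n = normalize("NFKC", query)

# Encode into 768-dimensional embeddings and rank documents by cosine similarity.
doc_emb = model.encode(docs_n)
query_emb = model.encode(query_n)
scores = util.cos_sim(query_emb, doc_emb)
print(scores)
```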