Update README.md
README.md
```diff
@@ -2628,7 +2628,7 @@ The model is further trained on Jina AI's collection of more than 400 millions o
 These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
 
 The embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.
-This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search
+This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.
 
 With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
 Additionally, we provide the following embedding models:
```
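The README text in this hunk credits ALiBi for letting a model trained at 512 tokens extrapolate to 8k-token sequences. As a hedged illustration of that mechanism, here is a minimal sketch of the linear attention bias from the original causal ALiBi formulation; it is not the model's actual implementation (an encoder like this one uses a bidirectional variant), and it assumes the head count is a power of two as in the paper's slope schedule:

```python
# Sketch of ALiBi (Attention with Linear Biases), causal form.
# Instead of position embeddings, each attention head adds a fixed,
# distance-proportional penalty to its logits, so longer sequences
# than those seen in training still get sensible (more negative) biases.

def alibi_slopes(num_heads):
    # Per-head slopes form a geometric sequence: 2^-1, 2^-2, ... for 8 heads.
    # Assumes num_heads is a power of two (the paper's simple case).
    return [2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)]

def alibi_bias(seq_len, slope):
    # Lower-triangular bias matrix: position i attends to j <= i with a
    # penalty of slope * (i - j); the diagonal (distance 0) gets no penalty.
    return [[-slope * (i - j) for j in range(i + 1)] for i in range(seq_len)]

slopes = alibi_slopes(8)
bias_head0 = alibi_bias(4, slopes[0])
```

Because the bias grows linearly with token distance regardless of sequence length, the same slopes apply unchanged at 8k tokens, which is the extrapolation property the README points to.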