bwang0911 committed
Commit 92252ab
1 Parent(s): 12ab0d4

chore: update readme

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -21528,7 +21528,7 @@ model-index:
 </p>
 
 <p align="center">
-<b>Jina Embedding V3: A Multilingual Multi-Task Embedding Model</b>
+<b>jina-embeddings-v3: Multilingual Embeddings With Task LoRA</b>
 </p>
 
 ## Quick Start
@@ -21541,8 +21541,8 @@ The easiest way to start using `jina-embeddings-v3` is with the [Jina Embedding
 
 `jina-embeddings-v3` is a **multilingual multi-task text embedding model** designed for a variety of NLP applications.
 Based on the [Jina-XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-flash-implementation),
-this model supports [Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2104.09864) to handle long input sequences up to **8192 tokens**.
-Additionally, it features 5 [LoRA](https://arxiv.org/abs/2106.09685) adapters to generate task-specific embeddings efficiently.
+this model supports Rotary Position Embeddings to handle long input sequences up to **8192 tokens**.
+Additionally, it features 5 LoRA adapters to generate task-specific embeddings efficiently.
 
 ### Key Features:
 - **Extended Sequence Length:** Supports up to 8192 tokens with RoPE.
@@ -21560,11 +21560,6 @@ While the foundation model supports 89 languages, we've focused our tuning effor
 Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Norwegian, Polish, Portuguese, Romanian,
 Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu,** and **Vietnamese.**
 
-
-## Data & Parameters
-
-The data and training details are described in the technical report (coming soon).
-
 ## Usage
 
 **<details><summary>Apply mean pooling when integrating the model.</summary>**
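The "Usage" section this diff leaves intact recommends mean pooling when integrating the model directly. A minimal sketch of what that looks like with `transformers`, assuming the custom Jina-XLM-RoBERTa code exposes a standard `last_hidden_state` (the `mean_pool` helper here is illustrative, not part of the repo):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# trust_remote_code=True is required because the repo ships a custom architecture.
tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3", trust_remote_code=True)

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: average token embeddings, ignoring padding via the attention mask.
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

texts = ["How is the weather today?", "Wie ist das Wetter heute?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=8192, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
embeddings = mean_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
```

The mask-weighted average matters for batched inputs: without it, padding tokens would dilute the embeddings of shorter sequences.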