yjoonjang commited on
Commit
7656f9a
β€’
1 Parent(s): 56d1ace

Update README.md

Files changed (1)
  1. README.md +57 -172
README.md CHANGED
@@ -1,199 +1,84 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->



- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]

- #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ license: mit
+ datasets:
+ - nlpai-lab/ko-triplet-v1.0
+ language:
+ - ko
+ - en
+ base_model:
+ - intfloat/multilingual-e5-large
+ pipeline_tag: sentence-similarity
  ---

+ # KoE5

+ **KoE5: A New Dataset and Model for Improving Korean Embedding Performance** - μž₯μ˜μ€€, μ†μ€€μ˜, λ°•μ°¬μ€€, 이병ꡬ, μ΄νƒœλ―Ό, μž„ν¬μ„; accepted as an oral presentation at HCLT 2024

+ This model is fine-tuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the [nlpai-lab/ko-triplet-v1.0](https://huggingface.co/datasets/nlpai-lab/ko-triplet-v1.0) dataset.

  ## Uses
+ ### Direct Usage (Sentence Transformers)

+ First install the Sentence Transformers library:

+ ```bash
+ pip install -U sentence-transformers
+ ```

+ Then you can load this model and run inference:
+ ```python
+ from sentence_transformers import SentenceTransformer

+ # Download from the πŸ€— Hub
+ model = SentenceTransformer("nlpai-lab/KoE5")

+ # Run inference
+ sentences = [
+     'query: ν—Œλ²•κ³Ό 법원쑰직법은 μ–΄λ–€ 방식을 톡해 기본ꢌ 보μž₯ λ“±μ˜ λ‹€μ–‘ν•œ 법적 λͺ¨μƒ‰μ„ κ°€λŠ₯ν•˜κ²Œ ν–ˆμ–΄',
+     'passage: 4. μ‹œμ‚¬μ κ³Ό κ°œμ„ λ°©ν–₯ μ•žμ„œ μ‚΄νŽ΄λ³Έ 바와 같이 우리 ν—Œλ²•κ³Ό γ€Œλ²•μ›μ‘°μ§λ²•γ€μ€ λŒ€λ²•μ› ꡬ성을 λ‹€μ–‘ν™”ν•˜μ—¬ 기본ꢌ 보μž₯κ³Ό 민주주의 확립에 μžˆμ–΄ 닀각적인 법적 λͺ¨μƒ‰μ„ κ°€λŠ₯ν•˜κ²Œ ν•˜λŠ” 것을 κ·Όλ³Έ κ·œλ²”μœΌλ‘œ ν•˜κ³  μžˆλ‹€. λ”μš±μ΄ ν•©μ˜μ²΄λ‘œμ„œμ˜ λŒ€λ²•μ› 원리λ₯Ό μ±„νƒν•˜κ³  μžˆλŠ” 것 μ—­μ‹œ κ·Έ κ΅¬μ„±μ˜ 닀양성을 μš”μ²­ν•˜λŠ” κ²ƒμœΌλ‘œ ν•΄μ„λœλ‹€. 이와 같은 κ΄€μ μ—μ„œ λ³Ό λ•Œ ν˜„μ§ 법원μž₯κΈ‰ κ³ μœ„λ²•κ΄€μ„ μ€‘μ‹¬μœΌλ‘œ λŒ€λ²•μ›μ„ κ΅¬μ„±ν•˜λŠ” 관행은 κ°œμ„ ν•  ν•„μš”κ°€ μžˆλŠ” κ²ƒμœΌλ‘œ 보인닀.',
+     'passage: β–‘ μ—°λ°©ν—Œλ²•μž¬νŒμ†ŒλŠ” 2001λ…„ 1μ›” 24일 5:3의 λ‹€μˆ˜κ²¬ν•΄λ‘œ γ€Œλ²•μ›μ‘°μ§λ²•γ€ 제169μ‘° 제2문이 ν—Œλ²•μ— ν•©μΉ˜λœλ‹€λŠ” νŒκ²°μ„ λ‚΄λ ΈμŒ β—‹ 5인의 λ‹€μˆ˜ μž¬νŒκ΄€μ€ μ†Œμ†‘κ΄€κ³„μΈμ˜ 인격ꢌ 보호, κ³΅μ •ν•œ 절차의 보μž₯κ³Ό 방해받지 μ•ŠλŠ” 법과 진싀 발견 등을 근거둜 ν•˜μ—¬ ν…”λ ˆλΉ„μ „ μ΄¬μ˜μ— λŒ€ν•œ μ ˆλŒ€μ μΈ κΈˆμ§€λ₯Ό ν—Œλ²•μ— ν•©μΉ˜ν•˜λŠ” κ²ƒμœΌλ‘œ λ³΄μ•˜μŒ β—‹ κ·ΈλŸ¬λ‚˜ λ‚˜λ¨Έμ§€ 3인의 μž¬νŒκ΄€μ€ ν–‰μ •λ²•μ›μ˜ μ†Œμ†‘μ ˆμ°¨λŠ” νŠΉλ³„ν•œ 인격ꢌ 보호의 이읡도 μ—†μœΌλ©°, ν…”λ ˆλΉ„μ „ 곡개주의둜 인해 법과 진싀 발견의 과정이 μ–Έμ œλ‚˜ μœ„νƒœλ‘­κ²Œ λ˜λŠ” 것은 μ•„λ‹ˆλΌλ©΄μ„œ λ°˜λŒ€μ˜κ²¬μ„ μ œμ‹œν•¨ β—‹ μ™œλƒν•˜λ©΄ ν–‰μ •λ²•μ›μ˜ μ†Œμ†‘μ ˆμ°¨μ—μ„œλŠ” μ†Œμ†‘λ‹Ήμ‚¬μžκ°€ 개인적으둜 직접 심리에 μ°Έμ„ν•˜κΈ°λ³΄λ‹€λŠ” λ³€ν˜Έμ‚¬κ°€ μ°Έμ„ν•˜λŠ” κ²½μš°κ°€ 많으며, μ‹¬λ¦¬λŒ€μƒλ„ μ‚¬μ‹€λ¬Έμ œκ°€ μ•„λ‹Œ 법λ₯ λ¬Έμ œκ°€ λŒ€λΆ€λΆ„μ΄κΈ° λ•Œλ¬Έμ΄λΌλŠ” κ²ƒμž„ β–‘ ν•œνŽΈ, μ—°λ°©ν—Œλ²•μž¬νŒμ†ŒλŠ” γ€Œμ—°λ°©ν—Œλ²•μž¬νŒμ†Œλ²•γ€(Bundesverfassungsgerichtsgesetz: BVerfGG) 제17a쑰에 따라 μ œν•œμ μ΄λ‚˜λ§ˆ μž¬νŒμ— λŒ€ν•œ 방솑을 ν—ˆμš©ν•˜κ³  있음 β—‹ γ€Œμ—°λ°©ν—Œλ²•μž¬νŒμ†Œλ²•γ€ 제17μ‘°μ—μ„œ γ€Œλ²•μ›μ‘°μ§λ²•γ€ 제14절 내지 제16절의 κ·œμ •μ„ μ€€μš©ν•˜λ„λ‘ ν•˜κ³  μžˆμ§€λ§Œ, λ…ΉμŒμ΄λ‚˜ μ΄¬μ˜μ„ ν†΅ν•œ μž¬νŒκ³΅κ°œμ™€ κ΄€λ ¨ν•˜μ—¬μ„œλŠ” γ€Œλ²•μ›μ‘°μ§λ²•γ€κ³Ό λ‹€λ₯Έ λ‚΄μš©μ„ κ·œμ •ν•˜κ³  있음',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # (3, 1024)

+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities)
+ # tensor([[1.0000, 0.6721, 0.3897],
+ #         [0.6721, 1.0000, 0.3740],
+ #         [0.3897, 0.3740, 1.0000]])
+ ```
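If you prefer plain πŸ€— Transformers over Sentence Transformers, the embeddings can presumably be reproduced with average pooling over the last hidden states, following the recipe documented on the multilingual-e5-large base model card. This is a sketch under that assumption and has not been verified against this checkpoint:

```python
import torch
import torch.nn.functional as F
from torch import Tensor


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    """Mean-pool the last hidden states over non-padding tokens (E5-style)."""
    # Zero out padding positions, then average over the sequence dimension.
    masked = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return masked.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Usage (downloads the checkpoint):
# from transformers import AutoModel, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("nlpai-lab/KoE5")
# model = AutoModel.from_pretrained("nlpai-lab/KoE5")
# batch = tokenizer(["query: 날씨가 μ–΄λ•Œ?"], padding=True, truncation=True,
#                   max_length=512, return_tensors="pt")
# with torch.no_grad():
#     outputs = model(**batch)
# embeddings = F.normalize(
#     average_pool(outputs.last_hidden_state, batch["attention_mask"]), p=2, dim=1)
```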
+ ## FAQ

+ **1. Do I need to add the prefix "query: " and "passage: " to input texts?**

+ Yes, this is how the model was trained; without the prefixes you will see a performance degradation.

+ Here are some rules of thumb:
+ - Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
+ - Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.
+ - Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
+ ## Citation

+ If you find our paper or models helpful, please consider citing them as follows:

+ ```bibtex
+ @article{wang2024multilingual,
+   title={Multilingual E5 Text Embeddings: A Technical Report},
+   author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+   journal={arXiv preprint arXiv:2402.05672},
+   year={2024}
+ }
+ ```

+ ## Limitations

+ Long texts will be truncated to at most 512 tokens.
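One way to stay under that budget is to split long documents before encoding them. The helper below is a hypothetical sketch that chunks by whitespace-separated word count as a rough proxy for the token limit; for exact budgeting you would count tokens with the model's own tokenizer (e.g. `len(model.tokenizer(t)["input_ids"])`) instead:

```python
from typing import List


def chunk_text(text: str, max_units: int = 400) -> List[str]:
    """Split a long text into chunks of at most `max_units` words.

    Word count is only an approximation of the 512-token limit; swap in the
    model's tokenizer for an exact count.
    """
    words = text.split()
    return [" ".join(words[i:i + max_units]) for i in range(0, len(words), max_units)]


chunks = chunk_text("단어 " * 1000, max_units=400)
print(len(chunks))  # 3
```

Each chunk can then be encoded with the "passage: " prefix and scored against the query individually, keeping e.g. the maximum similarity per document.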