maveriq committed
Commit
7a2793a
1 Parent(s): 9688923

Update README.md

Files changed (1)
  1. README.md +39 -1
README.md CHANGED
@@ -20,7 +20,45 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+task_categories:
+- feature-extraction
+language:
+- en
+pretty_name: Multitask Embeddings Data with Instructions (MEDI)
+size_categories:
+- 1M<n<10M
 ---
+
+# Disclaimer
+I am not the author of the dataset or the paper; I have uploaded it here only for ease of availability. For all further information, please refer to the [website](https://instructor-embedding.github.io/).
+
 # Dataset Card for "medi"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+The MEDI data is a collection of 330 datasets from Super-NI (Super-NaturalInstructions), the sentence-transformers embedding training data, and KILT, spanning a wide range of domains and tasks.
+
+If you use the dataset, please cite the papers below (Su et al., 2022; Wang et al., 2022; Petroni et al., 2021) as well as the sentence-transformers embedding training data at https://huggingface.co/datasets/sentence-transformers/embedding-training-data.
+
+# Citation Information
+
+```
+@inproceedings{INSTRUCTOR,
+  title={One Embedder, Any Task: Instruction-Finetuned Text Embeddings},
+  author={Hongjin Su and Weijia Shi and Jungo Kasai and Yizhong Wang and Yushi Hu and Mari Ostendorf and Wen-tau Yih and Noah A. Smith and Luke Zettlemoyer and Tao Yu},
+  url={https://arxiv.org/abs/2212.09741},
+  year={2022},
+}
+
+@inproceedings{wang2022super,
+  title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ Tasks},
+  author={Wang, Yizhong and Mishra, Swaroop and Alipoormolabashi, Pegah and Kordi, Yeganeh and Mirzaei, Amirreza and Arunkumar, Anjana and Ashok, Arjun and Dhanasekaran, Arut Selvan and Naik, Atharva and Stap, David and others},
+  year={2022},
+  organization={EMNLP}
+}
+
+@article{petroni2020kilt,
+  title={KILT: a benchmark for knowledge intensive language tasks},
+  author={Petroni, Fabio and Piktus, Aleksandra and Fan, Angela and Lewis, Patrick and Yazdani, Majid and De Cao, Nicola and Thorne, James and Jernite, Yacine and Karpukhin, Vladimir and Maillard, Jean and others},
+  journal={arXiv preprint arXiv:2009.02252},
+  year={2020}
+}
+```
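
As a usage note for the updated card: the YAML front matter above declares a single `train` split stored at `data/train-*`, so the dataset can be loaded directly with the Hugging Face `datasets` library. This is a minimal sketch, not part of the commit; the repo id `maveriq/medi` is an assumption based on the committer's namespace.

```python
# Minimal loading sketch (assumes `pip install datasets` and that the
# dataset is published under the hypothetical repo id "maveriq/medi").
from datasets import load_dataset

# The card's config points data_files at data/train-* for the "train" split,
# so no extra configuration arguments are needed here.
medi = load_dataset("maveriq/medi", split="train")

print(medi)     # summary: feature names and number of rows
print(medi[0])  # inspect the first training example
```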