---
dataset_info:
  features:
  - name: query
    sequence: string
  - name: pos
    sequence: string
  - name: neg
    sequence: string
  - name: task_name
    dtype: string
  splits:
  - name: train
    num_bytes: 2572523114
    num_examples: 1435000
  download_size: 1232020798
  dataset_size: 2572523114
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- feature-extraction
language:
- en
pretty_name: Multitask Embeddings Data with Instructions (MEDI)
size_categories:
- 1M<n<10M
---

# Disclaimer
I am not the author of the dataset or the paper; I have uploaded it here only for ease of availability. For all information, please refer to the [website](https://instructor-embedding.github.io/).

# Dataset Card for "medi"

The MEDI data consists of a collection of 330 datasets drawn from Super-NI (Super-NaturalInstructions), the sentence-transformers embedding training data, and KILT, spanning a wide range of domains and tasks.
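Each example follows the feature schema declared in the header above: `query`, `pos`, and `neg` are string sequences, and `task_name` is a plain string identifying the source task. A minimal loading sketch with 🤗 `datasets` is shown below; the repository path is a placeholder, so substitute the path of this upload.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual path of this upload.
medi = load_dataset("username/medi", split="train")

example = medi[0]
print(example["task_name"])  # source task identifier (string)
print(example["query"])      # list of strings for the query side
print(example["pos"])        # list of strings for the positive example
print(example["neg"])        # list of strings for the negative example
```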

If you use the dataset, please cite the papers below (Su et al., 2022; Wang et al., 2022; Petroni et al., 2021), as well as the sentence-transformers embedding training data at https://huggingface.co/datasets/sentence-transformers/embedding-training-data.

# Citation Information

```
@inproceedings{INSTRUCTOR,
  title={One Embedder, Any Task: Instruction-Finetuned Text Embeddings},
  author={Su, Hongjin and Shi, Weijia and Kasai, Jungo and Wang, Yizhong and Hu, Yushi and Ostendorf, Mari and Yih, Wen-tau and Smith, Noah A. and Zettlemoyer, Luke and Yu, Tao},
  url={https://arxiv.org/abs/2212.09741},
  year={2022},
}

@inproceedings{wang2022super,
  title={Super-naturalinstructions: generalization via declarative instructions on 1600+ tasks},
  author={Wang, Yizhong and Mishra, Swaroop and Alipoormolabashi, Pegah and Kordi, Yeganeh and Mirzaei, Amirreza and Arunkumar, Anjana and Ashok, Arjun and Dhanasekaran, Arut Selvan and Naik, Atharva and Stap, David and others},
  year={2022},
  organization={EMNLP}
}

@article{petroni2020kilt,
  title={KILT: a benchmark for knowledge intensive language tasks},
  author={Petroni, Fabio and Piktus, Aleksandra and Fan, Angela and Lewis, Patrick and Yazdani, Majid and De Cao, Nicola and Thorne, James and Jernite, Yacine and Karpukhin, Vladimir and Maillard, Jean and others},
  journal={arXiv preprint arXiv:2009.02252},
  year={2020}
}
```