- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/ekinakyurek/FTRACE
- **Repository:** https://github.com/ekinakyurek/influence
- **Paper:** https://arxiv.org/pdf/2205.11482.pdf
- **Point of Contact:** [email protected]
- **Size of downloaded dataset files:** 113.7 MB
- **Size of the generated dataset:** 1006.6 MB
- **Total amount of disk used:** 1120.3 MB

### Dataset Summary

FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model's predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries for which we trace knowledge are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TREx corpus (Elsahar et al., 2018). We annotate the extracted sentences with their stated facts, and these facts can be matched against the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. However, the same data can be used in other formats, for example auto-regressive completion, by processing the `input_pretokenized` and `targets_pretokenized` fields.
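
A minimal sketch of that conversion, assuming the data loads via the Hugging Face `datasets` library and that the masks use a T5-style `<extra_id_0>` sentinel; the configuration name `"queries"` is an assumption, not something stated on this card:

```python
from datasets import load_dataset

# Config name "queries" is assumed here; check the card for the actual
# configurations (e.g. the query set vs. the TREx abstracts).
ds = load_dataset("ekinakyurek/FTRACE", "queries", split="train")

ex = ds[0]
masked_input = ex["input_pretokenized"]     # e.g. "Paris is the capital of <extra_id_0> ."
masked_target = ex["targets_pretokenized"]  # e.g. "<extra_id_0> France"

# For auto-regressive completion, keep the text before the mask as the
# prompt and treat the unmasked span as the continuation (the sentinel
# token below is an assumption based on T5-style masking).
SENTINEL = "<extra_id_0>"
prompt = masked_input.split(SENTINEL)[0].rstrip()
completion = masked_target.replace(SENTINEL, "").strip()
print(f"{prompt} -> {completion}")
```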