Ekin Akyürek committed · Commit c269187 · 1 Parent(s): 25e807c

update readme

Files changed (1): README.md (+8 −7)
```diff
@@ -47,15 +47,16 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** [https://huggingface.co/datasets/ekinakyurek/FTRACE](https://huggingface.co/datasets/ekinakyurek/FTRACE)
-- **Repository:** [https://github.com/ekinakyurek/influence](https://github.com/ekinakyurek/influence)
-- **Paper:** [https://arxiv.org/pdf/2205.11482.pdf](https://arxiv.org/pdf/2205.11482.pdf)
-- **Point of Contact:** [email protected]
-- **Size of downloaded dataset files:** 113.7 MB
-- **Size of the generated dataset:** 1006.6 MB
-- **Total amount of disk used:** 1120.3 MB
+- **Homepage:** https://huggingface.co/datasets/ekinakyurek/ftrace
+- **Repository:** https://github.com/ekinakyurek/influence
+- **Paper:** https://arxiv.org/pdf/2205.11482.pdf
+- **Point of Contact:** [email protected]
+- **Size of downloaded dataset files:** 113.7 MB
+- **Size of the generated dataset:** 1006.6 MB
+- **Total amount of disk used:** 1120.3 MB
 
 ### Dataset Summary
+[PAPER]
 FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model’s predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries whose knowledge we trace are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TREx corpus (Elsahar et al., 2018). We annotate the extracted sentences with their stated facts, and these facts can be matched with the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. However, the same data can be used in other formats, for example auto-regressive completion, by processing the `input_pretokenized` and `targets_pretokenized` fields.
 ### Supported Tasks and Leaderboards
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
```
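
Since the summary points to the `input_pretokenized` and `targets_pretokenized` fields, here is a minimal sketch of the two usage modes it describes: reading an example as a masked-LM pair, and splicing the target back in for auto-regressive completion. The `"queries"` config name and the T5-style `<extra_id_0>` sentinel are illustrative assumptions, not confirmed by this diff.

```python
# Minimal sketch, assuming a "queries" config and T5-style <extra_id_0>
# mask sentinels; both are assumptions for illustration.
from datasets import load_dataset

queries = load_dataset("ekinakyurek/ftrace", "queries", split="train")
example = queries[0]

# As shipped: a masked language modeling pair.
masked_input = example["input_pretokenized"]  # e.g. "The capital of France is <extra_id_0> ."
target = example["targets_pretokenized"]      # e.g. "<extra_id_0> Paris"
print(masked_input, "->", target)

# Reformatted: an auto-regressive completion, with the answer
# spliced back into the input in place of the mask sentinel.
answer = target.replace("<extra_id_0>", "").strip()
completion = masked_input.replace("<extra_id_0>", answer)
print(completion)
```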