Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ When researching LLMs [tokenized similarly to GPT-2/GPT-3](https://platform.open
 - Fast searches for common phrases containing particular tokens or substrings (and in particular sequence positions).
 - Showing the effects of training set n-gram frequency.
 
-The authors used this dataset to show that sparse auto-encoders are biased toward reconstructing the most common n-grams.
+The authors (Thomas Dooms and Dan Wilhelm) used this dataset to show that sparse auto-encoders are biased toward reconstructing the most common n-grams.
 
 ## Loading the Dataset
 