---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 100K<n<1M
task_categories:
- conversational
- text-generation
dataset_info:
- config_name: default
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 1921588254
num_examples: 140469
download_size: 764073630
dataset_size: 1921588254
- config_name: grammar_filtered
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: char_name
dtype: string
- name: text
dtype: string
- name: username
dtype: string
- name: reply
dtype: string
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 371438765
num_examples: 27053
download_size: 166606326
dataset_size: 371438765
- config_name: high_confidence
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 949419370.7676569
num_examples: 69403
download_size: 386317057
dataset_size: 949419370.7676569
- config_name: pruned
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 782484734.2032762
num_examples: 57200
download_size: 326987882
dataset_size: 782484734.2032762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: grammar_filtered
data_files:
- split: train
path: grammar_filtered/train-*
- config_name: high_confidence
data_files:
- split: train
path: high_confidence/train-*
- config_name: pruned
data_files:
- split: train
path: pruned/train-*
tags:
- roleplay
- not-for-all-audiences
---
Data scraped from [roleplayerguild](https://www.roleplayerguild.com/) and parsed into prompts with a conversation history and associated character bio. Thanks to an anonymous internet stranger for the original scrape.
Because usernames can be associated with multiple character biographies, character assignment is somewhat fuzzy. The `char_confidence` feature reflects how likely the assignment is to be correct. Not every post in the conversation history has an associated character name; the `has_nameless` column flags rows where this occurs.
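As a sketch of how these two columns can be used for your own filtering (the repo id below is a placeholder for this dataset's actual Hub path, and the 0.9 threshold is an arbitrary illustration, not a recommended value):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
ds = load_dataset("user/roleplayerguild", "default", split="train")

# Keep rows with a confident character assignment and no nameless posts
# in the conversation history.
filtered = ds.filter(
    lambda row: row["char_confidence"] >= 0.9 and not row["has_nameless"]
)
print(len(filtered))
```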
Each row should fit into 4096 Llama tokens, depending on your prompt format; there is built-in slack of 128 tokens plus 8 tokens per message.
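A quick sketch of that arithmetic, under my reading of the slack rule (the function name is made up for illustration): the overhead reserve grows with the number of messages, and the text itself occupies at most the remainder of the window.

```python
# Of the 4096-token window, 128 tokens plus 8 per message are reserved
# as slack for prompt-format overhead.
def max_text_tokens(num_messages: int, window: int = 4096) -> int:
    return window - 128 - 8 * num_messages

print(max_text_tokens(12))  # 3872 tokens left for the text of a 12-message row
```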
There are a few configurations available. I *highly* recommend not using the default configuration, as it contains a lot of questionable-quality data. The options, in order of increasing usefulness:
* `default` - an ocean of garbage with some gems
* `high_confidence` - only entries with no nameless posts that are highly likely to have a correct `char_name`/`bio` assignment
* `pruned` - further filtered from `high_confidence` to remove common types of junk replies
* `grammar_filtered` - run through a grammar checker to remove rows with too many mistakes
The `grammar_filtered` configuration is almost certainly the one you want, unless you plan to do your own processing and filtering.
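Loading that configuration with 🤗 Datasets might look like this (again, the repo id is a placeholder for this dataset's actual Hub path):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
ds = load_dataset("user/roleplayerguild", "grammar_filtered", split="train")

row = ds[0]
print(row["char_name"], row["char_confidence"])
print(row["bio"][:200])  # first 200 characters of the character bio
```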