---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 100K<n<1M
task_categories:
- conversational
- text-generation
dataset_info:
- config_name: default
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 1921588254
num_examples: 140469
download_size: 764073630
dataset_size: 1921588254
- config_name: grammar_filtered
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: char_name
dtype: string
- name: text
dtype: string
- name: username
dtype: string
- name: reply
dtype: string
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 371438765
num_examples: 27053
download_size: 166606326
dataset_size: 371438765
- config_name: high_confidence
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 949419370.7676569
num_examples: 69403
download_size: 386317057
dataset_size: 949419370.7676569
- config_name: pruned
features:
- name: username
dtype: string
- name: char_name
dtype: string
- name: bio
dtype: string
- name: context
list:
- name: text
dtype: string
- name: username
dtype: string
- name: char_name
dtype: string
- name: reply
dtype: string
- name: has_nameless
dtype: bool
- name: char_confidence
dtype: float64
splits:
- name: train
num_bytes: 782484734.2032762
num_examples: 57200
download_size: 326987882
dataset_size: 782484734.2032762
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: grammar_filtered
data_files:
- split: train
path: grammar_filtered/train-*
- config_name: high_confidence
data_files:
- split: train
path: high_confidence/train-*
- config_name: pruned
data_files:
- split: train
path: pruned/train-*
tags:
- roleplay
- not-for-all-audiences
---
Data scraped from roleplayerguild and parsed into prompts, each with a conversation history and an associated character bio. Thanks to an anonymous internet stranger for the original scrape.

As usernames can be associated with multiple character biographies, assignment of characters is a little fuzzy. The `char_confidence` feature reflects how likely this assignment is to be correct. Not all posts in the conversation history necessarily have an associated character name; the `has_nameless` column reflects this.
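
If you want to apply your own thresholds on these columns, here is a minimal sketch using the `datasets` library. The repository id and the 0.9 cut-off are placeholders, not part of the dataset:

```python
from datasets import load_dataset

# Placeholder repo id - substitute the actual path of this dataset on the Hub.
ds = load_dataset("user/roleplayerguild-prompts", split="train")

# Keep rows where the character assignment is likely correct and every post
# in the conversation history has an associated character name.
# The 0.9 threshold is arbitrary; tune it to taste.
filtered = ds.filter(
    lambda row: row["char_confidence"] >= 0.9 and not row["has_nameless"]
)
print(f"{len(filtered)} of {len(ds)} rows kept")
```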
Each row should fit into 4096 Llama tokens, depending on your prompt format - there's built-in slack of 128 tokens plus 8 per message.
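
If you want to double-check that budget against the tokenizer you actually train with, here is a rough sketch. The repo id and tokenizer are assumptions, and exactly which fields the slack covers is my own interpretation:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder identifiers - swap in the real dataset repo and your own tokenizer.
ds = load_dataset("user/roleplayerguild-prompts", split="train")
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def raw_token_count(row):
    # Tokens in the bio, every context message, and the reply, before any
    # prompt-format overhead (special tokens, role headers, etc.) is added.
    texts = [row["bio"]] + [m["text"] for m in row["context"]] + [row["reply"]]
    return sum(len(tok.encode(t, add_special_tokens=False)) for t in texts)

row = ds[0]
# One reading of the stated slack: 128 tokens overall plus 8 for each of the
# context messages and the reply.
budget = 4096 - 128 - 8 * (len(row["context"]) + 1)
print(raw_token_count(row), "raw tokens vs. budget of", budget)
```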
There are a few configurations available. I highly recommend not using the default configuration, as it contains a lot of questionable-quality data. The options, in order of increasing usefulness:

- `default` - ocean of garbage with some gems
- `high_confidence` - only entries with no nameless posts that are highly likely to be assigned a correct `char_name`/`bio`
- `pruned` - further filtered from `high_confidence` to remove common types of junk replies
- `grammar_filtered` - run through a grammar checker to remove rows with too many mistakes
The `grammar_filtered` configuration is almost certainly what you want to be using. (Unless you want to do your own processing and filtering.)
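
Loading the recommended configuration is just a matter of passing its name to `load_dataset`; again, the repository id below is a placeholder:

```python
from datasets import load_dataset

# Placeholder repo id - point this at the actual dataset repository.
ds = load_dataset("user/roleplayerguild-prompts", "grammar_filtered", split="train")

# grammar_filtered drops has_nameless, so the columns are:
# username, char_name, bio, context, reply, char_confidence
print(ds.column_names)
print(ds[0]["char_name"], "->", ds[0]["reply"][:80])
```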