epeters3 committed
Commit a84cf66
1 Parent(s): 3d89891

Add readme

Files changed (1): README.md (+41 -0)
# A More Natural PersonaChat

## Dataset Summary

This dataset is a true-cased version of the PersonaChat dataset by Zhang et al. (2018). The original PersonaChat dataset is all lower case and has extra whitespace around each clause- or sentence-separating punctuation mark. This version of the dataset has a more natural-language look, with sentence capitalization, proper-noun capitalization, and normalized whitespace. In addition, each dialogue turn includes a pool of distractor candidate responses, which can be used by a multiple-choice regularization loss during training.

## Languages

The text in the dataset is in English (**en**).

## Data Fields

Each instance of the dataset represents a conversational utterance made by a crowdworker while pretending to have a certain personality. Each instance has these fields:

| Field Name      | Datatype       | Description |
|-----------------|----------------|-------------|
| `conv_id`       | int            | A unique identifier for the instance's conversation. |
| `utterance_idx` | int            | The index of the instance in the conversation. |
| `personality`   | list of string | Sentences describing the personality of the current speaker. |
| `history`       | list of string | The conversation's utterances so far, alternating between the two speakers, with one utterance per turn. |
| `candidates`    | list of string | Distractor utterances together with the true utterance the speaker gave, given their personality and the conversation history so far. The true utterance is always the last element of this list. |

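For illustration, a single instance might look like the following. The values here are invented to show the schema, not actual rows from the dataset:

```python
# A hypothetical instance illustrating the schema (all values invented).
instance = {
    "conv_id": 42,
    "utterance_idx": 3,
    "personality": [
        "I love hiking in the mountains.",
        "I work as a librarian.",
    ],
    "history": [
        "Hi! How are you today?",
        "I'm great, I just got back from a hike.",
        "Nice! Do you hike often?",
    ],
    "candidates": [
        "My favorite color is blue.",          # distractor
        "I don't really like the outdoors.",   # distractor
        "Yes, every weekend if I can get away from the library.",  # true response
    ],
}

# The true response is always the final candidate.
true_response = instance["candidates"][-1]
```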

## Dataset Curation

The dataset was sourced from HuggingFace's version of the data used in the code for their ConvAI 2018 submission, which was described in their [blog article](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) on that submission. To produce this version, extra whitespace was removed, and a StanfordNLP [stanza](https://stanfordnlp.github.io/stanza/) NLP pipeline was used for part-of-speech tagging to identify proper nouns, which were then capitalized. The same pipeline was used for sentence segmentation, so the beginning of each sentence could be capitalized. Finally, all instances of the pronoun "I" were capitalized, along with its contractions.
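The whitespace and casing repairs can be approximated with the sketch below. This is a simplified illustration, not the actual curation script: it uses regular expressions and a naive punctuation-based sentence splitter, whereas the real pipeline used stanza for segmentation, and it omits the proper-noun step (which requires running stanza's POS tagger and upper-casing tokens tagged `PROPN`).

```python
import re

def naturalize(text: str) -> str:
    """Roughly undo PersonaChat's lower-casing and extra spacing (sketch)."""
    # Rejoin contractions: "i ' m" -> "i'm", "don ' t" -> "don't".
    text = re.sub(r"\s*'\s*", "'", text)
    # Drop the extra space before punctuation: "hike ." -> "hike.".
    text = re.sub(r"\s+([.,!?;:])", r"\1", text)
    # Collapse any remaining runs of whitespace.
    text = re.sub(r"\s+", " ", text).strip()
    # Capitalize the pronoun "i", including its contractions ("i'm", "i've").
    text = re.sub(r"\bi\b", "I", text)
    # Naive sentence segmentation: capitalize the first letter of the text and
    # of anything following ".", "!", or "?" (the real pipeline relied on
    # stanza's sentence segmenter instead).
    text = re.sub(r"(^|[.!?]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)
    return text

# Example: naturalize("hi ! i ' m great , i just got back from a hike .")
# -> "Hi! I'm great, I just got back from a hike."
```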