---
dataset_info:
  features:
  - name: question_id
    dtype: int64
  - name: title
    dtype: string
  - name: question_body
    dtype: string
  - name: question_type
    dtype: string
  - name: question_date
    dtype: string
  splits:
  - name: train
    num_bytes: 6558
    num_examples: 6
  - name: test
    num_bytes: 12055
    num_examples: 14
  download_size: 9290
  dataset_size: 18613
license: cc
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: staqt
size_categories:
- n<1K
---
# Dataset Card for "stackoverflow_question_types"

**NOTE:** This dataset is still under annotation.

## Dataset Description

Recent research has explored leveraging data from Stack Overflow (SO) to train large language models for programming-related tasks. However, users ask a wide range of questions on SO. The "stackoverflow question types" dataset consists of questions posted on SO that have been manually annotated with a type. Following a previous study, each question was annotated with the type capturing the main concern of the user who posted it. The questions were annotated with the following types:
- Need to know: Questions regarding the possibility or availability of (doing) something. These questions typically reveal a lack of knowledge or uncertainty about some aspect of the technology (e.g. the presence of a feature in an API or a language).
- How to do it: Questions that provide a scenario and ask how to implement it (sometimes with a given technology or API).
- Debug/corrective: Questions dealing with problems in the code under development, such as runtime errors and unexpected behaviour.
- Seeking different solutions: The questioner has working code yet seeks a different approach to doing the job.
- Conceptual: The question seeks to understand some aspect of programming (with or without code examples).
- Other: A question related to another aspect of programming, or not related to programming at all.
### Remarks

For this dataset, we are mainly interested in questions related to programming. For instance, consider a question in which the user is "trying to install Python-3.6.5 on a machine that does not have any package manager installed" and is facing issues. Because the problem is not related to programming itself, we would classify it as "other" rather than "debug".
Moreover, we note the following conceptual distinctions between the different categories:
- Need to know: the user asks "is it possible to do x"
- How to do it: the user wants to do "x" and knows it is possible, but has no clear idea of a solution; they want any solution that achieves "x".
- Debug: the user wants to do "x" and has a candidate solution "y", but it is not working; they seek a correction to "y".
- Seeking-different-solution: the user wants to do "x" and has already found a working solution "y", but seeks an alternative "z".
Sometimes it is hard to determine a user's true intention; the line separating the categories can be thin and subject to interpretation. Naturally, some questions may have multiple concerns (i.e. could correspond to multiple categories). However, this dataset mainly contains questions to which we could assign a single clear category. Currently, all annotated questions are a subset of the stackoverflow_python dataset.
### Languages

The currently annotated questions concern posts with the `python` tag. The questions are written in English.
## Dataset Structure

### Data Instances
[More Information Needed]
### Data Fields
- question_id: the unique id of the post
- title: the title of the question
- question_body: the (HTML) content of the question
- question_type: the assigned category/type/label, one of:
  - "needtoknow"
  - "howto"
  - "debug"
  - "seeking"
  - "conceptual"
  - "other"
- question_date: the date the question was posted
### Data Splits
[More Information Needed]
## Dataset Creation

### Annotations

#### Annotation process
Previous research has looked into mining natural language-code pairs from Stack Overflow; two notable works yielded the StaQC and CoNaLa datasets. Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at staqc and conala). Those questions were annotated as belonging to the "how to do it" category.
To ease the annotation procedure, we used the Argilla platform and multiple iterations of few-shot training with a SetFit model.
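
For context, a few-shot SetFit fine-tuning loop looks roughly like the sketch below. This is a minimal illustration, not the exact pipeline used for this dataset: the base checkpoint, the example texts, and the label encoding are all assumptions.

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# A handful of labeled examples (illustrative only; labels encoded as
# 0 = needtoknow, 1 = howto, 2 = debug for this sketch).
train_ds = Dataset.from_dict({
    "text": [
        "Is it possible to sort a dict by its values in Python?",
        "How do I read a CSV file into a list of dicts?",
        "My loop raises IndexError on the last element, why?",
    ],
    "label": [0, 1, 2],
})

# Any sentence-transformers checkpoint works; this one is a common default,
# not necessarily the one used for this dataset.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

# The fine-tuned model can then pre-label new questions before human review
# in the annotation platform.
preds = model.predict(["How can I merge two dataframes on a common column?"])
print(preds)  # predicted label ids
```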
## Considerations for Using the Data

### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]