---
language:
- en
tags:
- text-classification
license: apache-2.0
widget:
- text: "sdfsdfa"
  example_title: "Gibberish"
- text: "idkkkkk"
  example_title: "Uncertainty"
- text: "Because you asked"
  example_title: "Refusal"
- text: "Necessity"
  example_title: "High-risk"
- text: "My job went remote and I needed to take care of my kids"
  example_title: "Valid"
---

# SANDS
_Semi-Automated Non-response Detection for Surveys model (uncased)_

Non-response detection designed to be used for open-ended survey responses in conjunction with human reviewers.

## Model Details

Model Description: This model is a fine-tuned version of the supervised SimCSE BERT base uncased model. It was introduced at [AAPOR](https://www.aapor.org/) 2022 in the talk _Toward a Semi-automated item nonresponse detector model for open-response data_. The model is uncased, so it treats `important`, `Important`, and `ImPoRtAnT` the same.
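
As a quick illustration of case-insensitivity, the uncased tokenizer lower-cases text before splitting it into word pieces, so different casings produce identical model inputs. A minimal sketch, assuming the tokenizer of the base SimCSE model referenced below:

```python
from transformers import AutoTokenizer

# Uncased tokenizers lower-case input first, so casing does not change the encoding.
tok = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-bert-base-uncased")
print(tok("important")["input_ids"] == tok("ImPoRtAnT")["input_ids"])  # True
```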

* Developed by: [National Center for Health Statistics](https://www.cdc.gov/nchs/index.htm), Centers for Disease Control and Prevention
* Model Type: Text Classification
* Language(s): English
* License: Apache-2.0

Parent Model: For more details about SimCSE, we encourage users to check out the SimCSE [Github repository](https://github.com/princeton-nlp/SimCSE), [arXiv publication](https://arxiv.org/pdf/2104.08821.pdf), and the [base model](https://huggingface.co/princeton-nlp/sup-simcse-bert-base-uncased) on HuggingFace.

## How to Get Started with the Model

### Example of classification of a set of responses

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd

# Load the fine-tuned model and its tokenizer
model_location = "pretrained/"
model = AutoModelForSequenceClassification.from_pretrained(model_location)
tokenizer = AutoTokenizer.from_pretrained(model_location)

# Create example responses to test
responses = [
    "sdfsdfa",
    "idkkkkk",
    "Because you asked",
    "Necessity",
    "My job went remote and I needed to take care of my kids",
]

# Run the model and compute a score for each response
with torch.no_grad():
    tokens = tokenizer(responses, padding=True, truncation=True, return_tensors="pt")
    output = model(**tokens)
    scores = torch.softmax(output.logits, dim=1).numpy()

# Display the scores in a table, one row per response
columns = ["Gibberish", "Uncertainty", "Refusal", "High-risk", "Valid"]
df = pd.DataFrame(scores, columns=columns, index=responses)
df.index.name = "Response"
print(df)
```

| Response | Gibberish | Uncertainty | Refusal | High-risk | Valid |
|----------|-----------|-------------|---------|-----------|-------|
| sdfsdfa | 0.998 | 0.000 | 0.000 | 0.000 | 0.000 |
| idkkkkk | 0.002 | 0.995 | 0.001 | 0.001 | 0.001 |
| Because you asked | 0.001 | 0.001 | 0.976 | 0.006 | 0.014 |
| Necessity | 0.001 | 0.001 | 0.002 | 0.980 | 0.016 |
| My job went remote and I needed to take care of my kids | 0.000 | 0.000 | 0.000 | 0.000 | 1.000 |
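
Rather than hard-coding the column order, the label names can be read from the model configuration if the checkpoint stores an `id2label` mapping (an assumption about this checkpoint; the hard-coded order above follows the score table):

```python
# If the saved config includes an id2label mapping (an assumption for this checkpoint),
# the column names can be read from it instead of being hard-coded.
print(model.config.id2label)
```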

## Uses

### Direct Uses

This model is intended to be used on open-ended survey responses during data cleaning, helping researchers filter out non-responsive or junk answers before research and analysis. For each response, the model returns a score in five categories: Gibberish, Refusal, Uncertainty, High-risk, and Valid, as a probability vector that sums to 1.
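
As a sketch of this data-cleaning workflow, the score table `df` from the example above can be split on the Valid probability; the 0.5 cutoff here is a hypothetical threshold, not a recommendation:

```python
# A minimal data-cleaning sketch, reusing the `df` score table from the example above.
# The 0.5 cutoff is a hypothetical threshold chosen only for illustration.
valid_threshold = 0.5
keep = df[df["Valid"] >= valid_threshold]    # likely valid responses to retain
review = df[df["Valid"] < valid_threshold]   # non-response candidates routed to human reviewers
print(f"Retained {len(keep)} of {len(df)} responses; {len(review)} flagged for review.")
```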

### Response types

+ **Gibberish**: Nonsensical responses where the respondent entered text without regard for English syntax. Examples: `ksdhfkshgk` and `sadsadsadsadsadsadsad`
+ **Refusal**: Responses written in valid English that are either a direct refusal to answer the question asked or that provide no contextual relationship to the question asked. Examples: `Because` or `Meow`.
+ **Uncertainty**: Responses where the respondent does not understand the question, does not know the answer to the question, or does not know how to respond to the question. Examples: `I dont know` or `unsure what you are asking`.
+ **High-Risk**: Responses that may be valid depending on the context and content of the question. These responses require human subject matter expertise to classify as valid or not. Examples: `Necessity` or `Just isolating`
+ **Valid**: Responses that answer the question at hand and provide insight into the respondent's thoughts on the subject matter of the question. Examples: `COVID began for me when my children’s school went online and I needed to stay home to watch them` or `staying home, avoiding crowds, still wear masks`

## Misuses and Out-of-scope Use

The model has been trained to identify survey non-response in open-ended responses, or junk responses, where the respondent has given an answer that neither responds to the question at hand nor provides any meaningful insight, such as `meow`, `ksdhfkshgk`, or `idk`. The model was fine-tuned on 3,000 labeled open-ended responses to web probes on questions relating to the COVID-19 pandemic gathered from the [Research and Development Survey or RANDS](https://www.cdc.gov/nchs/rands/index.htm) conducted by the Division of Research and Methodology at the National Center for Health Statistics. Web probes are questions implementing probing techniques from cognitive interviewing for use in survey question design and differ from traditional open-ended survey questions. The labeled responses were limited in focus to COVID and health topics, so performance may drop on responses outside this scope.

The training responses also come from both web- and phone-based open-ended probes. The model may be less effective on more traditional open-ended survey questions or on responses collected through other mediums.

This model does not assess the factual accuracy of responses or filter out responses with different demographic biases. It was not trained to verify facts about people or events, so using it for such classification is out of scope.

We did not train the model to recognize non-response in any language other than English. Responses in languages other than English are out of scope, and the model will perform poorly on them. Any correct classifications are a result of the base SimCSE or BERT models.

## Risks, Limitations, and Biases

As the model was fine-tuned from SimCSE, itself fine-tuned from BERT, it will reproduce all biases inherent in these base models. Due to tokenization, the model may incorrectly classify typos, especially in acronyms. For example: `LGBTQ` is valid, while `LBGTQ` is classified as gibberish.
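
A short sketch of this limitation, assuming the base uncased tokenizer: the typo changes the word-piece split, so the model sees different sub-tokens for the misspelled acronym:

```python
from transformers import AutoTokenizer

# A typo changes how WordPiece splits the acronym, which can change the prediction.
tok = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-bert-base-uncased")
print(tok.tokenize("LGBTQ"))  # word pieces for the correct spelling
print(tok.tokenize("LBGTQ"))  # the typo is split into different pieces
```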

Some refusal responses can also appear to be valid because they did not occur in our limited training set. For example, `none of your business` is currently returned as valid, as it was not a response seen in the first two rounds of RANDS during COVID 19.

## Training

### Training Data

The model was fine-tuned on 3,000 labeled open-ended responses from [RANDS during COVID 19 Rounds 1 and 2](https://www.cdc.gov/nchs/rands/index.htm). The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.

### Training procedure

+ Learning rate: 5e-5
+ Batch size: 16
+ Number of training epochs: 4
+ Base model pooling dimension: 768
+ Number of labels: 5
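
A minimal fine-tuning sketch consistent with these settings, assuming a tokenized, labeled `train_dataset`; this is an illustration, not the exact training script used for SANDS:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative setup only; `train_dataset` (tokenized responses with labels 0-4) is assumed.
base = "princeton-nlp/sup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=5)

args = TrainingArguments(
    output_dir="sands-finetuned",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    num_train_epochs=4,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```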