---
language: "en"
license: "cc-by-4.0"
library_name: "transformers"
tags:
- question-answering
- scene-elaboration
---

This is the T5-11B model described in our paper *DREAM: Improving Situational QA by First Elaborating the Situation*, NAACL 2022 (arXiv: https://arxiv.org/abs/2112.08656, ACL Anthology: https://aclanthology.org/2022.naacl-main.82/).

# What is DREAM?
DREAM can be used to:

* Build scene elaborations in a dataset-neutral way
* Improve QA performance across different end-tasks and on different models

When people answer questions about a specific situation, cognitive science suggests that they form a mental picture of that situation. Will language models answer such questions more accurately if they are provided with additional details about the question's situation?

We train a new model, DREAM, to answer questions that elaborate the scenes that situated questions are about, and then provide those elaborations as additional context to a QA model. Our results show that DREAM is able to create more accurate, useful, and consistent scene elaborations than a representative state-of-the-art zero-shot model (Macaw).

Remarkably, using DREAM's scene elaborations as additional context improves answer accuracy across different downstream QA systems and on different end-tasks (including beyond what is obtainable by further fine-tuning the QA system on DREAM's training data). Our approach is question-agnostic, leaves the end-task QA models unchanged, and is thus easily portable to other QA models, suggesting exciting opportunities for further improving and exploiting scene elaborations to better solve new problems.

We invite you to try out DREAM for your own application!

# How to use DREAM?
We provide a quick example of how you can try out DREAM with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("SaveBertAndGpt/DREAM")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-11b")
>>> input_string = "$answer$ ; $question$ = [SITUATION] hitting someones car in the drive thru on purpose. [QUERY] rot"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
["$answer$ = It's wrong to damage other people's property."]
```

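Note that this is an 11-billion-parameter T5 checkpoint, so the weights are large (roughly 45 GB in fp32). As an optional, minimal sketch (not from the original repository), you can load the model in half precision and let `accelerate` place the weights across your devices; this assumes `accelerate` is installed and enough GPU/CPU memory is available:
```
# Optional sketch: load the 11B checkpoint in half precision and let accelerate
# decide device placement. Requires: pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-11b")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "SaveBertAndGpt/DREAM",
    torch_dtype=torch.float16,  # roughly halves memory use compared to fp32
    device_map="auto",          # spreads layers over available GPUs/CPU (needs accelerate)
)
```
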
As discussed in our paper, DREAM supports the following dimensions for each input situation S:
```
1. M: motivation of character(s) before S.
2. E: emotion of character(s) after S.
3. ROT: general Rule of Thumb (ROT) about whether the action described in S is socially acceptable or not (also known as a social norm).
4. Con: likely consequence of the action in S.
```

To get DREAM's output for these dimensions, use the corresponding term below after the "[QUERY] " tag in your input string:
```
motivation
emotion
rot
consequence
```

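The query term can simply be swapped out to collect all four elaboration dimensions for one situation. The following is a minimal sketch built on the quick-start example above; the `elaborate` helper and the example situation are our own illustration, not part of the released code:
```
# Minimal sketch (illustrative only): query DREAM for every elaboration dimension
# of a single situation, reusing the input format from the quick-start example.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("SaveBertAndGpt/DREAM")
tokenizer = AutoTokenizer.from_pretrained("t5-11b")

def elaborate(situation, dimension):
    # "$answer$ ; $question$ = [SITUATION] ... [QUERY] ..." is the input format
    # shown in the quick-start example above.
    input_string = f"$answer$ ; $question$ = [SITUATION] {situation} [QUERY] {dimension}"
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    output = model.generate(input_ids, max_length=200)
    return tokenizer.batch_decode(output, skip_special_tokens=True)[0]

situation = "hitting someones car in the drive thru on purpose."
for dimension in ["motivation", "emotion", "rot", "consequence"]:
    print(dimension, "->", elaborate(situation, dimension))
```
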
# More details about DREAM
For more details about DREAM, please refer to our:
* Paper: https://aclanthology.org/2022.naacl-main.82/
* Dataset & Model: https://github.com/allenai/dream/

For additional instructions on using the DREAM model and sample commands, please refer to https://github.com/allenai/dream/blob/main/model/README_DREAM_model.md.