---
language:
- en
tags:
- conversational
- dialogue
- response generation
license: apache-2.0
datasets:
- allenai/soda
- allenai/prosocial-dialog
---

# Model Card for 🧑🏻‍🚀COSMO

🧑🏻‍🚀COSMO is a conversation agent with greater generalizability on both in- and out-of-domain chitchat datasets (e.g., DailyDialog, BlendedSkillTalk). It is trained on two datasets: SODA and ProsocialDialog. COSMO is especially aimed at modeling natural human conversations: it accepts a situation description as well as an instruction on what role it should play in that situation.

## Model Description
- **Repository:** [Code](https://github.com/skywalker023/sodaverse)
- **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])

## Model Training

🧑🏻‍🚀COSMO is trained on our two recent datasets: 🥤[SODA](https://huggingface.co/datasets/allenai/soda) and [Prosocial Dialog](https://huggingface.co/datasets/allenai/prosocial-dialog).
The backbone model of COSMO is the [lm-adapted T5](https://huggingface.co/google/t5-xl-lm-adapt).
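
As a quick sanity check that these resources line up, you can load both training datasets from the Hugging Face Hub and confirm that `allenai/cosmo-xl` shares the T5 architecture of its backbone. This is a minimal sketch, not part of the official demo; the use of the `datasets` library and the `"train"` split names are our assumptions:

```python
# A minimal sketch (not from the model card): inspect the training data
# and confirm the COSMO checkpoint shares its backbone's T5 architecture.
from datasets import load_dataset
from transformers import AutoConfig

soda = load_dataset("allenai/soda", split="train")                    # assumes a "train" split
prosocial = load_dataset("allenai/prosocial-dialog", split="train")   # assumes a "train" split
print(len(soda), len(prosocial))

config = AutoConfig.from_pretrained("allenai/cosmo-xl")
print(config.model_type)  # expected: "t5", from the lm-adapted T5 backbone
```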

### How to use

> 💡 <b>Note:</b> The HuggingFace inference API for Cosmo is not working correctly, so we kindly direct you to [our repository](https://hyunw.kim/sodaverse) to try out the demo code!

Below is a simple code snippet to get Cosmo running :)

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl").to(device)

def set_input(situation_narrative, role_instruction, conversation_history):
    # Join the previous utterances with the <turn> separator
    input_text = " <turn> ".join(conversation_history)

    # Prepend the role instruction and then the situation narrative (if given),
    # so the final format is: situation <sep> role <sep> conversation
    if role_instruction != "":
        input_text = "{} <sep> {}".format(role_instruction, input_text)

    if situation_narrative != "":
        input_text = "{} <sep> {}".format(situation_narrative, input_text)

    return input_text

def generate(situation_narrative, role_instruction, conversation_history):
    """
    situation_narrative: the description of situation/context with the characters included (e.g., "David goes to an amusement park")
    role_instruction: the perspective/speaker instruction (e.g., "Imagine you are David and speak to his friend Sarah").
    conversation_history: the previous utterances in the conversation in a list
    """

    input_text = set_input(situation_narrative, role_instruction, conversation_history)

    inputs = tokenizer([input_text], return_tensors="pt").to(device)
    outputs = model.generate(inputs["input_ids"], max_new_tokens=128, temperature=1.0, top_p=.95, do_sample=True)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

    return response

situation = "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi."
instruction = "You are Cosmo and you are talking to a friend." # You can also leave the instruction empty

conversation = [
    "Hey, how was your trip to Abu Dhabi?"
]

response = generate(situation, instruction, conversation)
print(response)
```
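
Since `conversation_history` is just a list of alternating utterances, you can carry the exchange forward by appending Cosmo's reply and the next user turn before calling `generate` again. A small sketch reusing the functions above (the follow-up utterance is illustrative):

```python
# Continue the conversation: append Cosmo's reply and the next user turn,
# then generate again with the same situation and instruction.
conversation.append(response)
conversation.append("That sounds great! What was your favorite part of the conference?")

next_response = generate(situation, instruction, conversation)
print(next_response)
```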

### Further Details, Social Impacts, Bias, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2212.10465).
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Additional Information

For a brief summary of our paper, please see this [tweet](https://twitter.com/hyunw__kim/status/1605400305126248448).

### Citation

Please cite our work if you find the resources in this repository useful:
```
@article{kim2022soda,
    title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
    author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
    journal={ArXiv},
    year={2022},
    volume={abs/2212.10465}
}
```