---
language:
  - en
pipeline_tag: conversational
tags:
  - psychology
  - dialogues
  - empathy
  - gpt2
---

This model was trained on a large corpus of text, including the Facebook Empathetic Dialogues dataset: 25k conversations grounded in emotional situations, created to facilitate training and evaluating empathetic dialogue systems.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
model = AutoModelForCausalLM.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
```
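Once the tokenizer and model are loaded, a single conversational turn can be generated following the standard DialoGPT pattern (append the EOS token to the user's utterance, generate, then decode only the new tokens). The prompt text and generation settings below are illustrative assumptions, not settings published with this model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
model = AutoModelForCausalLM.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")

# Encode the user utterance; DialoGPT expects an EOS token after each turn.
input_ids = tokenizer.encode("I just lost my job." + tokenizer.eos_token, return_tensors="pt")

# Generate a response (greedy decoding here; sampling parameters are a choice, not a requirement).
output_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the model's reply.
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

For multi-turn chat, the previous turns' token IDs would be concatenated in front of each new user input, as in the upstream DialoGPT examples.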