|
---
license: apache-2.0
---
|
|
|
# FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue |
|
|
|
We present our dialogue pre-training model, FutureTOD, which distills future knowledge into the representation of the previous dialogue context using a self-training framework. Extensive experiments on diverse downstream dialogue tasks demonstrate the effectiveness of our model, especially its generalization, robustness, and ability to learn discriminative dialogue representations.
|
|
|
[This paper](https://arxiv.org/abs/2306.10315) has been accepted at the ACL 2023 Main Conference. |
|
|
|
## Usage |
|
|
|
We release our futuretod-base-v1.0 model here. You can use this model for downstream TOD tasks by following the instructions in [FutureTOD](https://github.com/Zeng-WH/FutureTOD).
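
A minimal sketch of loading the released checkpoint and extracting a dialogue representation with Hugging Face `transformers`, assuming the checkpoint exposes the standard BERT-style interface; the repo id `FutureTOD/futuretod-base-v1.0` and the `[USR]`/`[SYS]` role markers below are illustrative assumptions, so adjust them to the actual model id on this page and the conventions in the FutureTOD repository.

```python
# Sketch: encode a dialogue context with the released FutureTOD checkpoint.
# Assumptions: the model follows the standard BERT interface and the repo id
# "FutureTOD/futuretod-base-v1.0" is a placeholder for the actual model id.
from transformers import AutoTokenizer, AutoModel

model_id = "FutureTOD/futuretod-base-v1.0"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# A short dialogue context; [USR]/[SYS] role markers are a common TOD
# convention and may differ from the tokens used during pre-training.
dialogue = "[USR] I need a cheap hotel in the centre. [SYS] Sure, for how many nights?"
inputs = tokenizer(dialogue, return_tensors="pt")
outputs = model(**inputs)

# Use the [CLS] hidden state as the dialogue-level representation.
dialogue_repr = outputs.last_hidden_state[:, 0]
print(dialogue_repr.shape)
```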
|
|
|
## Citation
|
|
|
If you find our work helpful, please consider citing the following paper.
|
|
|
```
@article{zeng2023futuretod,
  title={FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue},
  author={Zeng, Weihao and He, Keqing and Wang, Yejie and Zeng, Chen and Wang, Jingang and Xian, Yunsen and Xu, Weiran},
  journal={arXiv preprint arXiv:2306.10315},
  year={2023}
}
```
|
|