arxiv:2406.11194

In-Context Editing: Learning Knowledge from Self-Induced Distributions

Published on Jun 17
· Submitted by syqi on Jun 18
Abstract

The existing fine-tuning paradigm for language models is brittle in knowledge editing scenarios, where the model must incorporate new information without extensive retraining. This brittleness often results in overfitting, reduced performance, and unnatural language generation. To address this, we propose Consistent In-Context Editing (ICE), a novel approach that leverages the model's in-context learning capability to tune toward a contextual distribution rather than a one-hot target. ICE introduces a straightforward optimization framework that includes both a target and a procedure, enhancing the robustness and effectiveness of gradient-based tuning methods. We provide analytical insights into ICE across four critical aspects of knowledge editing: accuracy, locality, generalization, and linguistic quality, showing its advantages. Experimental results across four datasets confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that updated information is incorporated while preserving the integrity of the model.
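One plausible reading of the core idea, sketched in code: instead of fine-tuning on a one-hot label for the new fact, query the model with the edit prepended as context, take the resulting next-token distribution as the training target, and pull the context-free prediction toward it with a KL objective. This is a minimal illustrative sketch, not the authors' implementation (see their repository below); the model choice, the example fact, the number of steps, and the exact loss are all assumptions.

```python
# Hypothetical sketch of consistent in-context editing (not the paper's code).
# Target: the model's own distribution when conditioned on the new knowledge,
# rather than a one-hot label.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

context = "New fact: the Eiffel Tower is located in Rome.\n"  # edit to inject
query = "Q: Where is the Eiffel Tower? A:"

ctx_ids = tokenizer(context + query, return_tensors="pt").input_ids
qry_ids = tokenizer(query, return_tensors="pt").input_ids

for step in range(10):  # a few gradient steps per edit
    # Target distribution induced by the context; recomputed from the current
    # model each step, so the target is self-induced and held fixed per step.
    with torch.no_grad():
        target_probs = F.softmax(model(ctx_ids).logits[:, -1, :], dim=-1)

    # Prediction without the context.
    pred_logprobs = F.log_softmax(model(qry_ids).logits[:, -1, :], dim=-1)

    # KL divergence toward the contextual distribution instead of a one-hot
    # cross-entropy target.
    loss = F.kl_div(pred_logprobs, target_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```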

Community

Paper author, Paper submitter · edited Jul 19

This paper proposes Consistent In-Context Editing (ICE), an approach for tuning language models through contextual distributions, overcoming the limitations of traditional fine-tuning methods that learn towards one-hot targets.
The code is available at: https://github.com/bigai-ai/ICE.


Looks great! Any plans for the code release?

