ZJUKLAB at SemEval-2025 Task 4: Unlearning via Model Merging Paper • 2503.21088 • Published 3 days ago • 6
ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving Systems Paper • 2503.20756 • Published 4 days ago • 6
LookAhead Tuning: Safer Language Models via Partial Answer Previews Paper • 2503.19041 • Published 6 days ago • 5
CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners Paper • 2503.16356 • Published 10 days ago • 15
BiasEdit: Debiasing Stereotyped Language Models via Model Editing Paper • 2503.08588 • Published 19 days ago • 6
LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context Memory of Transformers Paper • 2502.15007 • Published Feb 20 • 169
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training Paper • 2502.11196 • Published Feb 16 • 22 • 6