
Jian Hu (lwpyh)

AI & ML interests

Knowledge Transfer, Semi-supervised Learning

Post
Is hallucination always harmful? Unlike traditional approaches that treat hallucinations as detrimental, our NeurIPS'24 work proposes a different perspective: hallucinations as intrinsic prior knowledge. Because they derive from the commonsense knowledge acquired during pre-training, these hallucinations are not merely noise but a source of task-relevant information. By leveraging hallucinations as a form of prior knowledge, we can effectively mine difficult samples without the need for customized prompts, streamlining tasks such as camouflaged sample detection and medical image segmentation.

Check out our paper for more insights and detailed methodology: https://huggingface.co/papers/2408.15205
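As a toy illustration only (not the paper's actual pipeline), the idea of mining difficult samples via disagreement between a hallucination-derived prior and a task model's prediction could be sketched as follows. Here `prior_labels` stands in for concepts extracted from a model's hallucinated description and `pred_labels` for the task model's output; both names and the overlap heuristic are hypothetical:

```python
def mine_hard_samples(samples, agreement_threshold=0.5):
    """Flag samples where the hallucination-derived prior and the
    task prediction disagree; low overlap suggests a hard sample.

    samples: list of dicts with keys 'id', 'prior_labels', 'pred_labels'.
    Returns the ids of samples whose Jaccard overlap between prior and
    prediction falls below agreement_threshold.
    """
    hard = []
    for s in samples:
        prior = set(s["prior_labels"])
        pred = set(s["pred_labels"])
        # Jaccard overlap between the two concept sets
        overlap = len(prior & pred) / max(len(prior | pred), 1)
        if overlap < agreement_threshold:
            hard.append(s["id"])
    return hard
```

For example, an image whose hallucinated description mentions an insect while the task model only predicts a leaf would score zero overlap and be flagged as a hard (possibly camouflaged) sample.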
