JingyaoLi committed on
Commit 189437c · 1 Parent(s): b31da333

Update README.md

Files changed (1):
  1. README.md +6 -2
README.md CHANGED
@@ -20,10 +20,14 @@ tags:
 
 ## Abstract
 The crux of effective out-of-distribution (OOD) detection lies in acquiring a robust in-distribution (ID) representation, distinct from OOD samples. While previous methods predominantly leaned on recognition-based techniques for this purpose, they often resulted in shortcut learning and lacked comprehensive representations. In our study, we conducted a comprehensive analysis, exploring distinct pretraining tasks and employing various OOD score functions. The results highlight that feature representations pre-trained through reconstruction yield a notable enhancement and narrow the performance gap among various score functions. This suggests that even simple score functions can rival complex ones when leveraging reconstruction-based pretext tasks. Reconstruction-based pretext tasks adapt well to various score functions and thus hold promising potential for further expansion. Our OOD detection framework, MOODv2, employs the masked image modeling pretext task. Without bells and whistles, MOODv2 impressively improves AUROC by 14.30% to 95.68% on ImageNet and achieves 99.98% on CIFAR-10.
-![framework](imgs/framework.png)
+<p align="center">
+<img src="imgs/framework.png" alt="framework" width="750">
+</p>
 
 ## Performance
-![table](imgs/moodv2_table.png)
+<p align="center">
+<img src="imgs/moodv2_table.png" alt="table" width="900">
+</p>
 
 ## Usage
 To predict whether an input image is in-distribution or out-of-distribution, we support the following OOD detection methods: