{ "ID": "1UbNwQC89a", "Title": "RGI: robust GAN-inversion for mask-free image inpainting and unsupervised pixel-wise anomaly detection", "Keywords": "Robust GAN-inversion, Mask-free Semantic Inpainting, Unsupervised Pixel-wise Anomaly Detection", "URL": "https://openreview.net/forum?id=1UbNwQC89a", "paper_draft_url": "/references/pdf?id=lOEXrNStVs", "Conferece": "ICLR_2023", "track": "Unsupervised and Self-supervised learning", "acceptance": "Accept: poster", "review_scores": "[['3', '6', '3'], ['4', '6', '3'], ['4', '6', '4'], ['3', '6', '3'], ['3', '5', '3']]", "input": { "source": "CRF", "title": "RGI: robust GAN-inversion for mask-free image inpainting and unsupervised pixel-wise anomaly detection", "authors": [], "emails": [], "sections": [ { "heading": "1 INTRODUCTION", "text": "When trained on large-scale natural image datasets, GAN (Goodfellow et al., 2020) is a good approximator of the underlying true image manifold. It captures rich knowledge of natural images and can serve as an image prior. Recently, utilizing the learned prior through GANs shows impressive results in various tasks, including the image restoration (Yeh et al., 2017; Pan et al., 2021; Gu et al., 2020), unsupervised anomaly detection (Schlegl et al., 2017; Xia et al., 2022b) and so on. In those applications, GAN learns a deep generative image prior (DGP) to approximate the underlying true image manifold. Then, for any input image, GAN-inversion (Zhu et al., 2016) is used to search for the nearest image on the learned manifold, i.e., recover the d-dimensional latent vector z\u0302 by\nz\u0302 = arg min z\u2208Rd Lrec(x,G(z)), (1)\nwhere G(\u00b7) is the pre-trained generator, x is the input image, and Lrec(\u00b7, \u00b7) is the loss function measuring the distance between x and the restored image x\u0302 = G(z\u0302), such as l1, l2-norm distance and perceptual loss (Johnson et al., 2016), or combinations thereof.\nHowever, this approach may fail when x is grossly corrupted by unknown corruptions, i.e., a small fraction of pixels are completely corrupted with unknown locations and magnitude. For example, in semantic image inpainting (Yeh et al., 2017), where the corruptions are unknown missing regions, a pre-configured missing regions\u2019 segmentation mask is needed to exclude the missing regions\u2019 influence on the optimization procedure. Otherwise, the restored image will easily deviate from the ground truth image (Figure 1). For another example, in unsupervised anomaly detection (Schlegl et al., 2017), where the anomalies naturally occur as unknown gross corruptions and the residual\nbetween the input image and the restored image is adopted as the anomaly segmentation mask, i.e., x \u2212 G(z\u0302), such a deviation will deteriorate the segmentation performance. However, the assumption of knowing a pre-configured corrupted region mask can be strong (for semantic inpainting) or even invalid (for unsupervised anomaly detection). Therefore, improving the robustness of GANinversion under unknown gross corruptions is important.\nAnother problem is the GAN approximation gap between the GAN learned image manifold and the true image manifold, i.e., even without corruptions, the restored image x\u0302 from Equation 1 can contain significant mismatches to the input image x. This limits the performance of GAN-based methods for semantic inpainting and, especially for unsupervised anomaly detection since any mismatch between the restored image and the input image will be counted towards the anomaly score. 
When a segmentation mask of the corrupted region is known, such an approximation gap can be mitigated by fine-tuning the generator (Pan et al., 2021). However, adopting such a technique under unknown gross corruptions can trivially overfit the corrupted image and fail at restoration. Therefore, mitigating GAN approximation gap under unknown gross corruptions is important.\nTo address these issues, we propose an RGI method and further generalize it to R-RGI. For any corrupted input image, the proposed method can simultaneously restore the corresponding clean image and extract the corrupted region mask. The main contributions of the proposed method are:\nMethodologically, RGI improves the robustness of GAN-inversion in the presence of unknown gross corruptions. We further prove that, under mild assumptions, (i) the RGI restored image (and identified mask) asymptotically converges to the true clean image (and the true binary mask of the corrupted region) (Theorems 1 and 2); (ii) in addition to asymptotic results, for a properly selected tuning parameter, the true mask of the corrupted region is given by simply thresholding the RGI identified mask (Theorem 2). (iii) Moreover, we generalize the RGI method to R-RGI for meaningful generator fine-tuning to mitigate the approximation gap under unknown gross corruptions.\nPractically (i) for mask-free semantic inpainting, where the corruptions are unknown missing regions, the restored background can be used to restore the missing content; (ii) for unsupervised pixel-wise anomaly detection, where the corruptions are unknown anomalous regions, the retrieved mask can be used as the anomalous region\u2019s segmentation mask. The RGI/R-RGI method unifies these two important tasks and achieves SOTA performance in both tasks." }, { "heading": "2 RELATED LITERATURE", "text": "GAN-inversion (Xia et al., 2022a) aims to project any given image to the latent space of a pretrained generator. The inverted latent code can be used for various downstream tasks, including GAN-based image editing (Wang et al., 2022a), restoration (Pan et al., 2021), and so on. GAN-\ninversion can be categorized into learning-based, optimization-based, and hybrid methods. The objective of the learning-based inversion method is to train an encoder network to map an image into the latent space based on which the reconstructed image closely resembles the original. Despite its fast inversion speed, learning-based inversion usually leads to poor reconstruction quality (Zhu et al., 2020; Richardson et al., 2021; Creswell & Bharath, 2018). Optimization-based methods directly solve a latent code that minimizes the reconstruction loss in Equation 1 through backpropagation (which can be time-consuming), with superior image restoration quality. Hybrid methods balance the trade-off between the aforementioned two methods (Xia et al., 2022a). There are also different latent spaces to be projected on, such as the Z space applicable for inverting all GANs, mZ space (Gu et al., 2020), W and W+ spaces for StyleGANs (Karras et al., 2019; Abdal et al., 2019; 2020) and so on. All these methods do not have explicit robustness guarantees with respect to gross corruptions 1. To improve the robustness of GAN-inversion, MimicGAN (Anirudh et al., 2018) uses a surrogate network to mimic the unknown gross corruptions at the test time. However, this method requires multiple test images with the same corruptions to learn a surrogate network. 
Here, we focus on developing a robust GAN-inversion for optimization-based methods, projecting onto the most commonly used Z space, with a provable robustness guarantee. The proposed method can be applied to a single image with unknown gross corruptions, and has the potential to be applied to learningbased as well as hybrid methods, even for different latent spaces, to increase their robustness.\nAs mentioned in the Introduction, DGP plays an important role in corrupted image restoration. GAN-inversion is an effective way of exploiting the DGP captured by a GAN. Therefore, GANinversion gains popularity in two important applications of corrupted image restoration: semantic image inpainting and unsupervised anomaly detection. (Comprehensive reviews on semantic image inpainting and unsupervised anomaly detection are provided in Appendix A.1 and A.2.)\nMask-free Semantic inpainting aims to restore the missing region of an input image with little or no information on the missing region in both the training and testing stages. GAN-inversion for semantic inpainting was first introduced by Yeh et al. (2017) and was further developed by Gu et al. (2020); Pan et al. (2021); Wang et al. (2022b); El Helou & Su\u0308sstrunk (2022) for improving inpainting quality. Current GAN-inversion based methods have the advantage of inpainting a single image with arbitrary missing regions, without any requirement for missing region mask information in the training stage. However, they do require a pre-configured missing region mask for reliable inpainting of a corrupted test image. Otherwise, the restored image can deviate from the true image. Moreover, the pre-configured corrupted region mask is also the key in mitigating the GAN approximation gap (Pan et al., 2021) through generator fine-tuning. Such a pre-configured corrupted region mask requirement hinders the application of GAN-inversion in mask-free semantic inpainting.\nUnsupervised pixel-wise anomaly detection aims to extract a pixel-level segmentation mask for anomalous regions, which plays an important role in industrial cosmetic defect inspection and medical applications (Yan et al., 2017; Baur et al., 2021). Unsupervised pixel-wise anomaly detection extracts the anomalous region segmentation mask through a pixel-wise comparison of the input image and corresponding normal background, which requires a high-quality background reconstruction based on the input image (Cui et al., 2022). GANs have the advantage of generating realistic images from the learned manifold with sharp and clear detail, which makes GAN-inversion a promising tool for background reconstruction in pixel-wise anomaly detection. GAN-inversion for unsupervised anomaly detection (Xia et al., 2022b) was first introduced by (Schlegl et al., 2017) and various followup works have been proposed (Zenati et al., 2018; Schlegl et al., 2019; Baur et al., 2018; Kimura et al., 2020; Akcay et al., 2018). Counter-intuitively, instead of pixel-wise anomaly detection, the applications of GAN-based anomaly detection methods mainly focus on image-level/localization level (Xia et al., 2022a) with less satisfactory performance. For example, as one of the benchmark methods on the MVTec dataset (Bergmann et al., 2019), AnoGAN (Schlegl et al., 2017) performs the worst on image segmentation (even localization) compared to vanilla AE, not to mention the state-of-the-art methods (Yu et al., 2021; Roth et al., 2022). 
This is due to two intrinsic issues of GAN-inversion under unknown gross corruptions: (i) Lack of robustness: due to the existence of the anomalous region, the reconstructed normal background can easily deviate from the ground truth background (Figure 1); (ii) Gap between the approximated and actual manifolds (Pan et al., 2021): even for a clean input image, it is difficult to identify a latent representation that can achieve perfect\n1Note that the \u201crobustness to defects\u201d mentioned in (Abdal et al., 2019) means that the image together with the defects can be faithfully restored in the latent space, instead of restoring a defect-free image\nreconstruction. When the residual is used for pixel-wise anomaly detection, those two issues will easily deteriorate its performance." }, { "heading": "3 ROBUST GAN-INVERSION", "text": "In this section, we first give a problem overview. Then, we present the RGI method with a theoretical justification of its asymptotic robustness properties. A simulation study is conducted to verify the robustness. Next, we generalize the proposed method to R-RGI to mitigate the GAN approximation gap. Finally, we give a discussion that connects the proposed method with existing methods.\nOverview. Given a pre-trained GAN network on a large-scale clean image dataset, such that the generator learns the image manifold. For any input image from the same manifold with unknown gross corruptions, we aim to restore a clean image and a corrupted region mask (Figure 2).\nNotation. Before we introduce the RGI method, we first introduce the following notations: For any positive integer k, we use [k] to denote the set {1, 2, . . . , k}; For any index set \u039b \u2286 [m] \u00d7 [n], we use |\u039b| to denote the cardinality of \u039b; For any matrix T , the l-norm \u2225T\u2225l (e.g. \u2225T\u22251, \u2225T\u2225\u221e) is calculated by treating T as a vector, and we use IT to denote the non-zero mask of T , i.e. (IT )ij = 0 if Tij = 0 and 1 otherwise; For any two sets A and B, we use dHl (A,B) := supa\u2208A infb\u2208B \u2225a \u2212 b\u2225l to denote the one-sided l-norm Hausdorff distance between A and B, noting that when A is a singleton, it becomes the standard l-norm distance dl(a,B) := infb\u2208B \u2225a \u2212 b\u2225l; For any two matrices A and B, we use \u2299 to denote element-wise product, i.e., (A\u2299B)ij = AijBij ." }, { "heading": "3.1 ROBUST GAN-INVERSION", "text": "Assume that the GAN learns an accurate image manifold, i.e., there is no approximation gap between the GAN learned image manifold and true image manifold, such that any input image x \u2208 Rm\u00d7n with gross corruptions s\u2217 \u2208 Rm\u00d7n follows: x = G(z\u2217) + s\u2217, where z\u2217 \u2208 Rd is the true latent code and G(\u00b7) is a pre-trained generator, i.e., G(\u00b7) : Rd \u2192 Rm\u00d7n. Further, assume that s\u2217 admits sparsity property, i.e., \u2225s\u2217\u22250 \u2264 n0, where n0 is the number of corrupted pixels. Given x, we aim to restore G(z\u2217) (or z\u2217), and consequently, achieve (i) semantic image inpainting, i.e., G(z\u2217), or (ii) pixel-wise anomaly detection, i.e., M\u2217 = Ix\u2212G(z\u2217). To achieve so, we propose to learn the latent representation z and the corrupted region mask M at the same time, i.e.,\nmin z\u2208Rd,M\u2208Rm\u00d7n Lrec((1\u2212M)\u2299 x, (1\u2212M)\u2299G(z)) s.t. \u2225M\u22250 \u2264 n0. 
(2)
The reconstruction loss term Lrec(·, ·) measures the distance between the input image and the generated one outside the corrupted region, which guides the optimization process to find the latent variable z. Intuitively, when solving for the mask along with the latent variable, we aim to allocate the n0 mask elements such that the reconstruction loss is minimized. It is easy to check that the true solution (G(z∗), M∗) is optimal. Moreover, if we assume that z∗ is the only latent code such that ∥x − G(z∗)∥0 ≤ n0, then we have uniqueness. However, Equation 2 with ∥·∥0 is hard to solve. To address this issue, we relax Equation 2 to an unconstrained optimization problem that can be solved directly using gradient descent algorithms:
min_{z∈Rd, M∈Rm×n} Lrec((1 − M) ⊙ x, (1 − M) ⊙ G(z)) + λ∥M∥1. (3)
Equation 3 is named RGI, where the second term penalizes the mask size to avoid a trivial solution with the mask expanding to the whole image. Intuitively, the second term encourages a small mask; however, the reconstruction loss will increase sharply once the learned mask cannot cover the corrupted region. By carefully selecting the tuning parameter λ, we arrive at a solution with (i) a high-quality image restoration with negligible reconstruction error; and (ii) an accurate mask that covers the corrupted region. The following two theorems justify this intuition.
Theorem 1 (Asymptotic optimality of z) Assume (i) the GAN learns an accurate image manifold, i.e., there exists z∗ such that ∥x − G(z∗)∥0 ≤ n0; (ii) z is bounded for both Equation 2 and Equation 3, or equivalently there exists R > 0 such that ∥z∥1 ≤ R, i.e., z ∈ Sd with Sd := [−R, R]d; (iii) Lrec(·) = ∥·∥2²; and (iv) G(z) is continuous. Let ẑ(λ) be any optimal z solution of Equation 3 with tuning parameter λ, and Z∗ be the optimal z solution set of Equation 2; then d∞(ẑ(λ), Z∗) ↓ 0 as λ ↓ 0. Moreover, denoting ñ = min_{z∈Sd} ∥x − G(z)∥0 and Z̃ = {z ∈ Sd | ∥x − G(z)∥0 = ñ}, we have d∞(ẑ(λ), Z̃) ↓ 0 as λ ↓ 0. If we further assume a unique z∗ = argmin_{z∈Sd} ∥x − G(z)∥0, i.e., Z̃ = {z∗}, then ẑ(λ) → z∗ as λ ↓ 0.
Note that Assumption (ii) is only for proof purposes. We could always choose a large enough R to include all possible optimal solutions so that the optimality of Equation 2 and Equation 3 remains.
Remark: Theorem 1 states that the optimal z solution of the RGI method, G(ẑ), converges to the true background G(z∗) as λ ↓ 0, regardless of the corruption magnitude, which proves the robustness of the RGI method to unknown gross corruptions and is the key to image restoration.
Theorem 2 (Asymptotic optimality of M) Follow the same assumptions and notations as in Theorem 1. Let M̂(λ) be any optimal M solution of Equation 3, and M̃ := {I_{x−G(z̃)} | z̃ ∈ Z̃} ⊆ {M ∈ {0, 1}m×n | ∥M∥0 ≤ ñ}. We have d∞(M̂(λ), M̃) ↓ 0 as λ ↓ 0. Moreover, there is a finite λ̃ > 0 such that for any λ ≤ λ̃, there is an M̃ ∈ M̃ such that M̃ = I_{M̂(λ)}. If we further assume Z̃ = {z∗}, then (i) M̂(λ) → M∗ as λ ↓ 0, and (ii) for any λ ≤ λ̃, I_{M̂(λ)} = M∗.
Remark: Theorem 2 states that the optimal M solution of the RGI method, M̂, converges to the true corrupted region mask M∗ as λ ↓ 0, regardless of the corruption magnitude. Moreover, there is a fixed λ̃ such that, if we choose a tuning parameter λ ≤ λ̃, the true corrupted region mask can be identified by simply thresholding M̂, i.e., M∗ = I_{M̂(λ)}, which is the key for pixel-wise anomaly detection. The proofs of Theorems 1 and 2 are provided in Appendix B. A simulation study verifying the robustness of the proposed RGI method is provided in Appendix C.
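As a companion to Equation 3, a minimal sketch of the RGI optimization is given below. The mask M is kept as a real-valued (soft) variable during optimization and thresholded afterwards, as justified by Theorem 2; the value of λ and the initialization are illustrative, while the 2000-iteration ADAM setting with learning rate 0.1 follows the implementation details in Appendix D.1.

```python
import torch

def rgi(G, x, d=512, lam=0.1, steps=2000, lr=0.1):
    """Jointly solve for the latent code z and the corrupted-region mask M by
    minimizing ||(1 - M) * (x - G(z))||_2^2 + lam * ||M||_1 (Equation 3)."""
    z = torch.randn(1, d, requires_grad=True)
    M = torch.zeros_like(x, requires_grad=True)     # soft mask, one entry per pixel
    opt = torch.optim.Adam([z, M], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rec = (((1 - M) * (x - G(z))) ** 2).sum()   # reconstruction outside the mask
        loss = rec + lam * M.abs().sum()            # l1 penalty keeps the mask sparse
        loss.backward()
        opt.step()
    return z.detach(), M.detach()                   # threshold M for the binary mask (Theorem 2)
```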
" }, { "heading": "3.2 RELAXED ROBUST GAN-INVERSION", "text": "In traditional GAN-inversion methods (Yeh et al., 2017; Pan et al., 2021), without mask information, fine-tuning the generator parameters will lead to severe overfitting towards the input image. However, fine-tuning is the key step to mitigate the gap between the learned image manifold and any specific input image (Pan et al., 2021). The proposed approach makes fine-tuning possible, i.e.,
min_{z∈Rd, M∈Rm×n, θ∈Rw} Lrec((1 − M) ⊙ x, (1 − M) ⊙ G(z; θ)) + λ∥M∥1. (4)
Equation 4 is named R-RGI. This problem can also be solved directly using gradient descent algorithms, with a carefully chosen learning rate for the generator parameters θ. We found that the following strategy gives better performance: at the beginning of the solution process, we fix θ; once we obtain a stable reconstructed image and mask, we optimize θ together with all the other decision variables, with a small step size and for a limited number of iterations.
In this section, we address the robustness of GAN-inversion methods and show the asymptotic optimality of the RGI method. Moreover, R-RGI enables fine-tuning of the learned manifold towards a specific image for better restoration quality, and thus improves the performance of both tasks.
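A minimal sketch of this two-stage R-RGI schedule is given below, assuming G is a PyTorch nn.Module. The defaults mirror the settings reported in the appendices (2000 total iterations, learning rate 0.1 for z and M, and fine-tuning θ in the later iterations with a learning rate of 1e−5); the use of two separate optimizers is our own illustrative design choice, not a prescription from the paper.

```python
import torch

def r_rgi(G, x, d=512, lam=0.1, steps=2000, ft_start=1500, lr=0.1, lr_theta=1e-5):
    """R-RGI (Equation 4): keep the generator weights theta frozen at first;
    once z and M are stable, fine-tune theta with a small learning rate."""
    z = torch.randn(1, d, requires_grad=True)
    M = torch.zeros_like(x, requires_grad=True)
    opt_zm = torch.optim.Adam([z, M], lr=lr)
    opt_theta = torch.optim.Adam(G.parameters(), lr=lr_theta)
    for t in range(steps):
        opt_zm.zero_grad()
        opt_theta.zero_grad()
        loss = (((1 - M) * (x - G(z))) ** 2).sum() + lam * M.abs().sum()
        loss.backward()
        opt_zm.step()
        if t >= ft_start:          # stage 2: mask-free generator fine-tuning
            opt_theta.step()       # small steps on theta mitigate the approximation gap
    return z.detach(), M.detach(), G
```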
" }, { "heading": "3.3 DISCUSSIONS", "text": "Connection to robust machine learning methods. The RGI method is rooted in robust learning methods (Caramanis et al., 2012; Gabrel et al., 2014), which aim to restore a clean signal (or achieve robust parameter estimation) in the presence of corrupted input data. Robust machine learning methods, including robust dimensionality reduction (Candès et al., 2011; Xu et al., 2010; Peng et al., 2012) and matrix completion (Candès & Recht, 2009; Jain et al., 2013), use statistical priors to model the signal to be restored, such as low rank, total variation, etc. Those statistical priors limit their applications involving complex natural images, e.g., the restoration of corrupted human face images.
Similarly, we also aim for signal restoration from a corrupted input signal, but with two key differences: (i) instead of restrictive statistical priors, we adopt a learned deep generative prior (Pan et al., 2021), i.e., G(z), which plays a key role in modeling complex natural images; (ii) instead of recovering the corruptions, we learn a sparse binary mask M that covers the corrupted region, which is much easier than learning the corruptions themselves. The RGI method significantly extends traditional robust machine learning methods to a wider range of applications.
Connection to robust statistics. The proposed method also has a deep connection with traditional robust statistics (Huber, 2011): when adopting an l2-norm reconstruction loss as in Theorem 1, the loss function of Equation 3 can be simplified as Σ_{ij} fij(z; λ), where
fij(z; λ) = (x − G(z))ij² if 2(x − G(z))ij² < λ, and λ − λ²/(4(x − G(z))ij²) otherwise,
which shares a similar spirit with M-estimators, e.g., metric Winsorizing and Tukey's biweight, and thus inherits their robustness with respect to outliers. Moreover, Equation 3 allows a flexible way of incorporating robustness into reconstruction loss functions beyond convex formulations, such as the perceptual loss and discriminator loss (Pan et al., 2021)." }, { "heading": "4 CASE STUDY", "text": "" }, { "heading": "4.1 MASK-FREE SEMANTIC INPAINTING", "text": "Semantic inpainting is an important task in image editing and restoration. (Please see Appendix A.1 for a comprehensive literature review on this topic.) Among all the methods, GAN-inversion based methods have the advantage of inpainting a single image with arbitrary missing regions without any requirement for mask information in the training stage. However, the requirement of a pre-configured corrupted region mask during testing hinders their application in mask-free semantic inpainting. In this section, we aim to show that the RGI method can achieve mask-free semantic inpainting by inheriting the mask-free training nature of GAN-inversion based methods, while avoiding the pre-configured mask requirement during testing. Therefore, we compare with the state-of-the-art GAN-inversion based image inpainting methods that project onto the Z space, including (a) Yeh et al. (2017) without a pre-configured mask (Yeh et al. (2017) w/o mask) as a baseline; (b) Yeh et al. (2017) with a pre-configured mask (Yeh et al. (2017) w/ mask); and (c) Pan et al. (2021) with a pre-configured mask (Pan et al. (2021) w/ mask).
Datasets and metrics. We evaluate the proposed methods on three datasets, CelebA (Liu et al., 2015), Stanford cars (Krause et al., 2013), and LSUN bedroom (Yu et al., 2015), which are commonly used for benchmarking image editing algorithms. We consider two different cases: (i) central block missing and (ii) random missing. We fill in the missing entries with pixels from N(−1, 1). PSNR and SSIM are used for performance evaluation. Please see Appendix D for implementation details.
Comparison Results. The PSNR and SSIM of image restoration are shown in Table 1. We can observe that (i) RGI outperforms the Yeh et al. (2017) w/o mask baseline and achieves a comparable performance with Yeh et al. (2017) w/ mask – the best possible result without fine-tuning the generator. However, there is no pre-configured mask requirement in the RGI method, which demonstrates RGI's robustness to unknown gross corruptions. Such performance improvement is significant, especially on the CelebA dataset, where the GAN learns a high-quality face manifold (high
As shown in Figure 3, the mask-free generator fine-tuning by R-RGI guarantees a high-quality image restoration. More qualitative results are in Appendix D." }, { "heading": "4.2 UNSUPERVISED PIXEL-WISE ANOMALY DETECTION", "text": "Unsupervised pixel-wise anomaly detection is becoming important in product cosmetic inspection. The extracted pixel-wise accurate defective region masks are then used for various downstream tasks, including aiding pixel-wise annotation, providing precise defect specifications (i.e. diameter, area) for product surface quality screening, and so on, which cannot be achieved by current sample level/localization level algorithms. The RGI/R-RGI method is developed for such an unsupervised fine-grained surface quality inspection task in a data-rich but defect/annotation-rare environment, which is common for mass production such as consumer electronics, steel manufacturing, and so on. In those applications, it is cheap to collect a large number of defect-free product images, while\nexpensive and time-consuming to collect and annotate defective samples due to the super-high yield rate and expert annotation requirements.\nThere are three categories of unsupervised pixel-wise anomaly detection methods, including robust optimization based methods, deep reconstruction based methods (including GAN-inversion based methods), and deep representation based methods (Cui et al., 2022) (Please see Appendix A.2 for a comprehensive literature review). In addition to the AnoGAN (Schlegl et al., 2017) baseline, we will compare with the SOTA method in each category, including the RASL (Peng et al., 2012), the SOTA method in robust optimization method, which improves the RPCA (Cande\u0300s et al., 2011) to solve the linear misalignment issue; DRAEM (Zavrtanik et al., 2021) which is the SOTA method in deep-reconstruction based methods; and PatchCore (Roth et al., 2022), a representative deep representation based method that performs the best on the MVTec (Bergmann et al., 2019) dataset.\nWe aim to show that with a simple robustness modification, the RGI/R-RGI will significantly improve the baseline AnoGAN\u2019s performance and outperform the SOTA. We use a PGGAN (Karras et al., 2017) as the backbone network and a l2 norm reconstruction loss term (Lrec). For AnoGAN, the pixel-wise reconstruction residual |x\u0302 \u2212 x| is used as the defective region indicator. We apply a simple thresholding of the residual and report the Dice coefficient of the best performing threshold.\nDatasets. We notice that the popular benchmark datasets, including MVTec AD (Bergmann et al., 2019) and BTAD (Mishra et al., 2021) for industrial anomaly detection, are not suitable for this task due to the following reasons: (i) the annotation of those datasets tends to cover extra regions of the real anomaly contour, which favors localization level methods. (ii) The number of clean images in most of the categories is small (usually 200 \u223c 300 images), which may not be sufficient for GAN training. A detailed discussion of the MVTec dataset can be found in Appendix E.\nTo gain a better control of the defect annotation and better reflect the data-rich but defect/annotationrare application scenario, we generate a synthetic defect dataset based on Product03 from the BTAD (Mishra et al., 2021) dataset. The synthetic dataset contains 900 defect-free images for training and 4 types of defects for testing, including crack, scratch, irregular, and mixed large (100 images in each category). 
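For reference, the compositing rule behind these synthetic defects (detailed in Appendix F) blends a binary defect mask into a clean image with a per-channel mean fill. The following is a minimal NumPy sketch under our own function naming; array shapes and the mean-fill rule follow Appendix F.

```python
import numpy as np

def make_synthetic_defect(x, M):
    """Composite a synthetic defect onto a clean image x (H, W, 3) using a
    binary mask M (H, W, 3): x_sys = (1 - M) * x + M * C, where each channel
    of C is the mean pixel value of x inside the masked region (Appendix F)."""
    C = np.zeros_like(x)
    for k in range(3):                               # per-channel mean fill
        m = M[:, :, k].astype(bool)
        C[:, :, k] = x[:, :, k][m].mean() if m.any() else 0.0
    return (1 - M) * x + M * C
```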
Qualitative and quantitative comparisons with SOTA methods are conducted on this dataset. The synthetic defect generation process is provided in Appendix F.
Metrics. We use the Dice coefficient to evaluate the pixel-wise anomaly detection performance, which is widely adopted for image segmentation tasks. The Dice coefficient is defined as (2∥M̂ ⊙ M∥0)/(∥M̂∥0 + ∥M∥0), where M̂ ∈ Rm×n is the predicted binary segmentation mask for the anomalous region and M ∈ Rm×n is the true binary segmentation mask, with 1 indicating the defective pixels and 0 otherwise. Notice that the pixel-wise AUROC score (Bergmann et al., 2019) is sensitive to class imbalance, which may give misleading results in defective region segmentation tasks when the defective region only covers a small portion of pixels in the whole image; this is often the case in industrial cosmetic inspection and medical applications (Baur et al., 2021; Mou et al., 2022; Zavrtanik et al., 2021). We mainly compare the Dice coefficients of different methods.
Comparison with the AnoGAN baseline on the synthetic defect dataset. The results are shown in Table 2 and Figure 4 (a). (i) Compared to AnoGAN (Schlegl et al., 2017), the only modification in the RGI method is the additional sparsity penalty term on the anomalous region mask M to enhance its robustness. However, with such a simple modification, RGI significantly outperforms AnoGAN under large defects ('mix large'), where the large anomalous region can easily lead to a deviation between the AnoGAN restored image and the real background. (ii) R-RGI achieves a significant and consistent performance improvement over the RGI and AnoGAN methods. The generator fine-tuning process closes the gap between the GAN learned normal background manifold and the specific test image, which leads to better background restoration and mask refinement. The implementation details and more qualitative results can be found in Appendix G.
Comparison with the SOTA methods on the synthetic defect dataset. The results are shown in Table 2 and Figure 4 (b). The R-RGI method performs the best on all defect types. The limited modeling capability of the low-rank prior used in RASL (Peng et al., 2012) leads to its poor performance. As a localization level method, PatchCore (Roth et al., 2022) can successfully localize the defects (Figure 4 (b)); however, the loss of resolution deteriorates its pixel-level anomaly detection performance. DRAEM (Zavrtanik et al., 2021) jointly trains a reconstructive sub-network and a discriminative sub-network with additional simulated anomaly samples on top of the clean training images; its performance highly relies on the coverage of the simulated anomaly samples and is more sensitive to large anomalies. More importantly, by incorporating the mask-free fine-tuning, the R-RGI method successfully improves the baseline AnoGAN method's performance over those SOTA methods on this task. More qualitative results can be found in Appendix G.
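The evaluation protocol above, i.e., thresholding an anomaly score map (the residual |x̂ − x| for AnoGAN, or the recovered soft mask M for RGI/R-RGI) and reporting the Dice coefficient at the best-performing threshold, can be sketched as follows; the function names are ours.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between binary masks: 2|pred AND gt| / (|pred| + |gt|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def best_dice(score_map, gt, thresholds):
    """Sweep thresholds over an anomaly score map and report the best Dice,
    matching the best-performing-threshold protocol of Section 4.2."""
    return max(dice(score_map > t, gt) for t in thresholds)
```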
" }, { "heading": "5 CONCLUSION", "text": "Robustness has been a long pursuit in the field of signal processing. Recently, utilizing GAN-inversion for signal restoration has gained popularity in various signal processing applications, since it demonstrates strong capacity in modeling the distribution of complex signals such as natural images. However, there is no robustness guarantee in current GAN-inversion methods.
To improve the robustness and accuracy of GAN-inversion in the presence of unknown gross corruptions, we propose the RGI method. Furthermore, we prove the asymptotic robustness of the proposed method, i.e., (i) the restored signal from RGI converges to the true clean signal (for image restoration); (ii) the identified mask converges to the true corrupted region mask (for anomaly detection). Moreover, we generalize the RGI method to the R-RGI method to close the GAN approximation gap, which further improves the image restoration and unsupervised anomaly detection performance.
The RGI/R-RGI method unifies two important tasks under the same framework and achieves SOTA performance: (i) mask-free semantic inpainting, an important computer vision task, which aims at a reasonable image restoration from an input image with missing regions; (ii) unsupervised pixel-wise anomaly detection, an important problem in cosmetic inspection for mass production, which seeks an optimal segmentation mask that covers the anomalous region of the image." }, { "heading": "A COMPREHENSIVE LITERATURE REVIEW", "text": "This section provides comprehensive literature reviews of mask-free semantic inpainting and unsupervised pixel-wise anomaly detection, including but not limited to GAN-inversion based methods.
A.1 COMPREHENSIVE LITERATURE REVIEW FOR MASK-FREE SEMANTIC INPAINTING
Mask-free semantic inpainting aims to restore the corrupted region of an input image with little or no information on the corruptions. To achieve this goal, multiple traditional single-image semantic inpainting methods exploit fixed image priors, including total variation (Afonso et al., 2010; Shen & Chan, 2002), low rank (Hu et al., 2012), patch offset statistics (He & Sun, 2012), and so on.
However, due to the fixed image prior, those methods make strong assumptions on the input image, such as smoothness or containing similar structures or patches, and may fail when dealing with large missing regions with novel content, e.g., recovering the nose or mouth in facial images (Yeh et al., 2017).
Notice that various convolutional neural network based methods for semantic inpainting have been proposed (Pathak et al., 2016; Iizuka et al., 2017; Li et al., 2017; Yu et al., 2018; Liu et al., 2018; Yu et al., 2019; Li et al., 2020; Suvorov et al., 2022; Song et al., 2018; Yan et al., 2018; Liu et al., 2019a;b; Nazeri et al., 2019; Ren et al., 2019; Xiong et al., 2019; Zeng et al., 2019; 2020; Zhao et al., 2021; Zhu et al., 2021). In addition to the requirement of a pre-configured mask for inpainting an input image, they usually need mask information in the training stage, either the same fixed mask as the region to be inpainted (Pathak et al., 2016), randomly sampled rectangular masks with random locations to cover irregular missing regions (Yang et al., 2017), a fixed set of irregular masks (Liu et al., 2018), or masks generated following a set of rules (Yu et al., 2019; Zhao et al., 2021). Those methods cannot fulfill the mask-free semantic inpainting goal.
Another closely related field is blind image inpainting, which aims to inpaint a single corrupted image without the need for a corrupted region mask (Liu et al., 2019b; Wang et al., 2020; Qian et al., 2018; Wu et al., 2019; El Helou & Süsstrunk, 2022).
However, most of them need a training set of possible corruptions (and/or corrupted region masks), which again restricts their ability to generalize to unknown gross corruptions. Thus, they are not mask-free methods.\nGAN-inversion for semantic inpainting was first introduced by Yeh et al. (2017) and was further developed by Gu et al. (2020); Pan et al. (2021); Wang et al. (2022b) for improving inpainting quality. They have the advantage of inpainting a single image with arbitrary missing regions, without any requirement for mask information in the training stage. However, they do require a pre-configured corrupted region mask for reliable inpainting during testing.\nThe RGI method inherits the mask-free training nature of GAN-inversion based semantic inpainting methods, while avoiding the pre-configured mask requirement during testing. Thus, we can achieve mask-free semantic inpainting for a single test image with arbitrary gross corruptions.\nA.2 COMPREHENSIVE LITERATURE REVIEW FOR UNSUPERVISED PIXEL-WISE ANOMALY DETECTION\nUnsupervised pixel-wise anomaly detection aims to extract a pixel-level segmentation mask for anomalous regions, which plays an important role in industrial and medical applications (Yan et al., 2017; Baur et al., 2021). Unlike image-level (identify anomalous samples) or localization level (i.e., localize anomaly) anomaly detection, unsupervised pixel-wise anomaly detection extracts the anomalous region segmentation mask through a pixel-wise comparison of the input image and corresponding normal background. Therefore, it requires a high-quality background reconstruction based on the input image. To achieve this goal, robust optimization methods rely on the statistical prior knowledge of the background (such as low-rank (Bouwmans & Zahzah, 2014) and smoothness (Yan et al., 2017)), which is effective when the true background satisfies those assumptions. However, such assumptions can be restrictive and highly dependent on the properties of background image for specific applications. In contrast, the deep reconstruction based methods (Pang et al., 2021) methods reconstruct the normal background from a learned subspace and assume such a subspace does not generalize to anomalies. Autoencoder (AE) (Bergmann et al., 2018), variational AE (VAE) (Kingma & Welling, 2013) and its variants (please see the review paper (Baur et al., 2021)) are popular tools. However, such assumptions may not always hold, i.e., an AE that achieves a satisfactory reconstruction of normal regions of the input image also \u201cgeneralize\u201d so well that it can always reconstruct the abnormal inputs as well (Gong et al., 2019). Some solutions, such as MemAE (Gong et al., 2019) and PAEDID (Mou et al., 2022) restrict this generation capability by reconstructing the background from a memory bank of clean training images, DRAEM Zavrtanik et al. (2021) restrict this generation capability by integrating an discriminated network. Another category of unsupervised approaches are deep representation-based methods, which learns the discriminate embeddings of normal images from a clean training set and achieve anomaly detection by comparing the embedding of a test image and the distribution of the normal image embeddings, such as PatchCore(Roth et al., 2022), Padim (Defard et al., 2021), Cflow (Gudovskiy et al., 2022) STFPM (Wang et al.,\n2021). 
Those methods usually serve as localization tools, since the comparison in the embedded space leads to a loss of resolution.
Recently, GAN-based methods have gained popularity among reconstruction based anomaly detection methods (Xia et al., 2022b). GANs have the advantage of generating realistic images from the learned manifold with sharp and clear detail, regardless of image type (Pang et al., 2021), which makes GANs a promising tool for background reconstruction in pixel-wise anomaly detection. Inspired by this idea, Schlegl et al. (2017) introduced GAN-inversion for unsupervised anomaly detection, and various follow-up works have been proposed, including EBGAN (Zenati et al., 2018), f-AnoGAN (Schlegl et al., 2019) and GANomaly (Akcay et al., 2018), which mainly focus on improving inference speed. Counterintuitively, instead of pixel-wise anomaly detection, the applications of GAN-based anomaly detection methods mainly focus on the image level/localization level (Xia et al., 2022a) with less satisfactory performance. For example, as one of the benchmark methods on the MVTec dataset (Bergmann et al., 2019), AnoGAN (Schlegl et al., 2017) performs the worst on image segmentation (even localization) compared to a vanilla AE, not to mention the state-of-the-art methods (Yu et al., 2021; Roth et al., 2022). This is due to the intrinsic issues of GAN-inversion: (i) Lack of robustness: due to the existence of the anomalous region, the reconstructed normal background can easily deviate from the ground truth background (Figure 1); (ii) Gap between the approximated and actual manifolds (Pan et al., 2021): even for a clean input image, it is difficult to identify a latent representation that can achieve perfect reconstruction. When the residual is used for pixel-wise anomaly detection, those issues will easily deteriorate its performance.
We aim to demonstrate the performance improvement of GAN-based anomaly detection methods by RGI, which makes GAN-inversion based methods practical in pixel-wise anomaly detection tasks." }, { "heading": "B PROOF TO THEOREMS 1 AND 2", "text": "B.1 PROOF TO THEOREM 1
Under the assumption, there exists z∗ such that ∥x − G(z∗)∥0 ≤ n0. Thus (z∗, M∗) solves Equation 2 to its optimal value of 0.
Denote Z∗∗ = {z ∈ Sd | ∥x − G(z)∥0 ≤ n0}. Note that for any z ∈ Z∗∗, we could set M = I_{x−G(z)} with ∥M∥0 ≤ n0, and then (z, M) also solves Equation 2 to its optimal value of 0. On the other hand, for any z ∉ Z∗∗, i.e., ∥x − G(z)∥0 ≥ n0 + 1, we have ∥(1 − M) ⊙ (x − G(z))∥2² > 0 unless ∥M∥0 ≥ ∥x − G(z)∥0 ≥ n0 + 1, which renders such an M infeasible. Thus, we conclude that Z∗∗ = Z∗. The same optimality arguments apply to every z̃ ∈ Z̃, as Z̃ ⊆ Z∗. We next prove that Equation 3 asymptotically converges to Z̃, which will complete the proof. Denote by f(z, M; λ) := ∥(1 − M) ⊙ (x − G(z))∥2² + λ∥M∥1 the objective function of Equation 3. Select any z̃ ∈ Z̃ and let M̃ = I_{x−G(z̃)}; we note that f(z̃, M̃; λ) = λñ.
Now, for any given z, we calculate the M̂(z) that minimizes f(z, M; λ).
Note that
f(z, M; λ) = Σ_{i,j} ((1 − Mij)² (x − G(z))ij² + λ|Mij|) := Σ_{i,j} fij(z, Mij; λ),
with
∂f/∂Mij = ∂fij/∂Mij = 2(x − G(z))ij² (Mij − 1) + λ ∂|Mij|,
where ∂|Mij| is the partial differential of |Mij|.
It is clear that ∂fij/∂Mij < 0 for Mij < 0 and ∂fij/∂Mij > 0 for Mij > 1, and thus the optimal M̂ij ∈ [0, 1]. Within the interval (0, 1), we in addition have
∂fij/∂Mij = 2(x − G(z))ij² (Mij − 1) + λ, (5)
and ∂fij/∂Mij = 0 solves to
M̂∗ij = 1 − λ/(2(x − G(z))ij²).
Discussion:
• If 2(x − G(z))ij² ≥ λ: ∂fij/∂Mij < 0 for Mij ∈ (0, M̂∗ij) and ∂fij/∂Mij > 0 for Mij ∈ (M̂∗ij, 1], proving the optimality of M̂ij = M̂∗ij.
• If 2(x − G(z))ij² < λ: ∂fij/∂Mij > 0 for Mij ∈ (0, 1), thus pointing to the optimal M̂ij = 0.
Combining these two cases and introducing (·)+ := max{·, 0}, we have
M̂ij = (1 − λ/(2(x − G(z))ij²))+.
We now take the optimal M̂ back into f and get
fij(z, M̂; λ) = (x − G(z))ij² if 2(x − G(z))ij² < λ, and λ − λ²/(4(x − G(z))ij²) otherwise. (6)
Define
µ(λ) := ((ñ + 2)/4) λ ≥ λ/2,
while also noting that µ ↓ 0 as λ ↓ 0. For any z ∈ Sd, define the index set
Λλ(z) = {(i, j) | (x − G(z))ij² > µ(λ)}
and
Z(λ) = {z ∈ Sd | |Λλ(z)| ≤ ñ}.
We next show that any z ∉ Z(λ) cannot be optimal. For any z ∉ Z(λ), we have |Λλ(z)| ≥ ñ + 1. We also note that when (x − G(z))ij² > µ(λ) ≥ λ/2, we have
fij(z, M̂; λ) > λ − λ²/(4µ(λ)) = (1 − 1/(ñ + 2)) λ.
Therefore,
f(z, M; λ) > (ñ + 1)(1 − 1/(ñ + 2)) λ > ñλ = f(z̃, M̃; λ),
proving the non-optimality of such z.
As this limits the optimal solutions to Z(λ), we now show that dH∞(Z(λ), Z̃) ↓ 0 as λ ↓ 0.
Assume the statement is not true, i.e., there exists ε0 > 0 such that dH∞(Z(λ), Z̃) ≥ ε0 for any λ > 0. Denote Ξ = {Λ ⊆ [m] × [n] | |Λ| ≤ ñ}, which is clearly finite. For any Λ ∈ Ξ, denote
ZΛ(λ) = {z ∈ Sd | (x − G(z))ij² ≤ µ(λ), ∀(i, j) ∉ Λ},
and we have the following decomposition of Z(λ):
Z(λ) = ∪_{Λ∈Ξ} ZΛ(λ),
which is mathematically saying that Z(λ) can be decomposed by enumerating all the possible cases of choosing at most ñ elements from [m] × [n]. Combined with the fact that Ξ is finite, we have dH∞(Z(λ), Z̃) = max_{Λ∈Ξ} dH∞(ZΛ(λ), Z̃) ≥ ε0.
Note that as Z(λ) is decreasing with respect to λ, dH∞(Z(λ), Z̃) is also decreasing, and the same applies to ZΛ(λ) for any Λ ∈ Ξ.
Therefore, there exists a particular \u03a8 \u2208 \u039e such that dH\u221e(Z\u03a8(\u03bb), Z\u0303) \u2265 \u03f50 for any \u03bb > 0 (If not, for any \u039b, there exists \u03bb(\u039b) such that dH\u221e(Z\u039b(\u03bb(\u039b)), Z\u0303) <\n\u03f50, and taking \u03bb\u2032 = min\u039b\u2208\u039e \u03bb(\u039b) we get dH\u221e(Z(\u03bb\u2032), Z\u0303) = max\u039b\u2208\u039e dH\u221e(Z\u039b(\u03bb\u2032), Z\u0303) \u2264 max\u039b\u2208\u039e d H \u221e(Z\u039b(\u03bb(\u039b)), Z\u0303) < \u03f50, contradicting the assumption).\nDenote for this particular \u03a8, U\u03a8(\u03bb) := {z \u2208 Z\u03a8(\u03bb) | d\u221e(z, Z\u0303) \u2265 \u03f50} \u0338= \u2205. Notice that U\u03a8(\u03bb) is compact as it is both closed and bounded, and decreasing with respect to \u03bb. Therefore, let \u03bbi \u2193 0 be any decreasing series to 0, and from Cantor\u2019s intersection theorem, we have\n\u2229\u221ei=0U\u03a8(\u03bbi) \u0338= \u2205. Note that for any z \u2208 \u2229\u221ei=0U\u03a8(\u03bbi), it is clear that (x \u2212 G(z))\u03a8 = 0, i.e., \u2225x \u2212 G(z)\u22250 \u2264 n\u0303 thus z \u2208 Z\u0303 , a contradiction to d\u221e(z, Z\u0303) \u2265 \u03f50.\nFinally, as we have for any z\u0302(\u03bb) optimal to Equation 3, as z\u0302(\u03bb) \u2208 Z(\u03bb) we have 0 \u2264 d\u221e(z\u0302(\u03bb), Z\u0303) \u2264 dH\u221e(Z(\u03bb), Z\u0303) \u2193 0 as \u03bb \u2193 0, or d\u221e(z\u0302(\u03bb), Z\u0303) \u2193 0 as \u03bb \u2193 0, which completes the proof.\nB.2 PROOF TO THEOREM 2\nM\u0303 \u2286 {M \u2208 {0, 1}m\u00d7n | \u2225M\u22250 \u2264 n\u0303} comes straightforward from the definition of Z\u0303 . We now decompose Z\u0303 in the same fashion as Z(\u03bb). For any \u039b \u2208 \u039e let Z\u0303\u039b := {z \u2208 Sd | (x \u2212 G(z))ij = 0, \u2200(i, j) /\u2208 \u039b}, and we have \u222a\u039b\u2208\u039eZ\u0303\u039b = {z \u2208 S | \u2225x \u2212 G(z)\u22250 \u2264 n\u0303} = Z\u0303 , the last quality from the minimality of n\u0303. For the same reason, Z\u0303\u039b is empty unless |\u039b| = n\u0303. Note that Z\u0303\u039b is closed and thus compact following the continuity of G, and thus the compactness of Z\u0303 .\nFor any non-empty Z\u0303\u039b and (i, j) \u2208 \u039b we have infz\u2208Z\u0303\u039b |(x \u2212 G(z))ij | > 0. If not, following the continuity of G, there exists z \u2208 Z\u0303\u039b such that (x \u2212 G(z))ij = 0, so \u2225x \u2212 G(z)\u22250 \u2264 n\u0303 \u2212 1, a contradiction to the minimality of n\u0303. (Note that in the case of Z\u0303\u039b = \u2205, infz\u2208Z\u0303\u039b |(x \u2212 G(z))ij | = +\u221e). Denote s := min\u039b\u2208\u039e min(i,j)\u2208\u039b infz\u2208Z\u0303\u039b |x\u2212G(z)|ij > 0, which is independent from \u03bb.\nGiven the continuity of G, for any \u03f5 > 0, there exists r > 0 such that for any z, z\u2032 \u2208 Sd with \u2225z \u2212 z\u2032\u2225\u221e < r, we have \u2225G(z) \u2212 G(z\u2032)\u2225\u221e < \u03f5. 
Specifically, we consider s/2 as \u03f5 and have the corresponding rs/2, and we select \u03bb\u0303 > 0 satisfying\ndH\u221e(Z(\u03bb\u0303), Z\u0303) \u2264 rs/2 2 and \u03bb\u0303 \u2264 s\n2\n3(n\u0303+ 1) <\ns2\n2 .\nNotice such a \u03bb\u0303 exists, since dH\u221e(Z(\u03bb\u0303), Z\u0303) \u2193 0 as \u03bb \u2193 0.\nFor any \u03bb \u2264 \u03bb\u0303, consider any optimal solution of Equation 3 as (z\u0302(\u03bb), M\u0302(\u03bb)) and we have there exists z\u0303 \u2208 Z\u0303 such that \u2225z\u0302(\u03bb)\u2212 z\u0302\u2225\u221e = d\u221e(z\u0302(\u03bb), Z\u0303) from the compactness of Z\u0303 . Note that\n\u2225z\u0302(\u03bb)\u2212 z\u0302\u2225\u221e = d\u221e(z\u0302(\u03bb), Z\u0303) \u2264 dH\u221e(Z(\u03bb), Z\u0303) \u2264 dH\u221e(Z(\u03bb\u0303), Z\u0303) \u2264 rs/2/2,\nand thus \u2225G(z\u0302) \u2212 G(z\u0303)\u2225\u221e < s/2. As z\u0303 \u2208 Z\u0303 , there exists \u039b \u2208 \u039e such that z\u0303 \u2208 Z\u0303\u039b, noting that |\u039b| = n\u0303. For any (i, j) \u2208 \u039b, it is clear that |(x \u2212 G(z\u0302(\u03bb)))ij | \u2265 |(x \u2212 G(z\u0303))ij | \u2212 |(G(z\u0302(\u03bb) \u2212 G(z\u0303))ij | \u2265 s\u2212 s/2 = s/2. Therefore, we have \u03bb < 2(x\u2212G(z\u0302(\u03bb)))2ij .\nTherefore, for any (i, j) \u2208 \u039b,\n1 \u2265 M\u0302(\u03bb)ij = 1\u2212 \u03bb 2(x\u2212G(z\u0302(\u03bb)))2ij \u2265 1\u2212 2\u03bb s2 \u2191 1, as \u03bb \u2193 0.\nNote that M\u0302(\u03bb)ij > 0. We also have\nfij(z\u0302(\u03bb), M\u0302(\u03bb);\u03bb) = \u03bb\u2212 \u03bb2 4(x\u2212G(z\u0302(\u03bb)))2ij \u2265 \u03bb\u2212 \u03bb 2 s2 \u2265 \u03bb\u2212 \u03bb 3(n\u0303+ 1)\nwhere the last inequality from \u03bb \u2264 \u03bb\u0303 \u2264 s2/3(n\u0303+ 1).\nNext, we prove that for any (i, j) /\u2208 \u039b, we have M\u0302(\u03bb)ij = 0. Assuming there exists (i\u2032, j\u2032) /\u2208 \u039b such that M\u0302(\u03bb)i\u2032j\u2032 \u0338= 0, we have 2(x\u2212G(z\u0302(\u03bb)))2i\u2032j\u2032 \u2265 \u03bb and thus\nfi\u2032j\u2032(z\u0302(\u03bb), M\u0302(\u03bb);\u03bb) = \u03bb\u2212 \u03bb2 4(x\u2212G(z\u0302(\u03bb)))2i\u2032j\u2032 \u2265 \u03bb 2 .\nTherefore, f(z\u0302(\u03bb), M\u0302(\u03bb);\u03bb) = \u2211\n(i,j)\u2208\u039b\nfij(z\u0302(\u03bb), M\u0302(\u03bb);\u03bb) + \u2211\n(i,j)/\u2208\u039b\nfij(z\u0302(\u03bb), M\u0302(\u03bb);\u03bb)\n\u2265 n\u0303(\u03bb\u2212 \u03bb 3(n\u0303+ 1) ) + \u03bb 2 > n\u0303\u03bb,\nwhich is a contradiction to the optimality of (z\u0302(\u03bb), M\u0302(\u03bb)) since f(z\u0303, M\u0303 ;\u03bb) = \u03bbn\u0303. Therefore, for any (i, j) /\u2208 \u039b, we have M\u0302(\u03bb)ij = 0.\nIn conclusion, let M\u0303 = Ix\u2212G(z\u0303) \u2208 M\u0303 and we have d\u221e(M\u0302(\u03bb),M\u0303) \u2264 \u2225M\u0302(\u03bb), M\u0303\u2225\u221e \u2264 2\u03bb/s2 \u2193 0 as \u03bb \u2193 0. It is also clear that M\u0303 = IM\u0302(\u03bb) as long as \u03bb < \u03bb\u0303, which completes the proof." }, { "heading": "C SIMULATION STUDY", "text": "In this section, we verify the robustness of the RGI method under gross corruptions using simulation.\nData Generation. A Progressive GAN (Karras et al., 2017) network is trained on the training set of 200599 aligned face images of size 128\u00d7 128 from CelebFaces Attributes dataset (CelebA (Liu et al., 2015)) and the pre-trained generator G(\u00b7) is extracted. 
Then we generate a test image x with central block corruptions (≈ 25% pixels) by: (i) sampling z ∈ R500 from the multivariate standard normal distribution, i.e., z ∼ N(0, I); (ii) generating x by
x_ij = e_ij if i, j ∈ {33, . . . , 96}, and G(z)_ij otherwise,
where e_ij ∼ N(e, 1) and e is the mean corruption level. The pixel values of images generated by G(·) are approximately within [−1, 1]. To verify the robustness of the RGI method, we vary the mean corruption level e in the range {−1, −0.5, 0, 0.5, 1}. The process is repeated to generate 100 corrupted input images for each mean corruption level.
Solution Procedure. For each mean corruption level, we use three methods to restore G(z): (i) l2: solving Equation 1 with the l2 reconstruction loss Lrec(·) = ∥·∥2²; (ii) l1: solving Equation 1 with the l1 reconstruction loss Lrec(·) = ∥·∥1; (iii) RGI: solving Equation 3 with the l2 reconstruction loss Lrec(·) = ∥·∥2². All methods are solved by ADAM (Kingma & Ba, 2014) for 1000 iterations. The root mean squared image restoration error (RMSE) over the 100 input images is recorded, i.e.,
RMSE = √( (1/100) Σ_{i=1}^{100} (1/(mn)) ∥G(z_i) − G(ẑ_i)∥2² ),
where G(z_i) and G(ẑ_i) are the true and restored backgrounds, respectively.
Results: The RMSE of image restoration under different corruption levels for methods (i)-(iii) is shown in Figure 5. The RGI method demonstrates superior robustness, with an RMSE close to zero at all five corruption levels. The l2 and l1 reconstruction losses perform significantly worse under large corruption magnitudes, which is expected since they seek an image on the learned manifold that is close to the corrupted input image (even though the l1 reconstruction loss adds a little robustness), which can lead to significant deviation of the image restoration.
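For reproducibility, the corruption protocol and the RMSE metric of this simulation can be sketched as follows. This is a NumPy sketch: the block coordinates and noise model follow the text above (rows/columns 33-96, written 0-indexed below), while the restoration routine producing G(ẑ) is assumed to be available.

```python
import numpy as np

def corrupt_center_block(img, e, lo=32, hi=96):
    """Overwrite the central 64x64 block of a 128x128 image with N(e, 1)
    noise, as in the Appendix C simulation (indices 33-96, here 0-indexed)."""
    x = img.copy()
    x[lo:hi, lo:hi] = np.random.normal(e, 1.0, size=x[lo:hi, lo:hi].shape)
    return x

def rmse(true_bgs, restored_bgs):
    """RMSE over restorations: sqrt(mean_i ||G(z_i) - G(z_hat_i)||_2^2 / (m*n))."""
    errs = [((t - r) ** 2).mean() for t, r in zip(true_bgs, restored_bgs)]
    return float(np.sqrt(np.mean(errs)))
```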
" }, { "heading": "D DETAILED DISCUSSION ON SEMANTIC IMAGE INPAINTING", "text": "D.1 IMPLEMENTATION DETAILS
A Progressive GAN (Karras et al., 2017) network is used as the backbone network. Notice that the discriminator network is usually used to regularize the generated image, such that the generated image looks real. Different methods incorporate the discriminator differently: it can be incorporated as a separate penalty term in the objective function (Yeh et al., 2017), incorporated as a modified reconstruction loss term (Pan et al., 2021), or ignored in the loss function (Gu et al., 2020). For a fair comparison, we use a weighted combination of an l2-norm term (with weight 1) and a discriminator penalty term (with weight 0.1) as the reconstruction loss for the compared methods. In all experiments, the optimization problems in Equation 1 (GAN-inversion), Equation 3 (RGI) and Equation 4 (R-RGI) are solved by ADAM (Kingma & Ba, 2014) for 2000 iterations, with a learning rate of 0.1 for both z and M. For Equation 4 (R-RGI), we use the last 500 iterations for mask-free fine-tuning with a learning rate of 1e−5 for θ. The tuning parameter λ is selected by cross-validation. Notice that using SSIM and PSNR gives different cross-validation results, and here we report both:
(i) CelebA: block missing, RGI and R-RGI: λSSIM = λPSNR = 0.07; random missing, RGI: λSSIM = 0.2, λPSNR = 0.5, R-RGI: λSSIM = 0.25, λPSNR = 0.6.
(ii) Stanford cars: block missing, RGI: λSSIM = 0.9, λPSNR = 1.0, R-RGI: λSSIM = λPSNR = 0.9; random missing, RGI: λSSIM = λPSNR = 1.0, R-RGI: λSSIM = λPSNR = 0.8.
(iii) LSUN bedroom: block missing, RGI: λSSIM = λPSNR = 0.8, R-RGI: λSSIM = λPSNR = 0.7; random missing, RGI: λSSIM = 0.8, λPSNR = 1.0, R-RGI: λSSIM = 0.6, λPSNR = 0.9.
D.2 DATASET DETAILS
CelebA (Liu et al., 2015) contains a training set of 200,599 aligned face images. We resize them to the size of 128 × 128 and use the remaining 2,000 images as the test set. Missing regions are generated as follows: (i) central block missing of size 32 × 32 and (ii) random missing (≈ 50% pixels). We fill in the missing entries with pixels from N(−1, 1). We randomly select 100 test images to evaluate algorithm performance.
Stanford cars (Krause et al., 2013) contains 16,185 images of 196 classes of cars and is split into 8,144 training images and 8,041 testing images. We crop the images based on the provided bounding boxes and resize them to the size of 128 × 128. Missing regions are generated as follows: (i) central block missing of size 16 × 16 and (ii) random missing (≈ 25% pixels). We fill in the missing entries with pixels from N(−1, 1). The training and test set partitions provided by the dataset are used. We randomly select 100 test images to evaluate algorithm performance.
LSUN bedroom (Yu et al., 2015) contains 3,033,042 images for training and 300 images for validation. We resize the images to the size of 128 × 128. Missing regions are generated as follows: (i) central block missing of size 16 × 16 and (ii) random missing (≈ 25% pixels). We fill in the missing entries with pixels from N(−1, 1). We randomly select 100 images from the validation set to evaluate algorithm performance.
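The missing-region protocols above (central block missing and random missing, both filled with N(−1, 1) noise) can be sketched as follows; block sizes and missing fractions are per-dataset as listed above, and the function names are ours.

```python
import numpy as np

def apply_block_missing(img, block=32):
    """Central block missing: fill a centered block x block region with N(-1, 1) noise."""
    x = img.copy()
    h, w = x.shape[:2]
    t, l = (h - block) // 2, (w - block) // 2
    x[t:t + block, l:l + block] = np.random.normal(
        -1.0, 1.0, x[t:t + block, l:l + block].shape)
    return x

def apply_random_missing(img, frac=0.5):
    """Random missing: corrupt a given fraction of pixels with N(-1, 1) noise."""
    x = img.copy()
    drop = np.random.rand(*x.shape[:2]) < frac      # per-pixel missing indicator
    x[drop] = np.random.normal(-1.0, 1.0, x[drop].shape)
    return x
```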
Next, we show qualitative image restoration results on these datasets. Notice that we avoid showing CelebA results due to copyright/privacy concerns.

D.3 QUALITATIVE IMAGE RESTORATION RESULTS

Figure 6 shows the qualitative image restoration results on the Stanford cars (Krause et al., 2013) dataset. From columns 2-4, we can observe that the RGI method performs comparably to Yeh et al. (2017) w/ mask and improves upon Yeh et al. (2017) w/o mask. However, the performance of Yeh et al. (2017) is not satisfactory even with mask information (for example, the second row of Figure 6); this is mainly due to the GAN approximation gap (Pan et al., 2021). In this case, further generator fine-tuning significantly improves the faithfulness of the restored images, as shown in columns 5-6.

Figure 7 shows the qualitative image restoration results on the LSUN bedroom (Yu et al., 2015) dataset. A similar conclusion can be drawn." }, { "heading": "E DETAILED RESULTS ON THE MVTEC AD DATASET", "text": "Annotation issues of the MVTec AD dataset. Figure 8 shows example images from the MVTec AD dataset together with the corresponding annotations. It is clear that the annotations cover a larger area than the exact defect contour, which favors localization-level methods such as PatchCore (Roth et al., 2022). However, this level of annotation is neither sufficient to fulfill fine-grained surface quality inspection goals, such as providing precise defect specifications (e.g., diameter, length, area) for product surface quality screening, nor suitable for training and evaluating pixel-wise anomaly detection algorithms.

Qualitative assessment on MVTec AD. Figure 9 shows the qualitative results on the wood product. We can observe that the performance of both RGI and AnoGAN is poor: the restored images are far from the true background, which leads to a noisy anomaly segmentation mask. The main reason is the small size of the training set, where the learned generator tends to overfit (memorize) the training set (Karras et al., 2020; Webster et al., 2019) rather than generalize to images in the test set. This leads to a large gap between the learned training image manifold and the test image manifold. By generator fine-tuning, R-RGI can mitigate this gap and improve both background reconstruction and anomalous region identification.

However, the success of the RGI/R-RGI method is built upon the assumption of a training dataset large enough for the generator to learn a reasonable manifold that generalizes to unseen test samples; mask-free fine-tuning can then mitigate the GAN approximation gap to further improve performance. When the training set is too small, so that the generator tends to overfit (memorize) the training set, relying on mask-free fine-tuning alone can lead to unstable results." }, { "heading": "F SYNTHETIC DEFECT GENERATION ON BTAD PRODUCT03", "text": "The detailed defect generation process is discussed in this section. Product03 has 1,000 defect-free images, from which we randomly select 100 images for defect generation. To improve the faithfulness of the generated defective images, we collect binary defective region masks (equal to 1 for defective pixels and 0 otherwise) from the annotations of the MVTec AD (Bergmann et al., 2019) dataset and organize them into four categories: crack, irregular, scratch, and mixed_large (defective region area larger than 400 pixels). Then, we generate the synthetic defective image x^{sys,j}_i by

x^{sys,j}_i = (1 - M^j_i) \odot x_i + M^j_i \odot C^j_i, \quad i \in \{1, \dots, 100\},\; j \in \{\text{crack}, \text{irregular}, \text{scratch}, \text{mixed\_large}\},

where x_i is the i-th input defect-free image and M^j_i is a randomly selected mask from the j-th category. C^j_i ∈ R^{m×n×3} is an image with constant channel values used to fill in the defective region. To avoid trivial anomaly detection, we set C^j_i to the average pixel value of the defective region, i.e.,

C^j_i[:, :, k] = \frac{\sum_{p_1 \in [m],\, p_2 \in [n]} M^j_i[p_1, p_2, k]\, x_i[p_1, p_2, k]}{\sum_{p_1 \in [m],\, p_2 \in [n]} M^j_i[p_1, p_2, k]}, \quad k \in [3].

Finally, we obtain four categories of synthetic defects with 100 defective images in each category. Examples of generated defective images are shown in Figure 10. We can observe that the defects are close to the background color, which makes them hard to distinguish even for human eyes; this avoids trivial defect detection. A code sketch of this generation step is given below.
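A minimal NumPy sketch of the blending step above (illustrative only; the function name is ours, and `x` and `M` are assumed to be float arrays of shape (m, n, 3)):

```python
import numpy as np

def make_synthetic_defect(x, M):
    """Blend a constant-color defect into a defect-free image x.

    x: (m, n, 3) defect-free image, M: (m, n, 3) binary defect mask.
    The fill color C is the per-channel mean of x inside the masked
    region, so the defect stays close to the background color.
    """
    area = M.sum(axis=(0, 1))                           # per-channel mask area
    C = (M * x).sum(axis=(0, 1)) / np.maximum(area, 1)  # per-channel mean in the mask
    return (1 - M) * x + M * C                          # x_sys = (1 - M) . x + M . C
```

Each of the 100 selected defect-free images is paired with a randomly drawn mask from each category, yielding the 4 × 100 synthetic defective images used in Appendix G.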
" }, { "heading": "G DETAILED RESULTS ON THE BTAD DATASET", "text": "G.1 IMPLEMENTATION DETAILS

We use a PGGAN (Karras et al., 2017) as the backbone network and an l2-norm reconstruction loss term (Lrec) for the RGI and R-RGI methods. The tuning parameter λ is selected via cross-validation using the Dice coefficient as the metric. For RGI, the following values are selected: λ_crack = λ_irregular = λ_scratch = λ_mixed_large = 0.4. For R-RGI, the following values are selected: λ_crack = 0.12, λ_irregular = 0.1, λ_scratch = 0.14, λ_mixed_large = 0.12. All optimization problems are solved by ADAM (Kingma & Ba, 2014) for 2000 iterations, with a learning rate of 0.1 for both z and M. For R-RGI, we use the last 1500 iterations for mask-free fine-tuning with a learning rate of 1e−5 for θ.

G.2 QUALITATIVE RESULTS" } ], "year": 2022, "abstractText": "Generative adversarial networks (GANs), trained on a large-scale image dataset, can be a good approximator of the natural image manifold. GAN-inversion, using a pre-trained generator as a deep generative prior, is a promising tool for image restoration under corruptions. However, the performance of GAN-inversion can be limited by a lack of robustness to unknown gross corruptions, i.e., the restored image might easily deviate from the ground truth. In this paper, we propose a Robust GAN-inversion (RGI) method with a provable robustness guarantee to achieve image restoration under unknown gross corruptions, where a small fraction of pixels are completely corrupted. Under mild assumptions, we show that the restored image and the identified corrupted region mask converge asymptotically to the ground truth. Moreover, we extend RGI to Relaxed-RGI (R-RGI) for generator fine-tuning to mitigate the gap between the GAN learned manifold and the true image manifold while avoiding trivial overfitting to the corrupted input image, which further improves the image restoration and corrupted region mask identification performance. The proposed RGI/R-RGI method unifies two important applications with state-of-the-art (SOTA) performance: (i) mask-free semantic inpainting, where the corruptions are unknown missing regions, the restored background can be used to restore the missing content. (ii) unsupervised pixelwise anomaly detection, where the corruptions are unknown anomalous regions, the retrieved mask can be used as the anomalous region’s segmentation mask.", "creator": "LaTeX with hyperref" }, "output": [ [ "1. The proposed RGI or R-RGI methods require to solve the large optimization problems during the inference. There is no discussion on the computational cost involved in solving the large optimization problems during the inference. It could be very computationally infeasible for practical application.", "2. Another limitation with the GAN inversion approach is that it requires a large number of normal images to train a GAN model for the specific task. In many cases, there are not enough numbers of normal images for training the GAN models. In their experiments on image inpainting, the datasets used in the experiments are CelebA, Standard cars, and LSUN bedroom all contain very large numbers of images for the GAN model training. But for MVTec and BTAD datasets, the numbers of training images are not a lot. There is no discussion on the numbers of images used in the model training for the implementation on anomaly detection experiments. There should be more discussion on the issue of training data requirement for the proposed method.", "3. The experiments on pixel-wise anomaly detection are only performed on their synthetic defect dataset based on BTAD dataset. The experimental evaluation is not sufficient to demonstrate the proposed RGI or R-RGI achieve SOTA performance for anomaly detection.
They should perform experiments with the standard protocol on the commonly used datasets, such as MVTec and BTAD datasets and include experimental comparison with some recent SOTA methods.", "4. The discussion in section 3.3 claims the proposed RGI method is connected to robust statistics and the loss function in eq. 3 can be simplified by using the robust error function, which avoids introducing M in the loss function. However, there is no justification of this claim. If this is the case, is the resulting optimization problem also simpler to solve?" ], [ "1. \"The idea is simple, effective and intuitive, but the problem setting is not that practical.\"", "2. \"Center block is too easy to be overfitted, and it cannot reveal the advantages of the mask learning.\"", "3. \"Random missing is also not practical enough, and a simple denoising method may be more effective in resolving the problem.\"", "4. \"Synthetic defects are limited, and cannot be proven to be generalized to real tasks.\"", "5. \"The sparsity assumption is limited.\"", "6. \"Similar ideas can also be found in paper [1], while the authors of that inpainting paper proposed to jointly optimize the mask and reconstruction loss during training.\"", "7. \"Finetunning the generator may be not that practical for real applications.\"" ], [ "1. The trade-off parameter \u03bb seems to be an important value, but the article does not show the sensitivity of the method to lambda.", "2. The proof may not make much sense.", "3. Its biggest problem is that some assumptions basically do not appear in real-world applications.", "4. This reduces the contribution of this paper, because it seems that this guarantee is not important." ], [ "1. The experiment seems not thorough.", "2. I'm curious about why they only compare limited cases in the experiment.", "3. I think the method should compare with more previous works and datasets to demonstrate its efficiency.", "4. I think the paper should add an ablation study to demonstrate the method." ], [ "1. \"I am not sure whether this method is still in this way. If they still solve this problem in this manner and show poor generalization ability to other scenarios, I donot think the GAN-Inversion is a big plus.\"", "2. \"To relive my concern, please provide more details in the rebuttal about the details of training data, for example, if you want to get the result on CELEBA dataset, do you need to use similar data to train your network for the generative image prior?\"", "3. \"Is it ok to show more results beyond current established dataset?\"", "4. \"Is it ok to provide more details about Figure 2? From the figure, this work still relies on some kinds of mask to achieve the results, though this mask is not explicitly obtained. However, considering the examples in this figure, where the mask is with significant differences to its surrounding regions, it may not be very difficult to estimate such a mask.\"", "5. \"Image inpainting has already been a rat race. Many methods have been proposed. Why do you only compare with so limited methods?\"" ] ], "review_num": 5, "item_num": [ 4, 7, 4, 4, 5 ] }