|
{ |
|
"ID": "-k7Lvk0GpBl", |
|
"Title": "Localized Randomized Smoothing for Collective Robustness Certification", |
|
"Keywords": "Robustness, Certification, Verification, Trustworthiness, Graph neural networks", |
|
"URL": "https://openreview.net/forum?id=-k7Lvk0GpBl", |
|
"paper_draft_url": "/references/pdf?id=CkizBZOA4-", |
|
"Conferece": "ICLR_2023", |
|
"track": "Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)", |
|
"acceptance": "Accept: notable-top-25%", |
|
"review_scores": "[['3', '6', '4'], ['4', '8', '4'], ['4', '8', '4']]", |
|
"input": { |
|
"source": "CRF", |
|
"title": "LOCALIZED RANDOMIZED SMOOTHING", |
|
"authors": [], |
|
"emails": [], |
|
"sections": [ |
|
{ |
|
"heading": "1 INTRODUCTION", |
|
"text": "There is a wide range of tasks that require models making multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all \u2013 just like simple classifiers (Szegedy et al., 2014) \u2013 they can fall victim to adversarial attacks (Xie et al., 2017; Zu\u0308gner & Gu\u0308nnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measuring errors can cause predictions to unexpectedly change.\nWe propose a novel method providing provable guarantees on how many predictions can be changed by an adversary. As all outputs operate on the same input, they have to be attacked simultaneously by choosing a single perturbed input, which can be more challenging for an adversary than attacking them independently. We must account for this to obtain a proper collective robustness certificate.\nThe only dedicated collective certificate that goes beyond certifying each output independently (Schuchardt et al., 2021) is only beneficial for models we call strictly local, where each output depends on a small, pre-defined subset of the input. Multi-output classifiers , however, are often only softly local. While all their predictions are in principle dependent on the entire input, each output may assign different importance to different subsets. For example, convolutional networks for image segmentation can have small effective receptive fields (Luo et al., 2016; Liu et al., 2018), i.e. primarily use a small region of the image in labeling each pixel. Many models for node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes. Transformers, which can in principle attend to arbitrary parts of the input, may in practice learn \u201dsparse\u201d attention maps, with the prediction for each token being mostly determined by a few (not necessarily nearby) tokens (Shi et al., 2021).\nSoftly local models pose a budget allocation problem for an adversary that tries to simultaneously manipulate multiple predictions by crafting a single perturbed input. When each output is primarily focused on a different part of the input, the attacker has to distribute their limited adversarial budget and may be unable to attack all predictions at once.\nWe propose localized randomized smoothing, a novel method for the collective robustness certification of softly local models that exploits this budget allocation problem. It is an extension of randomized smoothing (Le\u0301cuyer et al., 2019; Cohen et al., 2019), a versatile black-box certification method which is based on constructing a smoothed classifier that returns the expected prediction of a model under random perturbations of its input (more details in \u00a7 2). Randomized smoothing is typically applied to single-output models with isotropic Gaussian noise. In localized smoothing however, we smooth each output (or set of outputs) of a multi-output classifier using a different distribution that is anisotropic. This is illustrated in Fig. 1, where the predicted segmentation masks for each grid cell are smoothed using a different distribution. For instance, the distribution for segmenting the top-right cell applies less noise to the top-right cell. 
The smoothing distribution for segmenting the bottom-left cell applies significantly more noise to the top-right cell.\nGiven a specific output of a softly local model, using a low noise level for the most relevant parts of the input lets us preserve a high prediction quality. Less relevant parts can be smoothed with a higher noise level to guarantee more robustness. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which part of the input. This information about the smoothed model\u2019s locality can then be used to combine the per-prediction certificates into a stronger collective certificate that accounts for the adversary\u2019s budget allocation problem.\nOur core contributions are: \u2022 Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers. \u2022 An efficient anisotropic randomized smoothing certificate for discrete data. \u2022 A collective certificate based on localized randomized smoothing." |
|
}, |
|
{ |
|
"heading": "2 BACKGROUND AND RELATED WORK", |
|
"text": "Randomized smoothing. Randomized smoothing is a certification technique that can be used for various threat models and tasks. For the sake of exposition, let us discuss a certificate for l2 perturbations (Cohen et al., 2019). Assume we have a D-dimensional input space RD, label set Y and classifier g : RD \u2192 Y. We can use isotropic Gaussian noise to construct the smoothed classifier f = argmaxy\u2208Y Prz\u223cN (x,\u03c3) [g(z) = y] that returns the most likely prediction of base classifier g under the input distribution1. Given an input x \u2208 RD and smoothed prediction y = f(x), we can then easily determine whether y is robust to all l2 perturbations of magnitude \u03f5, i.e. whether \u2200x\u2032 : ||x\u2032 \u2212 x||2 \u2264 \u03f5 : f(x\u2032) = y. Let q = Prz\u223cN (x,\u03c3) [g(z) = y] be the probability of predicting label y. The prediction is certifiably robust if \u03f5 < \u03c3\u03a6\u22121(q) (Cohen et al., 2019). This result showcases a trade-off inherent to randomized smoothing: Increasing the noise level (\u03c3) may strengthen the certificate, but could also lower the accuracy of f or reduce q and thus weaken the certificate.\nWhite-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of specific multi-output models by analyzing their specific architecture\n1In practice, all probabilities have to be estimated using Monte Carlo sampling (see discussion in \u00a7 G).\nand weights (for example, see (Tran et al., 2021; Zu\u0308gner & Gu\u0308nnemann, 2019; Bojchevski & Gu\u0308nnemann, 2019; Zu\u0308gner & Gu\u0308nnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)). They are however not designed to certify collective robustness, i.e. determine whether multiple outputs can be simultaneously attacked using a single perturbed input. They can only determine independently for each prediction whether or not it can be attacked.\nCollective robustness certificates. Most directly related to our work is the aforementioned certificate of Schuchardt et al. (2021), which is only beneficial for strictly local models (i.e. multi-output models where each output has a small receptive field). In \u00a7 I we show that, for randomly smoothed models, their certificate is a special case of our certificate. SegCertify (Fischer et al., 2021) is a collective certificate for semantic segmentation. This method certifies each output independently using isotropic randomized smoothing (ignoring the budget allocation problem) and uses Holm correction (Holm, 1979) to obtain tighter Monte Carlo estimates. It then counts the number of certifiably robust predictions and tests whether it equals the number of predictions. In \u00a7 H we demonstrate that our method can always provide guarantees that are at least as strong. Another method that can in principle be used to certify collective robustness is center smoothing (Kumar & Goldstein, 2021). It bounds the change of a vector-valued function w.r.t to a distance function. With the l0 pseudo-norm as the distance function, it can bound how many predictions can be simultaneously changed. More recently, Chen et al. (2022) proposed a collective certificate for bagging classifiers. Different from our and prior work, they consider poisoning (train-time) instead of evasion (test-time) attacks.\nAnisotropic randomized smoothing. 
While only designed for single-output classifiers, two recent certificates for anisotropic Gaussian and uniform smoothing (Fischer et al., 2020; Eiras et al., 2021) can be used as a component of our collective certification approach: They can serve as per-prediction certificates, which we can then combine into our stronger collective certificate (more details in \u00a7 3.2)." |
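For concreteness, here is a minimal sketch of the isotropic Gaussian certificate described above: it computes the certified l2 radius σΦ⁻¹(q) from a (lower bound on the) majority-class probability q. The function name is ours.

```python
from scipy.stats import norm

def certified_l2_radius(q: float, sigma: float) -> float:
    """Cohen et al. (2019) style radius: the smoothed prediction is robust to
    all l2 perturbations with norm strictly below sigma * Phi^{-1}(q), where
    q is a (lower bound on the) probability of the predicted class."""
    if q <= 0.5:
        return 0.0  # no robustness can be certified
    return sigma * norm.ppf(q)

# Example: with sigma = 0.25 and q = 0.99 the radius is about 0.58.
print(certified_l2_radius(0.99, 0.25))
```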
|
}, |
|
{ |
|
"heading": "3 PRELIMINARIES", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "3.1 COLLECTIVE THREAT MODEL", |
|
"text": "We assume a multi-output classifier f : XDin \u2192 YDout , that maps Din-dimensional inputs to Dout labels from label set Y. We further assume that this classifier f is the result of randomly smoothing each output of a base classifier g. Given this multi-output classifier f , an input x \u2208 XDin and the corresponding predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T \u2286 {1, . . . , Dout} to change. That is, their objective is minx\u2032\u2208Bx \u2211 n\u2208T I [fn(x\n\u2032) = yn], where I is the indicator function and Bx \u2286 XDin is the perturbation model. As is common in robustness certification, we assume a norm-bound perturbation model, i.e. Bx = { x\u2032 \u2208 XDin | ||x\u2032 \u2212 x||p \u2264 \u03f5 } with p, \u03f5 \u2265 0. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to be attacked using a single input." |
|
}, |
|
{ |
|
"heading": "3.2 A RECIPE FOR COLLECTIVE CERTIFICATES", |
|
"text": "Before discussing localized randomized smoothing, we show how to combine arbitrary perprediction certificates into a collective certificate, a procedure that underlies both our method and that of Schuchardt et al. (2021) and Fischer et al. (2021). The first step is to apply an arbitrary certification procedure to each prediction y1, . . . , yDout in order to obtain per-prediction base certificates.\nDefinition 3.1 (Base certificates). A base certificate for a prediction yn = fn(x) is a set H(n) \u2286 XDin of perturbed inputs s.t. \u2200x\u2032 \u2208 H(n) : fn(x\u2032) = yn.\nUsing these base certificates, one can derive two bounds on the adversary\u2019s objective:\nmin x\u2032\u2208Bx \u2211 n\u2208T I [fn(x \u2032) = yn] \u2265 (1.1) min x\u2032\u2208Bx \u2211 n\u2208T I [ x\u2032 \u2208 H(n) ] \u2265 (1.2) \u2211 n\u2208T min x\u2032\u2208Bx I [ x\u2032 \u2208 H(n) ] . (1)\nEq. 1.1 follows from Definition 3.1 (if a prediction is certifiably robust to x\u2032, then fn(x\u2032) = yn), while Eq. 1.2 results from moving the min operator inside the summation.\nEq. 1.2 is the na\u0131\u0308ve collective certificate: It iterates over the predictions and counts how many are certifiably robust to perturbation model Bx. Each summand involves a separate minimization\nproblem. Thus, the certificate neglects that the adversary has to choose a single perturbed input to attack all outputs. SegCertify (Fischer et al., 2021) applies this to isotropic Gaussian smoothing.\nWhile Eq. 1.1 is seemingly tighter than the na\u0131\u0308ve collective certificate, it may lead to identical results. For example, let us consider the most common case where the base certificates guarantee robustness within an lp ball, i.e. H(n) = { x\u2032\u2032 | ||x\u2032\u2032 \u2212 x||p \u2264 r(n) } with certified radii r(n). Then, the optimal solution to both Eq. 1.1 and Eq. 1.2 is to choose an arbitrary x\u2032 with ||x\u2032 \u2212 x|| = \u03f5:\nmin x\u2032\u2208Bx \u2211 n\u2208T I [ x\u2032 \u2208 H(n) ] = \u2211 n\u2208T I [ \u03f5 < r(n) ] = \u2211 n\u2208T min x\u2032\u2208Bx I [ x\u2032 \u2208 H(n) ] .\nThe main contribution of Schuchardt et al. (2021) is to notice that, by exploiting strict locality (i.e. the outputs having small receptive fields), one can augment certificate Eq. 1.1 to make it tighter than the naive collective certificate from Eq. 1.2. One must simply mask out all perturbations falling outside a given receptive field when evaluating the corresponding base certificate:\nmin x\u2032\u2208Bx \u2211 n\u2208T I [( \u03c8(n) \u2299 x\u2032 + (1\u2212\u03c8(n))\u2299 x ) \u2208 H(n) ] .\nHere, \u03c8(n) \u2208 {0, 1}Din encodes the receptive field of fn and \u2299 is the elementwise product. If two outputs fn and fm have disjoint receptive fields (i.e. \u03c8(n) T \u03c8(m) = 0), then the adversary has to split up their limited adversarial budget and may be unable to attack both at once." |
|
}, |
|
{ |
|
"heading": "4 LOCALIZED RANDOMIZED SMOOTHING", |
|
"text": "The core idea behind localized smoothing is that, rather than improving upon the na\u0131\u0308ve collective certificate by using external knowledge about strict locality, we can use anisotropic randomized smoothing to obtain base certificates that directly encode soft locality. Here, we explain our approach in a domain-independent manner before turning to specific distributions and data-types in \u00a7 5.\nIn localized randomized smoothing, we associate the outputs g1, . . . , gDout of the base classifier with their own, distinct anisotropic2 smoothing distributions \u03a8(1)x , . . . ,\u03a8 (Dout) x that depend on input x. For example, they could be different anisotropic Gaussian distributions with mean x and distinct covariance matrices \u2013 like in Fig. 1, where we use a different distribution for the segmentation of each grid cell. We then use these distributions to construct the smoothed classifier f , where each output fn(x) is the result of randomly smoothing gn(Z) with \u03a8 (n) x .\nFinally, to certify robustness for a vector of predictions y = f(x), we follow the procedure discussed in \u00a7 3.2, i.e. compute per-prediction base certificates H(1), . . . ,H(Dout) and solve optimization problem Eq. 1.1. We do not make any assumption about how the base certificates are computed. However, we require that they comply with a common interface, which will later allow us to evaluate the collective certificate via linear programming:\nDefinition 4.1 (Base certificate interface). A base certificate H(n) \u2286 XDin is compliant with our base certificate interface if there is a w \u2208 RDin+ and \u03b7(n) \u2208 R+ such that\nH(n) = { x\u2032 \u2223\u2223\u2223\u2223\u2223 Din\u2211 d=1 w (n) d \u00b7 |x \u2032 d \u2212 xd|p < \u03b7(n) } , (2)\nwhere lp is the norm of our collective perturbation model and xd is the d-th element of the vector x.\nThe weight w(n)d quantifies how sensitive yn is to perturbations of input dimension d. It will be smaller where the anisotropic smoothing distribution applies more noise. The radius \u03b7(n) quantifies the overall level of robustness. In \u00a7 5 we present different distributions and corresponding certificates that comply with this interface. Inserting Eq. 2 into Eq. 1.1 results in the collective certificate\nmin x\u2032\u2208Bx \u2211 n\u2208T I [ Din\u2211 d=1 w (n) d \u00b7 |x \u2032 d \u2212 xd|p < \u03b7(n) ] . (3)\n2We denote any distribution that applies different levels of noise to different input dimensions anisotropic.\nEq. 3 showcases why locally smoothed models admit a collective certificate that is stronger than na\u0131\u0308vely certifying each output independently (i.e. Eq. 1.2). Because we use different distributions for different outputs, any two outputs f (n) and f (m) will have distinct certificate weights w(n) and w(m). If they are sensitive in different parts of the input, i.e. w(n) T w(m) is small, then the adversary has to split up their limited adversarial budget and may be unable to attack both at once. One particularly simple example is the case w(n) T w(m) = 0, where attacking predictions yn and ym requires allocating adversarial budget to two entirely disjoint sets of input dimensions.\nIn \u00a7 I we show that, with appropriately parameterized smoothing distributions, we can obtain base certificates withw(n) = c \u00b7\u03c8(n), with indicator vector\u03c8(n) encoding the receptive field of output n. Hence, the collective guarantees from (Schuchardt et al., 2021) are a special case of our certificate." 
|
|
}, |
|
{ |
|
"heading": "4.1 COMPUTING THE COLLECTIVE CERTIFICATE", |
|
"text": "Because our interface describes base certificates via linear constraints, we can formulate Eq. 3, as an equivalent mixed-integer linear program (MILP). This leads us to our main result (proof in \u00a7 D):\nTheorem 4.2. Given locally smoothed model f , input x \u2208 X(Din), smoothed prediction yn = f(x) and base certificates H(1), . . . ,HDout complying with interface Eq. 2, the number of simultaneously robust predictions minx\u2032\u2208Bx \u2211 n\u2208T I [fn(x \u2032) = yn] is lower-bounded by\nmin b\u2208RDin+ ,t\u2208{0,1}Dout \u2211 n\u2208T tn (4)\ns.t. \u2200n : bTw(n) \u2265 (1\u2212 tn)\u03b7(n), sum{b} \u2264 \u03f5p. (5)\nThe vector b models the allocation of adversarial budget (i.e. the distances bd = |x\u2032d \u2212 xd|p from clean input x). The vector t models which predictions are robust. Eq. 5 ensures that b does not exceed the overall budget \u03f5 and that tn can only be set to 0 if bTw(n) \u2265 \u03b7(n), i.e. only when the base certificate cannot guarantee robustness. This problem can be solved using any MILP solver." |
|
}, |
|
{ |
|
"heading": "4.2 IMPROVING EFFICIENCY", |
|
"text": "Solving large MILPs is expensive. In \u00a7 E we show that partitioning the outputs into Nout subsets sharing the same smoothing distribution and the inputs into Nin subsets sharing the same noise level (for example like in Fig. 1, where we partition the image into a 2\u00d7 3 grid), as well as quantizing the base certificate parameters \u03b7(n) intoNbin bins, reduces the number of variables and constraints from Din+Dout andDout+1 toNin+Nout \u00b7Nbins andNout \u00b7Nbins+1, respectively. We can thus control the problem size independent of the data\u2019s dimensionality. We further derive a linear relaxation of the MILP that can be efficiently solved while preserving the soundness of the certificate." |
|
}, |
|
{ |
|
"heading": "4.3 ACCURACY-ROBUSTNESS TRADEOFF", |
|
"text": "When discussing Eq. 3, we only explained why our collective certificate for locally smoothed models is better than a na\u0131\u0308ve combination of localized smoothing base certificates. However, this does not necessarily mean that our certificate is also stronger than na\u0131\u0308vely certifying an isotropically smoothed model. This is why we focus on soft locality. With isotropic smoothing, high certified robustness requires using large noise levels, which degrade the model\u2019s prediction quality. Localized smoothing, when applied to softly local models, can circumvent this issue. For each output, we can use low noise levels for the most important parts of the input to retain high prediction quality. Our LP-based collective certificate allows us to still provide strong collective robustness guarantees. We investigate this improved accuracy-robustness trade-off in our experimental evaluation (see \u00a7 7)." |
|
}, |
|
{ |
|
"heading": "5 BASE CERTIFICATES", |
|
"text": "To apply our collective certificate in practice, we require smoothing distributions \u03a8(n)x and corresponding per-prediction base certificates that comply with the interface from Definition 3.1.\nAs base certificates for l2 and l1 perturbations we can reformulate existing anisotropic Gaussian (Kumar & Goldstein, 2021; Fischer et al., 2021) and uniform (Kumar & Goldstein, 2021) smoothing certificates for single-output models: For \u03a8(n)x = N (x,diag(s(n))) we have w(n)d = 1/(s (n) d ) 2 and \u03b7(n) = (\u03a6\u22121(qn,yn)) 2 with qn,yn = Prz\u223c\u03a8(n)x [gn(z) = y]. For \u03a8 (n) x = U ( x,\u03bb(n) ) we have w (n) d = 1/\u03bb (n) d and \u03b7 (n) = \u03a6\u22121 (qn,yn). We prove the correctness of these reformulations in \u00a7 F.\nFor l0 perturbations of binary data, we can use a distribution F(x,\u03b8) that flips xd with probability \u03b8d \u2208 [0, 1], i.e. Pr[zd \u0338= xd] = \u03b8d for z \u223c F(x,\u03b8). Existing methods (e.g. (Lee et al., 2019)) can be used to derive per-prediction certificates for this distribution, but have exponential runtime in the number of unique values in \u03b8. Thus, they are not suitable for localized smoothing, which uses different \u03b8d for different parts of the input. We therefore propose a novel, more efficient approach: Variance-constrained certification, which smooths the base classifier\u2019s softmax scores instead of its predictions and then uses both their expected value and variance to certify robustness (proof in \u00a7 F.3): Theorem 5.1 (Variance-constrained certification). Given a model g : X \u2192 \u2206|Y| mapping from discrete set X to scores from the (|Y| \u2212 1)-dimensional probability simplex, let f(x) = argmaxy\u2208YEz\u223c\u03a8x [g(z)y] with smoothing distribution \u03a8x and probability mass function \u03c0x(z) = Prz\u0303\u223c\u03a8x [z\u0303 = z]. Given an input x \u2208 X and smoothed prediction y = f(x), let \u00b5 = Ez\u223c\u03a8x [g(z)y] and \u03b6 = Ez\u223c\u03a8x [ (g(z)y \u2212 \u03bd)2 ] with \u03bd \u2208 R. Assuming \u03bd \u2264 \u00b5, then f(x\u2032) = y if\n\u2211 z\u2208X \u03c0x\u2032(z) 2 \u03c0x(z) < 1 +\n1\n\u03b6 \u2212 (\u00b5\u2212 \u03bd)2\n( \u00b5\u2212 1\n2\n) . (6)\nApplying Theorem 5.1 to flipping distribution F(x,\u03b8) yields per-prediction base certificates for l0 attacks that comply with our interface and can be computed in linear time (see Appendix F.3.1).\nVariance-constrained certification is a general-purpose method that can be applied to arbitrary domains and distributions for which the the left-hand side of Eq. 6 can be computed3. In \u00a7 F.3.2, we apply it to the sparsity-aware smoothing distribution (Bojchevski et al., 2020), making it possible to differentiate between adversarial deletions and additions of input bits while preserving data sparsity." |
|
}, |
|
{ |
|
"heading": "6 LIMITATIONS", |
|
"text": "The main limitation of our approach is that it assumes soft locality of the base model. It can be applied to arbitrary models, but may not necessarily result in better certificates than isotropic smoothing (recall our discussion in Section 4.3). Also, choosing the smoothing distributions requires some a-priori knowledge or assumptions about which parts of the input are how relevant to making a prediction. Our experiments show that natural assumptions like homophily can be sufficient for choosing effective smoothing distributions. But doing so in other tasks may be more challenging.\nA limitation of (most) randomized smoothing methods is that they use sampling to approximate the smoothed classifier. Because we use different distributions for different outputs, we can only use a fraction of the samples per output. As discussed in \u00a7 E.1, we can alleviate this problem by sharing smoothing distributions among multiple outputs. Our experiments show that despite this issue, our method can outperform certificates that use a single isotropic distribution. Still, future work should try to improve the sample efficiency of randomized smoothing (e.g. via derandomization (Levine & Feizi, 2020)), which could then be incorporated into our localized smoothing framework." |
|
}, |
|
{ |
|
"heading": "7 EXPERIMENTAL EVALUATION", |
|
"text": "In this section, we compare our method to all existing collective robustness certificates: Center smoothing using isotropic Gaussian noise (Kumar & Goldstein, 2021), SegCertify (Fischer et al., 2021) and the collective certificates of Schuchardt et al. (2021). To allow SegCertify to be compared to the other methods, we report the number of certifiably robust predictions and not just whether all predictions are robust. We write SegCertify* to highlight this. Because we consider models that are not strictly local (i.e. all outputs depend on all inputs) the certificates of Schuchardt et al.\n3It can also be applied to continuous distributions by replacing the sum with an integral.\n(2021) and Fischer et al. (2021) are identical, i.e., do not have to be evaluated separately. A more detailed description of the experimental setup, as well as the used hardware can be found in \u00a7 C. An implementation will be made available to reviewers via an anonymous link posted on OpenReview.\nMetrics. Evaluating randomized smoothing methods based on certificate strength alone is not sufficient. Different distributions lead to different tradeoffs between prediction quality and certifiable robustness (as discussed in \u00a7 4.3). As metrics for prediction quality, we use accuracy and mean intersection over union (mIOU)4. The main metric for certificate strength is the certified accuracy \u03be(\u03f5), i.e., the percentage of predictions that are correct and certifiably robust, given adversarial budget \u03f5. Following (Schuchardt et al., 2021), we use the average certifiable radius (ACR) as an aggregate metric, i.e. \u2211N\u22121 n=1 \u03f5n \u00b7 (\u03be(\u03f5n)\u2212 \u03be(\u03f5n+1) with budgets \u03f51 \u2264 \u03f52 \u00b7 \u00b7 \u00b7 \u2264 \u03f5N and \u03f51 = 0, \u03be(\u03f5N ) = 0.\nEvaluation procedure. We assess the accuracy-robustness tradeoff of each method by computing accuracy / mIOU and average certifiable radius for a wide range of smoothing distribution parameters. We then eliminate all points that are Pareto-dominated, i.e. for which there exist diffent parameter values that yield both higher accuracy / mIOU and ACR. Finally, we assess to what extend localized smoothing dominates the baselines, i.e. whether it can be parameterized to achieve strictly better accuracy-robustness tradeoffs. Importantly, note that the accuracy-robustness plots in Figs. 2, 4 and 5 are not to be read like line graphs! Even if there is a combination of baseline parameters and localized smoothing parameters that offer similar accuracy and robustness (i.e. two points are close in the plot), there may be other localized smoothing parameters that offer significantly higher accuracy and/or robustness (i.e. points to the top right) than any baseline parameters.\nRandomized smoothing parameters. In practice, randomized smoothing uses Monte Carlo sampling to compute probabilistic certificates that hold with probability (1 \u2212 \u03b1) (see \u00a7 G). We set \u03b1 = 0.01 for all experiments. For increased accuracy and robustness, we train both the baselines and our method with the same isotropic noise, thus favouring the baseline (more details in \u00a7 C)." |
|
}, |
|
{ |
|
"heading": "7.1 IMAGE SEGMENTATION ON PASCAL-VOC", |
|
"text": "Dataset and model. We evaluate our certificate for l2 perturbations on 50 images from the PascalVOC (Everingham et al., 2010) 2012 segmentation validation set. Training is performed on 10582 training samples extracted from SBD, also known as \u201dPascal trainaug\u201d (Hariharan et al., 2011). To increase batch sizes and thus allow the thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size, similar to (Fischer et al., 2021). Our base model is a U-Net segmentation model (Ronneberger et al., 2015) with a ResNet-18 backbone. For isotropic randomized smoothing, we use Gaussian noise N (0, \u03c3iso) with different \u03c3iso \u2208 {0.01, 0.02, . . . , 0.5}. We found all \u03c3iso > 0.39 to be Pareto-dominated. To perform localized randomized smoothing, we choose parameters \u03c3min, \u03c3max \u2208 [0.01, 0.5] \u00d7 [0.01, 5.0] and partition all images into regular grids of size 4\u00d7 6 (similar to Fig. 1). To smooth outputs in grid cell\n4I.e., add up confusion matrices over the entire dataset, compute per-class IOUs and average over all classes.\n(i, j), we sample noise for grid cell (k, l) from N (0, \u03c3\u2032 \u00b7 1), with \u03c3\u2032 \u2208 [\u03c3min, \u03c3max] chosen proportional to the distance of (i, j) and (k, l) (more details in \u00a7 C.2). For our method and the baselines, we take 153600 samples per image (i.e. 6400 per grid cell in localized smoothing).\nAccuracy-robustness tradeoff. Fig. 2 shows pairs of mIOU and average certifiable radius achieved by the isotropic smoothing baselines and the LP-based localized smoothing certificate. Localized smoothing dominates both CenterSmooth and SegCertify* almost everywhere. It offers a better accuracy-robustness tradeoff for models with mIOUs in [19.44%, 55.39%] \u2013 the highest achieved by the baselines being 55.96%. Recall that Fig. 2 is not to be read like a line graph! Even if the vertical distance between two methods is small, one may significantly outperform the other. For example, \u03c3iso = 0.2, with an mIOU of 39.11% and an ACR of 0.33 (highlighted with a bold cross) is dominated by (\u03c3min, \u03c3max) = (0.15, 0.25) (highlighted with a large circle), which has a larger ACR of 0.34 and a mIOU that is a whole 5.7 p.p. higher.\nBenefit of linear programming certificates. Fig. 3 demonstrates how the linear program derived in \u00a7 4.1 enables this improved tradeoff. We compare SegCertify* with \u03c3iso = 0.3 to localized smoothing with (\u03c3min, \u03c3max) = (0.25, 0.7). Na\u0131\u0308vely combining the localized smoothing base certificates (dashed line) is not sufficient for outperforming the baseline, as they cannot certify robustness beyond \u03f5 = 0.7. However, leveraging locality by solving the collective LP (solid blue line) extends the maximum certifiable radius to \u03f5 = 1.225 and guarantees higher certified accuracy for all \u03f5.\nComputational cost. The added computational cost for this improved tradeoff is small. Averaged over all images and adverarial budgets \u03f5, solving each LP only took 0.68 s, which is negligible compared to the average 460 s needed for obtaining the Monte Carlo samples for one image (which is necessary for both the baselines and our method).5" |
|
}, |
|
{ |
|
"heading": "7.2 IMAGE SEGMENTATION ON CITYSCAPES", |
|
"text": "Dataset and model. Next, we want to specifically evaluate a model where the idea of soft locality is challenged. We apply our approach to DeepLabv3 (Chen et al., 2017) where the dilated convolutions increase the receptive field size significantly on each layer. We evaluate the model on 50 images from the Cityscapes (Cordts et al., 2016) validation set. To limit the number of LP variables despite the increased image resolution, we quantize the base certificate parameters \u03b7(n) into 2048 bins (see \u00a7 E.2).6 All other parts of the experimental setup are identical to \u00a7 7.1.\nFig. 4 shows that, while our method outperforms CenterSmooth, most locally smoothed models are Pareto-dominated by SegCertify*. Localized smoothing offers significantly higher average certifiable radii for models with mIOU \u2264 0.11, but they are arguably not accurate enough to have any practical utility. This result shows that the soft locality of the base model is important. Note that we deliberately used a method which does not fulfill the locality assumption to obtain this negative results. Further note that we can always use \u03c3min = \u03c3max, to provide the same guarantees as SegCertify* (see Appendix H). We investigate these results, in particular the effect of having to use signifanctly fewer Monte Carlo samples per prediction, in more detail in \u00a7 A." |
|
}, |
|
{ |
|
"heading": "7.3 NODE CLASSIFICATION ON CITESEER", |
|
"text": "Dataset and model. Finally, we turn to a class of models that is often explicitly designed with locality in mind: Graph neural networks. We take APPNP (Klicpera et al., 2019), which aggregates per-node predictions from the entire graph based on personalized pagerank scores, and apply it to the Citeseer (Sen et al., 2008) dataset. To certify its robustness, we perform randomized smoothing with sparsity-aware noise S (x, \u03b8+, \u03b8\u2212), where \u03b8+ and \u03b8\u2212 control the probability of randomly\n5The parameters w and \u03b7(n) can be computed within milliseconds via a small number of vector operations. 6Computing time via the LP is 2.78s compared to 1204s needed for Monte Carlo sampling\nadding or deleting node attributes, respectively (more details in \u00a7 F.3.2). As a baseline we apply the tight certificate SparseSmooth of Bojchevski et al. (2020) to distributions S ( x, 0.01, \u03b8\u2212iso ) with \u03b8\u2212iso \u2208 {0.1, 0.15, . . . , 0.95}. The fixed, small addition probability 0.01 is meant to preserve the sparsity of the graph\u2019s attribute matrix and was used in most experiments in (Bojchevski et al., 2020). For localized smoothing, we partition the graph into 5 clusters and define a minimum deletion probability \u03b8\u2212min \u2208 {0.1, 0.15, . . . , 0.95}. We then sample each cluster\u2019s attributes from S (x, 0.01, \u03b8\u2032\u2212) with \u03b8\u2032\u2212 \u2208 [ \u03b8\u2212min, 0.95 ] chosen based on cluster affinity. To compute the base certificates, we use the variance-constrained certificate from \u00a7 F.3.2. In all cases, we take 5 \u00b7 105 samples (i.e. 105 per cluster for localized smoothing). Further discussions, as well as experiments on different models and datasets can be found in \u00a7 B.\nAccuracy-robustness tradeoff. Fig. 5 shows the accuracy and ACR pairs achieved by the na\u0131\u0308ve isotropic smoothing certificate and the LP-based certificate for localized smoothing. Both for adversarial deletions and additions, our method offers a better accuracy-robustness tradeoff than the baseline, Pareto-dominating it entirely. The difference is particularly large for attribute deletions, where localized smoothing can certify ACRs that are more than three times as large. Surprisingly, it even leads to an increase in accuracy compared to all isotropically smoothed models, in some cases by more than 7 p.p.. The phenomenon that increasing the probability of attribute perturbations increases accuracy to some extend has already been observed by (Bojchevski et al., 2020) (see Appendix K and Figure 11 in their paper), who pointed out its similarity to dropout regularizaton on GNN inputs. We posit that localized smoothing allows us to benefit from this test-time regularization, while simultaneously preserving the majority of attributes in the most important, nearby nodes. One might also suspect that this is due to the fact that we use the variance-constrained certificate for localized smoothing, but not for the baseline. We disprove this in \u00a7 B.1.\nComputational cost. The average time needed for solving the LP for our much stronger collective certificate (10.9 s) was small, compared to the average time for the Monte Carlo sampling required by both our method and the baselines (1034 s)." |
|
}, |
|
{ |
|
"heading": "8 CONCLUSION", |
|
"text": "In this work, we have proposed the first collective robustness certificate for softly local multi-output classifiers. It is based on localized randomized smoothing, i.e. smoothing different outputs using different anisotropic smoothing distributions matching the model\u2019s locality. We have shown how per-output certificates obtained via localized smoothing can be combined into a strong collective robustness certificate via (mixed-integer) linear programming. Experiments on image segmentation and node classification tasks demonstrate that localized smoothing can offer a better accuracyrobustness tradeoff than existing collective certificates reyling on isotropic smoothing. Our results showcase that locality is linked to robustness, which suggests the future research direction of building more effective local models to robustly solve multi-output tasks." |
|
}, |
|
{ |
|
"heading": "9 REPRODUCIBILITY STATEMENT", |
|
"text": "We prove all theoretic results that were not already derived in the main text in Appendices D to G. To ensure reproducibility of the experimental results we provide detailed descriptions of the evaluation process with the respective parameters in \u00a7 C. Code will be made available to reviewers via an anonymous link posted on OpenReview, as suggested by the guidelines." |
|
}, |
|
{ |
|
"heading": "10 ETHICS STATEMENT", |
|
"text": "In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be seen with caution. As they are harder to fool, harmful purposes like mass surveillance are harder to avoid. We believe that it is still necessary to further research robustness of machine learning models as the positive effects can outweigh the negatives, but it is necessary to discuss the ethical implications of the usage in any specific application area." |
|
}, |
|
{ |
|
"heading": "A Further Discussion of Cityscapes Results 16", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "B Additional Experiments on Node Classification 18", |
|
"text": "B.1 Comparison to the Na\u0131\u0308ve Variance-constrained Isotropic Smoothing Certificate . . 18\nB.2 Node Classification using Graph Convolutional Networks . . . . . . . . . . . . . . 18\nB.3 Node classification on Cora-ML . . . . . . . . . . . . . . . . . . . . . . . . . . . 19\nB.4 Lower Certifiable Robustness to Additions . . . . . . . . . . . . . . . . . . . . . . 20\nB.5 Benefit of Linear Programming Certificates . . . . . . . . . . . . . . . . . . . . . 21" |
|
}, |
|
{ |
|
"heading": "C Detailed Experimental Setup 23", |
|
"text": "C.1 Certificate Strength Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\nC.2 Semantic Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\nC.3 Node Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24" |
|
}, |
|
{ |
|
"heading": "D Proof of Theorem 4.2 26", |
|
"text": "E Improving Efficiency 27\nE.1 Sharing Smoothing Distributions Among Outputs . . . . . . . . . . . . . . . . . . 27\nE.2 Quantizing Certificate Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 27\nE.3 Sharing Noise Levels Among Inputs . . . . . . . . . . . . . . . . . . . . . . . . . 28\nE.4 Linear Relaxation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29" |
|
}, |
|
{ |
|
"heading": "F Base Certificates 30", |
|
"text": "F.1 Gaussian Smoothing for l2 Perturbations of Continuous Data . . . . . . . . . . . . 30\nF.2 Uniform Smoothing for l1 Perturbations of Continuous Data . . . . . . . . . . . . 31\nF.3 Variance-constrained Certification . . . . . . . . . . . . . . . . . . . . . . . . . . 31" |
|
}, |
|
{ |
|
"heading": "G Monte Carlo Randomized Smoothing 39", |
|
"text": "G.1 Monte Carlo Base Certificates for Continuous Data . . . . . . . . . . . . . . . . . 39\nG.2 Monte Carlo Variance-constrained Certification . . . . . . . . . . . . . . . . . . . 40\nG.3 Monte Carlo Center Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41\nG.4 Multiple Comparisons Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42" |
|
}, |
|
{ |
|
"heading": "H Comparison to the Collective Certificate of Fischer et al. (2021) 44", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "I Comparison to the Collective Certificate of Schuchardt et al. (2021) 45", |
|
"text": "I.1 The Collective Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45\nI.2 Proof of Subsumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46" |
|
}, |
|
{ |
|
"heading": "A FURTHER DISCUSSION OF CITYSCAPES RESULTS", |
|
"text": "As discussed in \u00a7 7.2, there are multiple factors that may contribute to localized randomized smoothing being unable to outperform SegCertify* (Fischer et al., 2021) on the Cityscapes segmentation dataset.\nMonte Carlo sampling with increased image resolution. Computing the base certificate for prediction yn = fn(x) requires computing a probabilistic lower bound on qn,yn = Pr\nz\u223c\u03a8(n)x [gn(z) = y] that holds with a probability \u03b1n. Because we use Bonferroni correction to ensure that all bounds simultaneously hold with probability \u03b1 = 0.01, we set \u03b1n = \u03b1/Dout. We bound the qn,yn using Clopper-Pearson lower confidence bounds CP (Nyn , N, \u03b1n), where N is the number of samples and Nyn is the number of samples classified as yn. For localized smoothing, we use N/24 samples per grid-cell with N = 153600. This drastically reduces the maximum adversarial budget \u03f5 for which the base certificates can certify robustness. On Pascal-VOC (image resolution 166\u00d7 250), we have\n\u03c3min \u00b7 \u03a6\u22121 (CP (N,N,\u03b1/(166 \u2217 250))) \u03c3min \u00b7 \u03a6\u22121 (CP (N,N/24, \u03b1/(166 \u2217 250))) \u2243 1.318,\ni.e. the reduced number of samples reduces the maximum certifiable radius by more than 30%. On Cityscapes (image resolution 512\u00d7 1024), we have\n\u03c3min \u00b7 \u03a6\u22121 (CP (N,N,\u03b1/(512 \u2217 1024))) \u03c3min \u00b7 \u03a6\u22121 (CP (N,N/24, \u03b1/(512 \u2217 1024))) \u2243 1.325,\ni.e. the higher resolution only amplifies the effect of using fewer samples by c.a. 0.7 p.p.. This alone is hardly enough to explain the vast difference between our results on Pascal-VOC and Cityscapes.\nLinear relaxation with increased image resolution. To more efficiently solve the linear program underlying our collective certificate (see Theorem 4.2), we linearly relax the binary indicator vector t in objective function \u2211 n tn, i.e. we allow t \u2208 [0, 1]Dout . Each element tn contributes to the relaxation gap between the original mixed-integer linear program and the relaxed linear program. Increasing the number of pixels Dout may widen this gap. However, on Cityscapes we actually reduce the number of variables tn by quantizing the base certificate parameters \u03b7(n) in each image grid cell into 2048 quantization bins (see explanation in \u00a7 E.2). This leaves us with 24\u00b72048 = 49152 linearly relaxed variables, compared to 166 \u2217 250 = 41500 on Pascal-VOC, which is much smaller than the number of pixels 512 \u2217 1024 = 524288. The quantization error for each \u03b7(n) is also negligible:\n\u03b7max 2048\n\u2264 \u03a6 \u22121 (CP (N,N/24, \u03b1/(512 \u2217 1024)))\n2048 \u2264 0.003\nJust like with Monte Carlo sampling, the increase in resolution alone does not appear sufficient for explaining the stark difference between our results on the two datasets.\nDifferent or reduced model locality. In \u00a7 7.2, we concluded that the DeepLabv3 model was either not sufficiently local for localized smoothing to beneficial, or that our choice of smoothing distribution was inappropriate. Here, we further investigate this claim by determining whether localized smoothing was not beneficial at all, or whether it was just not beneficial enough to overcome the reduction in certificate strength caused by the previously discussed secondary factors. To eliminate the Monte Carlo sampling problem, we repeat our experiment using 153600 samples per cell, i.e. 
we can compute the base certificates using as many samples as the isotropic smoothing baselines. Ideally, we would also want to eliminate the integrality gap by exactly solving the mixed-integer linear program, but given the large number of binary variables tn this is not feasible. Fig. 6 shows that increasing the number of samples closes the gap between localized smoothing and SegCertify. Still, localized smoothing does not meaningfully improve upon isotropic smoothing, safe for models with mIOU \u2264 0.21, for which it can certify significantly higher levels of robustness. This further supports our claim that the negative results on Cityscapes should be attributed to the DeepLabv3 model not matching our locality assumption." |
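A small sketch reproducing the sample-size effect discussed above, using the standard one-sided Clopper-Pearson lower confidence bound (helper name is ours).

```python
from scipy.stats import beta, norm

def clopper_pearson_lower(n_correct, n_samples, alpha):
    """One-sided Clopper-Pearson lower confidence bound on the success
    probability, as used for bounding q_{n,y_n} from Monte Carlo samples."""
    if n_correct == 0:
        return 0.0
    return beta.ppf(alpha, n_correct, n_samples - n_correct + 1)

# Ratio of maximum certifiable radii with N vs. N/24 samples on Pascal-VOC
# (sigma_min cancels); this reproduces the ~1.318 factor quoted above.
N, alpha_n = 153600, 0.01 / (166 * 250)
ratio = norm.ppf(clopper_pearson_lower(N, N, alpha_n)) / \
        norm.ppf(clopper_pearson_lower(N // 24, N // 24, alpha_n))
print(ratio)
```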
|
}, |
|
{ |
|
"heading": "B ADDITIONAL EXPERIMENTS ON NODE CLASSIFICATION", |
|
"text": "In the following, we perform additional experiments on graph neural networks for node classification, including a different model and an additional dataset. Unless otherwise stated, all details of the experimental setup are identical to Fig. 4. In particular, we use sparsity-aware smoothing distribution S (x, 0.01, \u03b8\u2212), where probability of deleting bits \u03b8\u2212 is either constant across the entire graph (for the isotropic randomized smoothing baseline) or adjusted per output and cluster based on cluster affinity (for localized randomized smoothing)." |
|
}, |
|
{ |
|
"heading": "B.1 COMPARISON TO THE NAI\u0308VE VARIANCE-CONSTRAINED ISOTROPIC SMOOTHING CERTIFICATE", |
|
"text": "In Fig. 5 of \u00a7 7.3, we observed that locally smoothed models surprisingly did not only achieve up to three times higher average certifiable radii, but simultaneously had higher accuracy than any of the isotropically smoothed models. One potential explanation is that we used variance-constrained certification (see Theorem 5.1) (i.e. smoothing the models\u2019 softmax scores instead of their predicted labels) for localized smoothing, but not for the isotropic smoothing baseline. This might result in two substantially different models. To investigate this, we repeat the experiment from Fig. 5a, using variance-constrained certification for both localized smoothing and the isotropic smoothing baseline. Fig. 7 shows that, no matter which smoothing paradigm we use for our isotropic smoothing baseline, there is a c.a. 7 p.p. difference in accuracy between the most accurate isotropically smoothed model and the most accurate locally smoothed model.\nInterestingly, even variance-constrained smoothing with isotropic noise (green crosses in Fig. 7b) is sufficient for outperforming the isotropic smoothing certificate of Bojchevski et al. (2020) (orange stars in Fig. 7a). This showcases that variance-constrained certification does not only present a very efficient, but also a very effective way of certifying robustness on discrete data (even when entirely ignoring the collective robustness aspect)." |
|
}, |
|
{ |
|
"heading": "B.2 NODE CLASSIFICATION USING GRAPH CONVOLUTIONAL NETWORKS", |
|
"text": "So far, we have only used APPNP models as our base classifier. Now, we repeat our experiments using 6-layer Graph Convolutional Networks (GCN) (Kipf & Welling, 2017). In each layer, GCNs first apply a linear layer to each node\u2019s latent vector and then average over each node\u2019s 1-hop neighborhood. Thus, a 6-layer GCN classifies each node using attributes from all nodes in its 6-hop neighborhood, which covers most or all of the Citeseer graph. Aside from using GCN instead of APPNP as the base model, we leave the experimental setup from \u00a7 7.3 unchanged. Note that GCNs\nare typically used with fewer layers. However, these shallow models are strictly local and it has already been established that the certificate Schuchardt et al. (2021) \u2013 which is subsumed by our certificate (see \u00a7 I.2) \u2013 can provide very strong robustness guarantees for them. We therefore increase the number of layers to obtain a model that is not strictly local.\nFig. 8 shows the results for both robustness to deletions and robustness to additions. Similar to APPNP, some locally smoothed models have an up to 4 p.p. higher accuracy than the most accurate isotropically smoothed model. When considering robustness to deletions, the locally smoothed models Pareto-dominate all of the isotropically smoothed models, i.e. offer better accuracy-robustness tradeoffs. Some can guarantee average certifiable radii that are at least 50% larger than those of the baseline. When considering robustness to additions however, some of the isotropically smoothed models have a higher certifiably robustness.\nWe see two potential causes for our method\u2019s lower certifiable robustness to additions: The first potential cause is that the GCN may be less local than APPNP, similar to how DeepLabv3 appears less local than U-Net (see \u00a7 7.2), or that it has a different form of locality that does not match our clustering-based localized smoothing distributions. This appears plausible, as GCN averages uniformly over each neighborhood, whereas APPNP aggregates predictions based on pagerank scores. APPNP may thus primarily attend to specific, densely connected nodes, making it more local than GCN. The second potential cause is that the variance-constrained certificate we use as our base certificate may be less effective when certifying robustness to adversarial additions by using a very small addition probablity like \u03b8+ = 0.01. Afterall, we have also seen in our experiments with APPNP in \u00a7 7.3 that the gap in average certifiable radii between localized and isotropic smoothing was significantly smaller when considering additions. We investigate this second potential cause in more detail in \u00a7 B.4." |
|
}, |
|
{ |
|
"heading": "B.3 NODE CLASSIFICATION ON CORA-ML", |
|
"text": "Next, we repeat our experiments with APPNP on the Cora-ML (McCallum et al., 2000; Bojchevski & Gu\u0308nnemann, 2018) node classification dataset, keeping all other parameters fixed. The results are shown in Fig. 9. Unlike on Citeseer, the locally smoothed models have a slightly reduced accuracy compared to the isotropically smoothed models. This can either be attributed to one smoothing approach having a more desirable regularizing effect on the neural network, or the fact that we smooth softmax scores instead of predicted labels when constructing the locally smoothed models. Nevertheless, when considering adversarial deletions, localized smoothing makes it possible to achieve average certifiable radii that are at least 50% larger than any of the isotropically smoothed models\u2019 \u2013 at the cost of slightly reduced accuracy 8.6%. Or, for another point of the pareto front, we in-\ncrease the certificate by 20% while reducing the accuracy by 2.8 percentage points. As before, the certificates for attribute additions are significantly weaker." |
|
}, |
|
{ |
|
"heading": "B.4 LOWER CERTIFIABLE ROBUSTNESS TO ADDITIONS", |
|
"text": "While our certificates for adversarial deletions have compared favorably to the isotropic smoothing baseline in all previous experiments, our certificates for adversarial additions were comparatively weaker on Cora-ML and when using GCNs as base models. In the following, we investigate to what extend this can be attributed to our use of variance-constrained certification for our base certificates.\nFig. 10a shows both our linear programming collective certificate and the na\u0131\u0308ve isotropic smoothing certificate based on (Bojchevski et al., 2020) for GCNs on Citeseer under adversarial additions. In Fig. 10b, we plot not only the LP-based certificates, but also our variance-constrained base certificates (drawn as green crosses). Comparing both figures shows that our base certificate\u2019s average certifiable radii are at least 50% smaller than the largest ACR achieved by (Bojchevski et al., 2020)\nin Fig. 10a. While our linear program significantly improves upon them, it is not sufficient to overcome this significant gap. This result is in stark contrast to our results for attribute deletions \u00a7 B.1, where the variance-constrained base certificates alone were enough to significantly outperform the certificate of (Bojchevski et al., 2020).\nNow that we have established that the variance-constrained base certificates appear significantly weaker for additions, we can analyze why. For this, recall that our base certificates are parameterized by a weight vector w (see Definition 4.1), with smaller values corresponding to higher robustness \u2013 or two weight vectors w+, w\u2212 quantifying robustness to adversarial additions and deletions, respectively (see \u00a7 F.3.2). Using our results from \u00a7 F.3.2, we can draw the weights w+ resulting from smoothing distribution S (x, 0.01, \u03b8\u2212) as a function of \u03b8\u2212. Fig. 11a shows that \u03b8\u2212 has to be brought very close to 1 in order to guarantee high robustness to deletions, effectively deleting almost all attributes in the graph. Alternatively, one can also increase the addition probability \u03b8+ to perhaps 10% or 20%. But this would utterly destroy the sparsity of the graph\u2019s attribute matrix. We can conclude that, while variance-constrained certification can in principle provide strong certificates for attribute deletions, it might be a worse choice than the method of Bojchevski et al. (2020) for very sparse datasets that force the use of very low addition probabilities \u03b8+." |
|
}, |
|
{ |
|
"heading": "B.5 BENEFIT OF LINEAR PROGRAMMING CERTIFICATES", |
|
"text": "As we did for our experiments on image segmentation (see Fig. 3), we can inspect the certified accuracy curves of specific smoothed models in more detail to gain a better understanding of how the collective linear programming certificate enables larger average certifiable radii. We use the same experimental setup as in \u00a7 7.3, i.e. APPNP on Citeseer, and certify robustness to deletions. We compare the certifiably most robust isotropically smoothed model (\u03b8\u2212iso = 0.8, ACR = 5.67 to the locally smoothed model with \u03b8\u2212min = 0.75, \u03b8 + max = 0.95. For the locally smoothed models, we compute both LP-based collective certificate, as well as the na\u0131\u0308ve collective certificate.\nFig. 12 shows that even na\u0131\u0308vely combining the localized smoothing base certificates obtained via variance-constrained certification (dashed blue line) is sufficient for outperforming the na\u0131\u0308ve isotropic smoothing certificate. This speaks to its effectiveness as a certificate against adversarial deletions. Combining the base certificates via linear programming (solid blue line) significantly enlarges this gap, leading to even larger maximum and average certifiable radii." |
|
}, |
|
{ |
|
"heading": "C DETAILED EXPERIMENTAL SETUP", |
|
"text": "In the following, we first explain the metrics we use for measuring the strength of certificates, and how they can be applied to the different types of randomized smoothing certificates used in our experiments. We then discuss the specific parameters and hyperparameters for our semantic segmentation and node classification experiments." |
|
}, |
|
{ |
|
"heading": "C.1 CERTIFICATE STRENGTH METRICS", |
|
"text": "We use two metrics for measuring certificate strength: For specific adversarial budgets \u03f5, we compute the certified accuracy \u03be(\u03f5) (i.e. the percentage of correct and certifiably robust predictions). As an aggregate metric, we compute the average certifiable radius, i.e. the lower Riemann integral of \u03be(\u03f5) evaluated at \u03f51, . . . , \u03f5N with \u03f51 = 0 and \u03be(\u03f5N ) = 0. For our experiments on image segmentation, we use 161 equidistant points in [0, 4]. For our experiments on node classification, where we certify robustness to a discrete number of perturbations, we use \u03f5n = n, i.e. natural numbers. In all experiments, we perform Monte Carlo randomized smoothing (see \u00a7 G). Therefore, we may have to abstain from making predictions. Abstentions are counted as non-robust and incorrect. In the case of center smoothing, either all or no predictions abstain (this is inherent to the method. In our experiments, center smoothing never abstained)." |
|
}, |
|
{ |
|
"heading": "C.1.1 COMPUTING CERTIFIED ACCURACY", |
|
"text": "The three different types of collective certificate considered in our experiments each require a different procedure for computing the certified accuracy. In the following, let Z = {d \u2208 {1, . . . , Dout} | fn(x) = y\u0302n} be the indices of correct predictions, given an input x. Na\u0131\u0308ve collective certificate. The na\u0131\u0308ve collective certificate certifies each prediction independently. Let H(n) be the set of perturbed inputs yn is certifiably robust to (see Definition 3.1). Let Bx be the collective perturbation model. Then L = { d \u2208 {1, . . . , Dout} | Bx \u2286 H(n) } is the set of all certifiably robust predictions. The certified accuracy can be computed as |L\u2229Z|Dout .\nCenter smoothing Center smoothing used for collective robustness certification does not determine which predictions are robust, but only the number of robust predictions. We therefore have to make the worst-case assumption that the correct predictions are the first to be changed by the adversary. Let l be the number of certifiably robust predictions. The certified accuracy can then be computed as max(0,|Z|\u2212(Dout\u2212l))Dout .\nCollective certificate. Let l(T) be the optimal value of our collective certificate for the set of targeted nodes T. Then the certified accuracy can be computed via l(T)Dout with T = Z." |
|
}, |
|
{ |
|
"heading": "C.2 SEMANTIC SEGMENTATION", |
|
"text": "Here, we provide all parameters of our experiments on image segmentation.\nModels. As base models for the semantic segmentation tasks, we use U-Net (Ronneberger et al., 2015) and DeepLabv3 (Chen et al., 2017) segmentation heads with a ResNet-18 (He et al., 2016) backbone, as implemented by the Pytorch Segmentation Models library (version 0.13) (Yakubovskiy, 2020). We use the library\u2019s default parameters. In particular, the inputs to the UNet segmentation head are the features of the ResNet model after the first convolutional layer and after each ResNet block (i.e. after every fourth of the subsequent layers). The U-Net segmentation head uses (starting with the original resolution) 16, 32, 64, 128 and 256 convolutional filters for processing the features at the different scales. For the DeepLabv3 segmentation head, we use all default parameters from Chen et al. (2017) and an output stride of 16. To avoid dimension mismatches in the segmentation head, all input images are zero-padded to a height and width that is the next multiple of 32.\nData and preprocessing. We evaluate our certificates on the Pascal-VOC 2012 and Cityscapes segmentation validation set. We do not use the test set, because evaluating metrics like the certified accuracy requires access to the ground-truth labels. For training the U-Net models on Pascal, we\nuse the 10582 Pascal segmentation masks extracted from the SBD dataset (Hariharan et al., 2011) (referred to as \u201dPascal trainaug\u201d or \u201dPascal augmented training set\u201d in other papers). SBD uses a different data split than the official Pascal-VOC 2012 segmentation dataset. We avoid data leakage by removing all training images that appear in the validation set. For training the DeepLabv3 model on Cityscapes, we use the default training set. We downscale both the training and the validation images and ground-truth masks to 50% of their original height and width, so that we can use larger batch sizes and thus use our compute time to more thoroughly evaluate a larger range of different smoothing distributions. The segmentation masks are downscaled using nearest-neighbor interpolation, the images are downscaled using the INTER AREA operation implemented in OpenCV (Bradski, 2000).\nTraining and data augmentation. We initialize our model weights using the weights provided by the Pytorch Segmentation Models library, which were obtained by pre-training on ImageNet. We train our models for 256 epochs, using Dice loss and Adam(lr = 0.001, \u03b21 = 0.9, \u03b22 = 0.999, \u03f5 = 10\u22128,weight decay = 0). We use a batch size of 64 for Pascal-VOC and a batch size of 32 for Cityscapes. Every 8 epochs, we compute the mean IOU on the validation set. After training, we use the model that achieved the highest validation mean IOU. We apply the following train-time augmentations: Each image is randomly shifted by up to 10% of its height and width, scaled by a factor from [0.5, 2.0] and rotated between \u221210 and 10 degrees using the ShiftScaleRotate augmentation implemented by the Albumentations library (version 0.5.2) (Buslaev et al., 2020). The images are than cropped to a fixed size of 128 \u00d7 160 (for Pascal-VOC) or 384 \u00d7 384 (for Cityscapes). Where necessary, the images are padded with zeros. Padded parts of the segmentation mask are ignored by the loss function. After these operations, each input is randomly perturbed using a Gaussian distribution with a fixed standard deviation \u03c3 \u2208 {0, 0.01, . . . , 0.5}, i.e. 
we train 51 different models, each on a different isotropic smoothing distribution. All samples are clipped to [0, 1] to retain valid RGB-values.\nCertification. For Pascal-VOC, we evaluate all certificates on the first 50 images from the validation set that – after downscaling – have a resolution of 166 × 250. For Cityscapes, we use every tenth image from the validation set. For all certificates, we use Monte Carlo randomized smoothing (see discussion in § G). We use 12288 samples for making smoothed predictions and 153600 samples for certification. We set the significance parameter α to 0.01, i.e. all certificates hold with probability 0.99. For the center smoothing baseline, we use the default parameters suggested by the authors (∆ = 0.05, β = 2, α1 = α2). For the naïve isotropic randomized smoothing baseline, we use Holm correction to account for the multiple comparisons problem, which yields strictly better results than Bonferroni correction (see § G.4). For our localized smoothing certificates, we use Bonferroni correction. For our localized smoothing distribution, we partition the input image into a regular grid of size 4 × 6 and define a minimum standard deviation σmin and a maximum standard deviation σmax. Let J(k,l) be the set of all pixel coordinates in grid cell (k, l). To smooth outputs in grid cell (i, j), we use a smoothing distribution N(0, diag(σ)) with ∀k ∈ {1, . . . , 4}, l ∈ {1, . . . , 6}, d ∈ J(k,l):\nσd = σmin + (σmax − σmin) · max(|i − k|, |l − j|) / 6, (7)\ni.e. we linearly interpolate between σmin and σmax based on the l∞ distance of grid cells (i, j) and (k, l). All results are reported for the relaxed linear programming formulation of our collective certificate (see § E.4). For each grid cell, we use 1/24 = 1/(4 · 6) of the samples, which corresponds to 512 samples for prediction and 6400 samples for certification. The collective linear program is solved using MOSEK (version 9.2.46) (MOSEK ApS, 2019) through the CVXPY interface (version 1.1.13) (Diamond & Boyd, 2016)." |
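As a concrete illustration of Eq. 7, the following sketch builds the per-pixel standard deviations used to smooth the outputs of one grid cell. Cell sizes and the sigma_min/sigma_max defaults are placeholders of ours, not the values used in the experiments.

```python
import numpy as np

def localized_sigma(i, j, grid_rows=4, grid_cols=6, cell_h=32, cell_w=32,
                    sigma_min=0.25, sigma_max=1.5):
    # Eq. 7: for every pixel d in grid cell (k, l), use
    # sigma_d = sigma_min + (sigma_max - sigma_min) * max(|i - k|, |l - j|) / 6,
    # where (i, j) is the grid cell whose outputs are being smoothed.
    sigma = np.empty((grid_rows * cell_h, grid_cols * cell_w))
    for k in range(grid_rows):
        for l in range(grid_cols):
            dist = max(abs(i - k), abs(l - j))      # l_inf distance between grid cells
            value = sigma_min + (sigma_max - sigma_min) * dist / 6
            sigma[k * cell_h:(k + 1) * cell_h, l * cell_w:(l + 1) * cell_w] = value
    return sigma

# Gaussian localized smoothing noise for the outputs of the top-right cell (0, 5):
sigma = localized_sigma(i=0, j=5)
noise = np.random.randn(*sigma.shape) * sigma
```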
|
}, |
|
{ |
|
"heading": "C.2.1 HARDWARE", |
|
"text": "All experiments on Pascal-VOC were performed using a Xeon E5-2630 v4 CPU @ 2.20GHz, an NVIDA GTX 1080TI GPU and 128GB of RAM. All experiments on Cityscapes were performed using an AMD EPYC 7543 CPU @ 2.80GHz, an NVIDA A100 GPU and 128GB of RAM." |
|
}, |
|
{ |
|
"heading": "C.3 NODE CLASSIFICATION", |
|
"text": "Here, we provide all parameters of our experiments on node classification.\nModel We test two different models: 2-layer APPNP (Klicpera et al., 2019) and 6-layer GCN (Kipf & Welling, 2017). For both models we use a hidden size of 64 and dropout with a probability of 0.5.\nFor the propagation step of APPNP we use 10 for the number of iterations and 0.15 as the teleport probability.\nData and preprocessing. We evaluate our approach on the Cora-ML and Citeseer node classification datasets. We perform standard preprocessing, i.e., remove self-loops, make the graph undirected and select the largest connected component. We use the same data split as in (Schuchardt et al., 2021), i.e. 20 nodes per class for the train and validation set.\nTraining and data augmentation All models are trained with a learning rate of 0.001 and weight decay of 0.001. The models we use for sparse smoothing are trained with the noise distribution that is also reported for certification. The localized smoothing models are trained on the their minimal noise level, i.e., not with localized noise but with only \u03b8+min and \u03b8 \u2212 min.\nCertification We evaluate our certificates on the validation nodes. For all certificates, we use Monte Carlo randomized smoothing (see discussion in \u00a7 G). We use 1000 samples for making smoothed predictions and 5 \u00b7 105 samples for certification. We use the significance parameter \u03b1 to 0.01, i.e. all certificates hold with probability 0.99. For the na\u0131\u0308ve isotropic randomized smoothing baseline, we use Holm correction to account for the multiple comparisons problem, which yields strictly better results than Bonferroni correction (see \u00a7 G.4). For our localized smoothing certificates, we use Bonferroni correction. To parameterize the localized smoothing distribution, we first perform Metis clustering (Karypis & Kumar, 1998) to partition the graph into 5 clusters. We create an affinity ranking by counting the number of edges which are connecting cluster i and j. Specifically, let C be the set of clusters given by the Metis clustering. Then we count the number of edges between all cluster pairs and denote it by Ni,j , i, j \u2208 C. If the number of edges of the pair (i, j) is higher than the number for all other pairs (k, j) \u2200j \u2208 C, i.e. Ni,j > Nk,j \u2200k \u2208 C, we can say that, due to the homophily assumption, cluster i is the most important one for cluster j. We create this ranking for all pairs and use it to select the noise parameter \u03b8\u2032\u2212 for smoothing the attributes of cluster j while classifying a node of cluster i out of the discrete steps of the linear interpolation between \u03b8min and \u03b8max based on its previously defined ranking between the clusters. An example would be, given 11 clusters, \u03b8min = 0.0, and \u03b8max = 1.0. If cluster j second most important cluster to i, then we would take the second value out of {0.0, 0.1, . . . , 1.0}. All results are reported for the relaxed linear programming formulation of our collective certificate (see \u00a7 E.4). For each cluster, we use 15 of the samples, which corresponds to 200 samples for prediction and 105 samples for certification. The collective linear program is solved using MOSEK (version 9.2.46) (MOSEK ApS, 2019) through the CVXPY interface (version 1.1.13) (Diamond & Boyd, 2016)." |
|
}, |
|
{ |
|
"heading": "C.3.1 HARDWARE AND RUNTIME", |
|
"text": "All experiments on image segmentation were performed using an AMD EPYC 7543 CPU @ 2.80GHz, an NVIDA A100 GPU and 32GB of RAM. In \u00a7 7.3 the reported time for solving a single instance of the collective linear program is much higher than for image segmentation, even though the graph datasets require fewer variables to model and we use more potent hardware. That is because we used a different, not as well vectorized formulation of the linear program in CVXPY." |
|
}, |
|
{ |
|
"heading": "D PROOF OF THEOREM 4.2", |
|
"text": "In the following, we prove Theorem 4.2, i.e. we derive the mixed-integer linear program that underlies our collective certificate and prove that it provides a valid bound on the number of simultaneously robust predictions. The derivation bears some semblance to that of (Schuchardt et al., 2021), in that both use standard techniques to model indicator functions using binary variables and that both convert optimization in input space to optimization in adversarial budget space. Nevertheless, both methods differ in how they encode and evaluate base certificates, ultimately leading to significantly different results (our method encodes each base certificate using only a single linear constraint and does not perform any masking operations). Theorem 4.2. Given locally smoothed model f , input x \u2208 X(Din), smoothed prediction yn = f(x) and base certificates H(1), . . . ,HDout complying with interface Eq. 2, the number of simultaneously robust predictions minx\u2032\u2208Bx\n\u2211 n\u2208T I [fn(x \u2032) = yn] is lower-bounded by\nmin b\u2208RDin+ ,t\u2208{0,1}Dout \u2211 n\u2208T tn (8)\ns.t. \u2200n : bTw(n) \u2265 (1\u2212 tn)\u03b7(n), sum{b} \u2264 \u03f5p. (9)\nProof. We begin by inserting the definition of our perturbation model Bx and the base certificates H(n) into Eq. 1.1:\nmin x\u2032\u2208Bx \u2211 n\u2208T I [fn(x \u2032) = yn] \u2265 min x\u2032\u2208Bx \u2211 n\u2208T I [ x\u2032 \u2208 H(n) ] (10)\n= min x\u2032\u2208XDin \u2211 n\u2208T I [ Din\u2211 d=1 w (n) d \u00b7 |x \u2032 d \u2212 xd|p < \u03b7(n) ] s.t. Din\u2211 d=1 |x\u2032d \u2212 xd|p \u2264 \u03f5p.\n(11)\nEvidently, input x\u2032 only affects the element-wise distances |x\u2032d \u2212 xd|p. Rather than optimizing x\u2032, we can directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b \u2208 RDin+ (or b \u2208 {0, 1}Din for binary data). Replacing sums with inner products, we can restate Eq. 11 as\nmin b\u2208RDin+ \u2211 n\u2208T I [ bTw(n) < \u03b7(n) ] s.t. sum{b} \u2264 \u03f5p. (12)\nIn a final step, we replace the indicator functions in Eq. 12 with a vector of boolean variables t \u2208 {0, 1}Dout .\nmin b\u2208RDin+ ,t\u2208{0,1}Dout \u2211 n\u2208T tn (13)\ns.t. \u2200n : bTw(n) \u2265 (1\u2212 tn)\u03b7(n), sum{b} \u2264 \u03f5p. (14) The first constraint in Eq. 5 ensures that tn = 0 \u21d0\u21d2 I [ bTw(n) \u2265 \u03b7(n) ] . Therefore, the optimization problem in Eq. 13 and Eq. 5 is equivalent to Eq. 12, which by transitivity is a lower bound on minx\u2032\u2208Bx\n\u2211 n\u2208T I [fn(x \u2032) = yn].\nE IMPROVING EFFICIENCY\nIn this section, we discuss different modifications to our collective certificate that improve its sample efficiency and allow us fine-grained control over the size of the collective linear program. We further discuss a linear relaxation of our collective linear program. All of the modifications preserve the soundness of our collective certificate, i.e. we still obtain a provable bound on the number of predictions that can be simultaneously attacked by an adversary. To avoid constant case distinctions, we first present all results for real-valued data, i.e. X = R, before mentioning any additional precautions that may be needed when working with binary data." |
|
}, |
|
{ |
|
"heading": "E.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS", |
|
"text": "In principle, our proposed certificate allows a different smoothing distribution \u03a8(n)x to be used per output gn of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: Samples cannot be re-used, each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our Dout outputs into Nout subsets that share the same smoothing distributions. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subsets.\nMore formally, we partition our Dout output dimensions into sets K(1), . . . ,K(Nout) with\u22c3\u0307Nout i=1 K(i) = {1, . . . , Dout}. (15)\nWe then associate each set K(i) with a smoothing distribution \u03a8(i)x . For each base model output gn with n \u2208 K(i), we then use smoothing distribution \u03a8(i)x to construct the smoothed output fn, e.g. fn(x) = argmaxy\u2208Y Prz\u223c\u03a8(i)x\n[f(x+ z) = y] (note that for our variance-constrained certificate we smooth the softmax scores instead, see \u00a7 5)." |
|
}, |
|
{ |
|
"heading": "E.2 QUANTIZING CERTIFICATE PARAMETERS", |
|
"text": "Recall that our base certificates from \u00a7 5 are defined by a linear inequality: A prediction yn = fn(x) is robust to a perturbed input x\u2032 \u2208 XDin if \u2211D d=1 w (n) d \u00b7 |x\u2032d \u2212 xd| p < \u03b7(n), for some p \u2265 0. The weight vectorsw(n) \u2208 RDin only depend on the smoothing distributions. A side of effect of sharing the same smoothing \u03a8(i)x among all outputs from a set K(i), as discussed in the previous section, is that the outputs also share the same weight vector w(i) \u2208 RDin with \u2200n \u2208 K(i) : w(i) = w(n). Thus, for all smoothed outputs fn with n \u2208 K(i), the smoothed prediction yn is robust if\u2211D\nd=1 w (i) d \u00b7 |x\u2032d \u2212 xd| p < \u03b7(n).\nEvidently, the base certificates for outputs from a set K(i) only differ in their parameter \u03b7(n). Recall that in our collective linear program we use a vector of variables t \u2208 {0, 1}Dout to indicate which predictions are robust according to their base certificates (see Theorem 4.2). If there are two outputs fn and fm with \u03b7(n) = \u03b7(m), then fn and fm have the same base certificate and their robustness can be modelled by the same indicator variable. Conversely, for each set of outputs K(i), we only need one indicator variable per unique \u03b7(n). By quantizing the \u03b7(n) within each subset K(i) (for example by defining equally sized bins between minn\u2208K(i) \u03b7(n) and maxn\u2208K(i) \u03b7(n) ), we can ensure that there is always a fixed number Nbins of indicator variables per subset. This way, we can reduce the number of indicator variables from Dout to Nout \u00b7Nbins. To implement this idea, we define a matrix of thresholds E \u2208 RNout\u00d7Nbins with \u2200i : min {Ei,:} \u2264 minn\u2208K(i) ({ \u03b7(n) | n \u2208 K(i) }) . We then define a function \u03be : {1, . . . , Nout} \u00d7 R \u2192 R with\n\u03be(i, \u03b7) = max ({Ei,j | j \u2208 {1, . . . , Nbins \u2227 Ei,j < \u03b7}) (16)\nthat quantizes base certificate parameter \u03b7 from output subset K(i) by mapping it to the next smallest threshold in Ei,:. We can then bound the collective robustness of the targeted dimensions T of our\nprediction vector y = f(x) as follows:\nmin \u2211\ni\u2208{1,...,Nout} \u2211 j\u2208{1,...,Nbins} Ti,j \u2223\u2223\u2223{n \u2208 T \u2229K(i) \u2223\u2223\u2223\u03be (i, \u03b7(n)) = Ei,j }\u2223\u2223\u2223 (17) s.t. \u2200i, j : bTw(i) \u2265 (1\u2212 Ti,j)Ei,j , sum{b} \u2264 \u03f5p (18)\nb \u2208 RDin+ , T \u2208 {0, 1}Nout\u00d7Nbins . (19)\nConstraint Eq. 18 ensures that Ti,j is only set to 0 if bTw(i) \u2265 Ei,j , i.e. all predictions from subset K(i) whose base certificate parameter \u03b7(n) is quantized to Ei,j are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For Nout = Dout, Nbins = 1 and En,1 = \u03b7(n), we recover our general certificate from Theorem 4.2. Note that, if the quantization maps any parameter \u03b7(n) to a smaller number, the base certificate H(n) becomes more restrictive, i.e. yn is considered robust to a smaller set of perturbed inputs. Thus, Eq. 17 is a lower bound on our general certificate from Theorem 4.2." |
|
}, |
|
{ |
|
"heading": "E.3 SHARING NOISE LEVELS AMONG INPUTS", |
|
"text": "Similar to how partitioning the output dimensions allows us to control the number of output variables t, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of variables b that model the allocation of adversarial budget.\nAssume that we have partitioned our output dimensions into Nout subsets K(1), . . . ,K(Nout), with outputs in each subset sharing the same smoothing distribution \u03a8(i)x , as explained in \u00a7 E.1. Let us now define Nin input subsets J(1), . . . , J(Nin) with\u22c3\u0307Nin\nl=1 J(l) = {1, . . . , Dout}. (20)\nRecall that a prediction yn = fn(x) with n \u2208 K(i) is robust to a perturbed input x\u2032 \u2208 XDin if \u2211D\nd=1 w (i) d \u00b7 |x\u2032d \u2212 xd| p < \u03b7(n) and that the weight vectors w(i) only depend on the smooth-\ning distributions. Assume that we choose each smoothing distribution \u03a8(i)x such that \u2200l \u2208 {1, . . . , Nin},\u2200d, d\u2032 \u2208 J(l) : w(i)d = w (i) d\u2032 , i.e. all input dimensions within each set J(l) have the same weight. This can be achieved by choosing \u03a8(i)x so that all dimensions in each input subset Jl are smoothed with the noise level (note that we can still use a different smoothing distribution \u03a8(i)x for each set of outputs K(i)). For example, one could use a Gaussian distribution with covariance matrix \u03a3 = diag (\u03c3)2 with \u2200l \u2208 {1, . . . , Nin},\u2200d, d\u2032 \u2208 J(l) : \u03c3d = \u03c3d\u2032 . In this case, the evaluation of our base certificates can be simplified. Prediction yn = fn(x) with n \u2208 K(n) is robust to a perturbed input x\u2032 \u2208 XDin if\nDin\u2211 d=1 w (i) d \u00b7 |x \u2032 d \u2212 xd| p < \u03b7(n) (21)\n= Nin\u2211 l=1 u(i) \u00b7 \u2211 d\u2208J(l) |x\u2032d \u2212 xd| p < \u03b7(n), (22) with u \u2208 RNin+ and \u2200i \u2208 {1, . . . , Nout},\u2200l \u2208 {1, . . . , Nin},\u2200d \u2208 J(l) : uil = wid. That is, we can replace each weight vector w(i) that has one weight w(i)d per input dimension d with a smaller weight vector u(i) featuring one weight u(i)l per input subset J(l).\nFor our linear program, this means that we no longer need a budget vector b \u2208 RDin+ to model the element-wise distance |x\u2032d \u2212 xd|\np in each dimension d. Instead, we can use a smaller budget vector b \u2208 RNin+ to model the overall distance within each input subset J(l), i.e. b(l) = \u2211 d\u2208J(l) |x\u2032d \u2212 xd|\np. Combined with the quantization of certificate parameters from the previous section, our optimization\nproblem becomes min \u2211\ni\u2208{1,...,Nout} \u2211 j\u2208{1,...,Nbins} Ti,j \u2223\u2223\u2223{n \u2208 T \u2229K(i) \u2223\u2223\u2223\u03be (i, \u03b7(n)) = Ei,j }\u2223\u2223\u2223 (23) s.t. \u2200i, j : bTu(i) \u2265 (1\u2212 Ti,j)Ei,j , sum{b} \u2264 \u03f5p, (24)\nb \u2208 RNin+ , T \u2208 {0, 1}Nout\u00d7Nbins . (25)\nwith u \u2208 RNin and \u2200i \u2208 {1, . . . , Nout},\u2200l \u2208 {1, . . . , Nin},\u2200d \u2208 J : uil = wid. For Nout = Dout, Nin = Din, Nbins = 1 and En,1 = \u03b7(n), we recover our general certificate from Theorem 4.2.\nWhen certifying robustness for binary data, we impose different constraints on b. To model that the adversary can not flip more bits than are present within each subset, we use a budget vector b \u2208 NNin0 with \u2200l \u2208 {1, . . . , Nin} : bl \u2264 \u2223\u2223J(l)\u2223\u2223, instead of a continuous budget vector b \u2208 RNin+ ." |
|
}, |
|
{ |
|
"heading": "E.4 LINEAR RELAXATION", |
|
"text": "Combining the previous steps allows us to reduce the number of problem variables and linear constraints from Din + Dout and Dout + 1 to Nin + Nout \u00b7 Nbins and Nout \u00b7 Nbins + 1, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value and thus a valid, albeit more pessimistic, robustness certificate by relaxing all discrete variables to be continuous.\nWhen using the general certificate from Theorem 4.2, the binary vector t \u2208 {0, 1}Dout can be relaxed to t \u2208 [0, 1]Dout . When using the certificate with quantized base certificate parameters from \u00a7 E.2 or \u00a7 E.3, the binary matrix T \u2208 [0, 1]Nout\u00d7Nbins can be relaxed to T \u2208 [0, 1]Nout\u00d7Nbins . Conceptually, this means that predictions can be partially certified, i.e. tn \u2208 (0, 1) or Ti,j \u2208 (0, 1). In particular, a prediction can be partially certified even if we know that is impossible to attack under the collective perturbation model Bx = { x\u2032 \u2208 XDin | ||x\u2032 \u2212 x||p \u2264 \u03f5 } . Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set L \u2286 T of all targeted predictions in T that are guaranteed to always be robust under the collective perturbation model:\nL = { n \u2208 T \u2223\u2223\u2223\u2223\u2223 ( max x\u2208Bx D\u2211 d=1 w (n) d \u00b7 |x \u2032 d \u2212 xd| p ) < \u03b7(n) } (26)\n= { n \u2208 T \u2223\u2223\u2223max n { w(n) } \u00b7 \u03f5p < \u03b7(n) } . (27)\nThe equality follows from the fact that the most effective way of attacking a prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight. Because we know that all predictions with indices in L are robust, we do not have to include them in the collective optimization problem and can instead compute\n|L|+ min x\u2032\u2208Bx \u2211 n\u2208T\\L I [ x\u2032 \u2208 H(n) ] . (28)\nThe r.h.s. optimization can be solved using the general collective certificate from Theorem 4.2 or any of the more efficient, modified certificates from previous sections.\nWhen using the general collective certificate from Theorem 4.2 with binary data, the budget variables b \u2208 {0, 1}Din can be relaxed to b \u2208 [0, 1]Din . When using the modified collective certificate from \u00a7 E.3, the budget variables with b \u2208 NNin0 can be relaxed to b \u2208 R Nin + . The additional\nconstraint \u2200l \u2208 {1, . . . , Nin} : bl \u2264 \u2223\u2223J(l)\u2223\u2223 can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset J(l)." |
|
}, |
|
{ |
|
"heading": "F BASE CERTIFICATES", |
|
"text": "In the following, we show why the base certificates discussed in \u00a7 5 and summarized in Table 1 hold. In \u00a7 F.3.2 we further present a base certificate (and corresponding collective certificate) that can distinguish between adversarial addition and deletion of bits in binary data.\nF.1 GAUSSIAN SMOOTHING FOR l2 PERTURBATIONS OF CONTINUOUS DATA\nProposition F.1. Given an output gn : RDin \u2192 Y, let fn(x) = argmaxy\u2208Y Prz\u223cN (x,\u03a3) [gn(z) = y] be the corresponding smoothed output with \u03a3 = diag (\u03c3) 2 and \u03c3 \u2208 RDin+ . Given an input x \u2208 RDin and smoothed prediction yn = fn(x), let q = Prz\u223cN (x,\u03a3) [gn(z) = yn]. Then, \u2200x\u2032 \u2208 H(n) : fn(x\u2032) = yn with H(n) defined as in Eq. 2, wd = 1\u03c3d2 , \u03b7 = ( \u03a6(\u22121)(q) )2 and p = 2.\nProof. Based on the definition of the base certificate interface, we need to show that, \u2200x\u2032 \u2208 H : fn(x \u2032) = yn with\nH = { x\u2032 \u2208 RDin \u2223\u2223\u2223\u2223\u2223 Din\u2211 d=1 1 \u03c32d \u00b7 |xd \u2212 x\u2032d|2 < ( \u03a6\u22121(q) )2} . (29)\nEiras et al. (2021) have shown that under the same conditions as above, but with a general covariance matrix \u03a3 \u2208 RDin\u00d7Din+ , a prediction yn is certifiably robust to a perturbed input x\u2032 if\u221a\n(x\u2212 x\u2032)\u03a3\u22121(x\u2212 x\u2032) < 1 2\n( \u03a6\u22121(q)\u2212 \u03a6\u22121(q\u2032) ) , (30)\nwhere q\u2032 = maxy\u2032n \u0338=yn Prz\u223cN (x,\u03a3) [gn(z) = y \u2032 n] is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to 1, we have q\u2032 \u2264 1 \u2212 q. Since \u03a6\u22121 is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 30 and thus a more pessimistic certificate by substituting 1 \u2212 q for q\u2032 (deriving such a \u201dbinary certificate\u201d from a \u201dmulticlass certificate\u201d is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):\u221a\n(x\u2212 x\u2032)\u03a3\u22121(x\u2212 x\u2032) < 1 2\n( \u03a6\u22121(q)\u2212 \u03a6\u22121(1\u2212 q) ) , (31)\nIn our case, \u03a3 is a diagonal matrix diag (\u03c3)2 with \u03c3 \u2208 RDin+ . Thus Eq. 31 is equivalent to\u221a\u221a\u221a\u221aDin\u2211 d=1 (xd \u2212 x\u2032d) 1 \u03c32d (xd \u2212 x\u2032d) < 1 2 ( \u03a6\u22121(q)\u2212 \u03a6\u22121(1\u2212 q) ) . (32)\nFinally, using the fact that \u03a6\u22121(q)\u2212\u03a6\u22121(1\u2212 q) = 2\u03a6\u22121(q) and eliminating the square root shows that we are certifiably robust if\nDin\u2211 d=1 1 \u03c32d \u00b7 |xd \u2212 x\u2032d|2 < ( \u03a6\u22121(q) )2 . (33)\nF.2 UNIFORM SMOOTHING FOR l1 PERTURBATIONS OF CONTINUOUS DATA\nAn alternative base certificate for l1 perturbations is again due to Eiras et al. (2021). Using uniform instead of Gaussian noise later allows us to collective certify robustness to l1-norm-bound perturbations. In the following U(x,\u03bb) with x \u2208 RD, \u03bb \u2208 RD+ refers to a vector-valued random distribution in which the d-th element is uniformly distributed in [xd \u2212 \u03bbd, xd + \u03bbd]. Proposition F.2. Given an output gn : RDin \u2192 Y, let f(x) = argmaxy\u2208Y Prz\u223cU(x,\u03bb) [g(z) = y] be the corresponding smoothed classifier with \u03bb \u2208 RDin+ . Given an input x \u2208 RDin and smoothed prediction y = f(x), let p = Prz\u223cU(x,\u03bb) [g(z) = y]. Then, \u2200x\u2032 \u2208 H(n) : fn(x\u2032) = yn with H(n) defined as in Eq. 
2, wd = 1/\u03bbd, \u03b7 = \u03a6\u22121(q) and p = 1.\nProof. Based on the definition of H(n), we need to prove that \u2200x\u2032 \u2208 H : fn(x\u2032) = yn with\nH = { x\u2032 \u2208 RDin |\nDin\u2211 d=1 1 \u03bbd \u00b7 |xd \u2212 x\u2032d| < \u03a6\u22121(q)\n} , (34)\nEiras et al. (2021) have shown that under the same conditions as above, a prediction yn is certifiably robust to a perturbed input x\u2032 if\nDin\u2211 d=1 | 1 \u03bbd \u00b7 (xd \u2212 x\u2032d) | < 1 2 ( \u03a6\u22121(q)\u2212 \u03a6\u22121(1\u2212 q) ) , (35)\nwhere q\u2032 = maxy\u2032n \u0338=yn Prz\u223cU(x,\u03bb) [gn(z) = y \u2032 n] is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting 1\u2212q for q\u2032. Since \u03a6\u22121(q)\u2212\u03a6\u22121(1\u2212q) = 2\u03a6\u22121(q) and all \u03bbd are non-negative, we know that our prediction is certifiably robust if\nDin\u2211 d=1 1 \u03bbd \u00b7 |xd \u2212 x\u2032d| < \u03a6\u22121(p). (36)\nF.3 VARIANCE-CONSTRAINED CERTIFICATION\nIn the following, we derive the general variance-constrained randomized smoothing certificate from Theorem 5.1, before discussing specific certificates for binary data in \u00a7 F.3.1 and \u00a7 F.3.2.\nVariance smoothing assumes that we make predictions by randomly smoothing a base model\u2019s softmax scores. That is, given base model g : X \u2192 \u2206|Y| mapping from an arbitrary discrete input space X to scores from the (|Y| \u2212 1)-dimensional probability simplex \u2206|Y|, we define the smoothed classifier f(x) = argmaxy\u2208YEz\u223c\u03a8(x) [g(z)y]. Here, \u03a8(x) is an arbitrary distribution over X parameterized by x, e.g a Normal distribution with mean x. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score.\nGiven an input x \u2208 X, smoothed prediction y = f(x) and a perturbed input x\u2032 \u2208 X, we want to determine whether f(x\u2032) = y. By definition of our smoothed classifier, we know that f(x\u2032) = y if y is the label with the highest expected softmax score. In particular, we know that f(x\u2032) = y if y\u2019s softmax score is larger than all other softmax scores combined, i.e.\nEz\u223c\u03a8(x\u2032) [g(z)y] > 0.5 =\u21d2 f(x\u2032) = y. (37)\nComputing Ez\u223c\u03a8(x\u2032) [g(z)y] exactly is usually not tractable \u2013 especially if we later want to evaluate robustness to many x\u2032 from a whole perturbation model B \u2286 X. Therefore, we compute a lower bound on Ez\u223c\u03a8(x\u2032) [g(z)y]. If even this lower bound is larger than 0.5, we know that prediction y is certainly robust. For this, we define a set of functions F with gy \u2208 H and compute the minimum softmax score across all functions from F:\nmin h\u2208F\nEz\u223c\u03a8(x\u2032) [h(z)] > 0.5 =\u21d2 f(x\u2032) = y. (38)\nFor our variance smoothing approach, we define F to be the set of all functions that have a larger or equal expected value and a smaller or equal variance under \u03a8(x), compared to our base model g x. Let \u00b5 = Ez\u223c\u03a8(x) [g(z)y] be the expected softmax score of our base model g for label y. Let \u03b6 = Ez\u223c\u03a8(x) [ (g(z)y \u2212 \u03bd)2 ] be the expected squared distance of the softmax score from a scalar\n\u03bd \u2208 R. (Choosing \u03bd = \u00b5 yields the variance of the softmax score. 
An arbitrary \u03bd is only needed for technical reasons related to Monte Carlo estimation \u00a7 G.2). Then, we define\nF = { h : X \u2192 R \u2223\u2223\u2223 Ez\u223c\u03a8(x) [h(z)] \u2265 \u00b5 \u2227 Ez\u223c\u03a8(x) [(h(z)\u2212 \u03bd)2] \u2264 \u03b6} (39) Clearly, by the definition of \u00b5 and \u03b6, we have gy \u2208 F. Note that we do not restrict functions from H to the domain [0, 1], but allow arbitrary real-valued outputs.\nBy evaluating Eq. 37 with F defined as in Eq. 38, we can determine if our prediciton is robust. To compute the optimal value, we need the following two Lemmata:\nLemma F.3. Given a discrete set X and the set \u03a0 of all probability mass functions over X, any two probability mass functions \u03c01, \u03c02 \u2208 \u03a0 fulfill\u2211\nz\u2208X\n\u03c02(z) \u03c01(z) \u03c02(z) \u2265 1. (40)\nProof. For a fixed probability mass function \u03c01, Eq. 40 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another \u03c0\u0303(z) \u2208 \u03a0:\u2211\nz\u2208X\n\u03c02(z) \u03c01(z) \u03c02(z) \u2265 min \u03c0\u0303\u2208\u03a0 \u2211 z\u2208X \u03c0\u0303(z) \u03c01(z) \u03c0\u0303(z). (41)\nThe r.h.s. term can be expressed as the constrained optimization problem\nmin \u03c0\u0303 \u2211 z\u2208X \u03c0\u0303(z) \u03c01(z) \u03c0\u0303(z) s.t. \u2211 z\u2208X \u03c0\u0303(z) = 1 (42)\nwith the corresponding dual problem\nmax \u03bb\u2208R min \u03c0\u0303 \u2211 z\u2208X \u03c0\u0303(z) \u03c01(z) \u03c0\u0303(z) + \u03bb\n( \u22121 +\n\u2211 z\u2208X \u03c0\u0303(z)\n) . (43)\nThe inner problem is convex in each \u03c0\u0303(z). Taking the gradient w.r.t. to \u03c0\u0303(z) for all z \u2208 X shows that it has its minimum at \u2200z \u2208 X : \u03c0\u0303(z) = \u2212\u03bb\u03c01(z)2 . Substituting into Eq. 43 results in\nmax \u03bb\u2208R \u2211 z\u2208X \u03bb2\u03c01(z) 2 4\u03c01(z) + \u03bb\n( \u22121\u2212\n\u2211 z\u2208X \u03bb\u03c01(z) 2\n) (44)\n=max \u03bb\u2208R \u2212\u03bb2 \u2211 z\u2208X \u03c01(z) 4 \u2212 \u03bb (45)\n=max \u03bb\u2208R\n\u2212\u03bb 2\n4 \u2212 \u03bb (46)\n=1. (47)\nEq. 46 follows from the fact that \u03c01(z) is a valid probability mass function. Due to duality, the optimal dual value 1 is a lower bound on the optimal value of our primal problem Eq. 40.\nLemma F.4. Given a probability distribution D over a R and a scalar \u03bd \u2208 R, let \u00b5 = Ez\u223cD [z] and \u03be = Ez\u223cD [ (z \u2212 \u03bd)2 ] . Then \u03be \u2265 (\u00b5\u2212 \u03bd)2\nProof. Using the definitions of \u00b5 and \u03be, as well as some simple algebra, we can show:\n\u03be \u2265 (\u00b5\u2212 \u03bd)2 (48) \u21d0\u21d2 Ez\u223cD [ (z \u2212 \u03bd)2 ] \u2265 \u00b52 \u2212 2\u00b5\u03bd + \u03bd2 (49)\n\u21d0\u21d2 Ez\u223cD [ z2 \u2212 2z\u03bd + \u03bd2 ] \u2265 \u00b52 \u2212 2\u00b5\u03bd + \u03bd2 (50)\n\u21d0\u21d2 Ez\u223cD [ z2 \u2212 2z\u03bd + \u03bd2 ] \u2265 \u00b52 \u2212 2\u00b5\u03bd + \u03bd2 (51)\n\u21d0\u21d2 Ez\u223cD [ z2 ] \u2212 2\u00b5\u03bd + \u03bd2 \u2265 \u00b52 \u2212 2\u00b5\u03bd + \u03bd2 (52)\n\u21d0\u21d2 Ez\u223cD [ z2 ] \u2265 \u00b52 (53)\nIt is well known for the variance that Ez\u223cD [ (z \u2212 \u00b5)2 ] = Ez\u223cD [ z2 ] \u2212 \u00b52. Because the variance is always non-negative, the above inequality holds.\nUsing the previously described approach and lemmata, we can show the soundness of the following robustness certificate: Theorem 5.1 (Variance-constrained certification). 
Given a model g : X \u2192 \u2206|Y| mapping from discrete set X to scores from the (|Y| \u2212 1)-dimensional probability simplex, let f(x) = argmaxy\u2208YEz\u223c\u03a8(x) [g(z)y] be the corresponding smoothed classifier with smoothing distribution \u03a8(x) and probability mass function \u03c0x(z) = Prz\u0303\u223c\u03a8(x) [z\u0303 = z]. Given an input x \u2208 X and smoothed prediction y = f(x), let \u00b5 = Ez\u223c\u03a8(x) [g(z)y] and \u03b6 = Ez\u223c\u03a8(x) [ (g(z)y \u2212 \u03bd)2 ] with \u03bd \u2208 R. Assuming \u03bd \u2264 \u00b5, then f(x\u2032) = y if\u2211 z\u2208X \u03c0x\u2032(z) 2 \u03c0x(z) < 1 + 1 \u03b6 \u2212 (\u00b5\u2212 \u03bd)2 ( \u00b5\u2212 1 2 ) . (54)\nProof. Following our discussion above, we know that f(x\u2032) = y if Ez\u223c\u03a8(x\u2032) [g(z)y] > 0.5 with F defined as in Eq. 39. We can compute a (tight) lower bound on minh\u2208F Ez\u223c\u03a8(x\u2032) by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z \u2208 X. By the definition of the set F, our optimization problem is\nmin h:X\u2192R Ez\u223c\u03a8(x\u2032) [h(z)] (55) s.t. Ez\u223c\u03a8(x) [h(z)] \u2265 \u00b5, Ez\u223c\u03a8(x) [ (h(z)\u2212 \u03bd)2 ] \u2264 \u03b6. (56)\nThe corresponding dual problem with dual variables \u03b1, \u03b2 \u2265 0 is\nmax \u03b1,\u03b2\u22650 min h:X\u2192R Ez\u223c\u03a8(x\u2032) [h(z)]\n+\u03b1 ( \u00b5\u2212 Ez\u223c\u03a8(x) [h(z)] ) + \u03b2 ( Ez\u223c\u03a8(x) [ (h(z)\u2212 \u03bd)2 ] \u2212 \u03b6 ) .\n(57)\nWe first move move all terms that don\u2019t involve h out of the inner optimization problem:\n= max \u03b1,\u03b2\u22650 \u03b1\u00b5\u2212\u03b2\u03b6 + min h:X\u2192R\nEz\u223c\u03a8(x\u2032) [h(z)]\u2212\u03b1Ez\u223c\u03a8(x) [h(z)] +\u03b2Ez\u223c\u03a8(x) [ (h(z)\u2212 \u03bd)2 ] . (58)\nWriting out the expectation terms and combining them into one sum (or \u2013 in the case of continuous X \u2013 one integral), our dual problem becomes\n= max \u03b1,\u03b2\u22650 \u03b1\u00b5\u2212 \u03b2\u03b6 + min h:X\u2192R \u2211 z\u2208X h(z)\u03c0x\u2032(z)\u2212 \u03b1h(z)\u03c0x(z) + \u03b2 (h(z)\u2212 \u03bd)2 \u03c0x(z) (59)\n(recall that \u03c0x\u2032 and \u03c0x\u2032 refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) in each point z:\n= max \u03b1,\u03b2\u22650 \u03b1\u00b5\u2212 \u03b2\u03b6 + \u2211 z\u2208X min h(z)\u2208R h(z)\u03c0x\u2032(z)\u2212 \u03b1h(z)\u03c0x(z) + \u03b2 (h(z)\u2212 \u03bd)2 \u03c0x(z). (60)\nBecause \u03b2 \u2265 0, each inner optimization problem is convex in h(z). We can thus find the optimal h\u2217(z) by setting the derivative to zero:\nd\ndh(z) h(z)\u03c0x\u2032(z)\u2212 \u03b1h(z)\u03c0x(z) + \u03b2 (h(z)\u2212 \u03bd)2 \u03c0x(z) ! = 0 (61)\n\u21d0\u21d2 \u03c0x\u2032(z)\u2212 \u03b1\u03c0x(z) + 2\u03b2 (h(z)\u2212 \u03bd)\u03c0x(z) ! = 0 (62)\n=\u21d2 h\u2217(z) = \u2212 \u03c0x \u2032(z)\n2\u03b2\u03c0x(z) +\n\u03b1\n2\u03b2 + \u03bd. (63)\nSubstituting into Eq. 59 and simplifying leaves us with the dual problem\nmax \u03b1,\u03b2\u22650\n\u03b1\u00b5\u2212 \u03b2\u03b6 \u2212 \u03b1 2\n4\u03b2 +\n\u03b1 2\u03b2 \u2212 \u03b1\u03bd + \u03bd \u2212 1 4\u03b2 \u2211 z\u2208X \u03c0x\u2032(z) 2 \u03c0x(z) . (64)\nIn the following, let us use \u03c1 = \u2211\nz\u2208X \u03c0x\u2032 (z)\n2\n\u03c0x(z) as a shorthand for the expected likelihood ratio. 
The\nproblem is concave in \u03b1. We can thus find the optimum \u03b1\u2217 by setting the derivative to zero, which gives us \u03b1\u2217 = 2\u03b2(\u00b5\u2212 \u03bd) + 1. Because \u03b2 \u2265 0 and our theorem assumes that \u03bd \u2264 \u00b5, the value \u03b1\u2217 is a feasible solution to the dual problem. Substituting into Eq. 64 and simplifying results in\nmax \u03b2\u22650\n\u03b1\u2217\u00b5\u2212 \u03b2\u03b6 \u2212 \u03b1 \u22172 4\u03b2 + \u03b1\u2217 2\u03b2 \u2212 \u03b1\u2217\u03bd + \u03bd \u2212 1 4\u03b2 \u03c1 (65)\n=max \u03b2\u22650\n\u03b2 ( (\u00b5\u2212 \u03bd)2 \u2212 \u03c32 ) + \u00b5+ 1\n4\u03b2 (1\u2212 \u03c1) . (66)\nLemma F.3 shows that the expected likelihood ratio \u03c1 is always greater than or equal to 1. Lemma F.4 shows that (\u00b5\u2212 \u03bd)2 \u2212 \u03c32 \u2264 0. Therefore Eq. 66 is concave in \u03b2. The optimal value of \u03b2 can again be found by setting the derivative to zero:\n\u03b2\u2217 =\n\u221a 1\u2212 \u03c1\n4 ((\u00b5\u2212 \u03bd)2 \u2212 \u03c32) . (67)\nRecall that our theorem assumes \u03c32 \u2265 (\u00b5\u2212 \u03bd)2 and thus \u03b2\u2217 is real valued. Substituting Eq. 67 into Eq. 66 shows that the maximum of our dual problem is\n\u00b5+ \u221a (1\u2212 p) ((\u00b5\u2212 \u03bd)2 \u2212 \u03c32). (68)\nBy duality, this is a lower bound on our primal problem minh\u2208F Ez\u223c\u03a8(x\u2032) [h(z)]. We know that our prediction is certifiably robust, i.e. f(x) = y, if minh\u2208F Ez\u223c\u03a8(x\u2032) [h(z)] > 0.5. So, in particular, our prediction is robust if\n\u00b5+ \u221a (1\u2212 \u03c1) ((\u00b5\u2212 \u03bd)2 \u2212 \u03c32) > 0.5 (69)\n\u21d0\u21d2 \u03c1 < 1 + 1 \u03c32 \u2212 (\u00b5\u2212 \u03bd)2\n( \u00b5\u2212 1\n2\n)2 (70)\n\u21d0\u21d2 \u2211 z\u2208X \u03c0x\u2032(z) 2 \u03c0x(z) < 1 +\n1\n\u03c32 \u2212 (\u00b5\u2212 \u03bd)2\n( \u00b5\u2212 1\n2\n)2 (71)\nThe last equivalence is the result of inserting the definition of the expected likelihood ratio \u03c1.\nWith Theorem 5.1 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize, this can be done efficiently, as the two following base certificates for binary data demonstrate." |
|
}, |
|
{ |
|
"heading": "F.3.1 BERNOULLI SMOOTHING FOR PERTURBATIONS OF BINARY DATA", |
|
"text": "We begin by proving the base certificate presented in \u00a7 5. Recall that we we use a smoothing distribution F(x,\u03b8) with \u03b8 \u2208 [0, 1]Din that independently flips the d\u2019th bit with probability \u03b8d, i.e. for x, z \u2208 {0, 1}Din and z \u223c F(x,\u03b8) we have Pr[zd \u0338= xd] = \u03b8d.\nCorollary F.5. Given an output gn : {0, 1}Din \u2192 \u2206|Y| mapping to scores from the (|Y| \u2212 1)- dimensional probability simplex, let fn(x) = argmaxy\u2208YEz\u223cF(x,\u03b8) [gn(z)y] be the corresponding smoothed classifier with \u03b8 \u2208 [0, 1]Din . Given an input x \u2208 {0, 1}Din and smoothed prediction yn = fn(x), let \u00b5 = Ez\u223cF(x,\u03b8) [gn(z)y] and \u03b6 = Varz\u223cF(x,\u03b8) [gn(z)y]. Then, \u2200x\u2032 \u2208 H(n) : fn(x \u2032) = yn with H(n) defined as in Eq. 2, wd = ln (\n(1\u2212\u03b8d)2 \u03b8d + (\u03b8d) 2 1\u2212\u03b8d\n) , \u03b7 = ln ( 1 + 1\u03b6 ( \u00b5\u2212 12 )2) and p = 0.\nProof. Based on our definition of the base certificates interface (see Definition 4.1, we must show that \u2200x\u2032 \u2208 H : fn(x\u2032) = yn with\nH = { x\u2032 \u2208 {0, 1}Din \u2223\u2223\u2223\u2223\u2223 Din\u2211 d=1 ln ( (1\u2212 \u03b8d)2 \u03b8d + (\u03b8d) 2 1\u2212 \u03b8d ) \u00b7 |x\u2032d \u2212 xd|0 < ln ( 1 + 1 \u03b6 ( \u00b5\u2212 1 2 )2)} , (72) Because all bits are flipped independently, our probability mass function \u03c0x(z) = Prz\u0303\u223c\u03a8(x) [z\u0303 = z] factorizes:\n\u03c0x(z) = Din\u220f d=1 \u03c0xd(zd) (73)\nwith\n\u03c0xd(zd) = { \u03b8d if zd \u0338= xd 1\u2212 \u03b8d else . (74)\nThus, our expected likelihood ratio can be written as\u2211 z\u2208{0,1}Din \u03c0x\u2032(z) 2 \u03c0x(z) = \u2211 z\u2208{0,1}Din Din\u220f d=1 \u03c0x\u2032d(zd) 2 \u03c0xd(zd) = Din\u220f d=1 \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) . (75)\nFor each dimension d, we can distinguish two cases: If both the perturbed and unperturbed input are the same in dimension d, i.e. x\u2032d = xd, then \u03c0x\u2032 d (z)\n\u03c0xd (z) = 1 and thus\u2211\nzd\u2208{0,1}\n\u03c0x\u2032d(zd) 2\n\u03c0xd(zd) = \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) = \u03b8d + (1\u2212 \u03b8d) = 1. (76)\nIf the perturbed and unperturbed input differ in dimension d, then\u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) = (1\u2212 \u03b8d)2 \u03b8d + (\u03b8d) 2 1\u2212 \u03b8d . (77)\nTherefore, the expected likelihood ratio is Din\u220f d=1 \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) = Din\u220f d=1 ( (1\u2212 \u03b8d)2 \u03b8d + (\u03b8d) 2 1\u2212 \u03b8d )|x\u2032d\u2212xd| . (78)\nDue to Theorem 5.1 (and using \u03bd = \u00b5 when computing the variance), we know that our prediction is robust, i.e. fn(x\u2032) = yn, if\u2211\nz\u2208{0,1}Din\n\u03c0x\u2032(z) 2\n\u03c0x(z) < 1 +\n1\n\u03b6\n( \u00b5\u2212 1\n2\n)2 (79)\n\u21d0\u21d2 Din\u220f d=1\n( (1\u2212 \u03b8d)2\n\u03b8d +\n(\u03b8d) 2\n1\u2212 \u03b8d\n)|x\u2032d\u2212xd| < 1 + 1\n\u03b6\n( \u00b5\u2212 1\n2\n)2 (80)\n\u21d0\u21d2 Din\u2211 d=1 ln\n( (1\u2212 \u03b8d)2\n\u03b8d +\n(\u03b8d) 2\n1\u2212 \u03b8d\n) |x\u2032d \u2212 xd| < ln ( 1 + 1\n\u03b6\n( \u00b5\u2212 1\n2\n)2) . (81)\nBecause xd and x\u2032d are binary, the last inequality is equivalent to Din\u2211 d=1 ln ( (1\u2212 \u03b8d)2 \u03b8d + (\u03b8d) 2 1\u2212 \u03b8d ) |x\u2032d \u2212 xd|0 < ln ( 1 + 1 \u03b6 ( \u00b5\u2212 1 2 )2) . (82)" |
|
}, |
|
{ |
|
"heading": "F.3.2 SPARSITY-AWARE SMOOTHING FOR PERTURBATIONS OF BINARY DATA", |
|
"text": "Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses different probabilities for randomly deleting (1 \u2192 0) and adding (0 \u2192 1) bits to preserve data sparsity. For a random variable z distributed according to the sparsity-aware distribution S(x,\u03b8+,\u03b8\u2212) with x \u2208 {0, 1}Din and addition and deletion probabilities \u03b8+,\u03b8\u2212 \u2208 [0, 1]Din , we have:\nPr[zd = 0] = ( 1\u2212 \u03b8+d )1\u2212xd \u00b7 (\u03b8\u2212d )xd , Pr[zd = 1] = ( \u03b8+d )1\u2212xd \u00b7 (1\u2212 \u03b8\u2212d )xd .\nThe Bernoulli smoothing distribution we discussed in the previous section is a special case of sparsity-aware smoothing with \u03b8+ = \u03b8\u2212. The runtime of the robustness certificate derived by Bojchevski et al. (2020) increases exponentially with the number of unique values in \u03b8+ and \u03b8\u2212, which makes it unsuitable for localized smoothing. Variance-constrained smoothing, on the other hand, allows us to efficiently compute a certificate in closed form.\nCorollary F.6. Given an output gn : RDin \u2192 \u2206|Y| mapping to scores from the (|Y| \u2212 1)- dimensional probability simplex, let fn(x) = argmaxy\u2208YEz\u223cS(x,\u03b8+,\u03b8\u2212) [gn(z)y] be the corresponding smoothed classifier with \u03b8+,\u03b8\u2212 \u2208 [0, 1]Din . Given an input x \u2208 {0, 1}Din and smoothed prediction yn = fn(x), let \u00b5 = Ez\u223cS(x,\u03b8+,\u03b8\u2212) [gn(z)y] and \u03b6 = Varz\u223cS(x,\u03b8+,\u03b8\u2212) [gn(z)y]. Then, \u2200x\u2032 \u2208 H : fn(x\u2032) = y for\nH = { x\u2032 \u2208 {0, 1}Din |\nDin\u2211 d=1 \u03b3+d \u00b7 I [xd = 0 \u0338= x \u2032 d] + \u03b3 \u2212 d \u00b7 I [xd = 1 \u0338= x \u2032 d] < \u03b7\n} , (83)\nwhere \u03b3+,\u03b3\u2212 \u2208 RDin , \u03b3+d = ln ( (\u03b8\u2212d ) 2\n1\u2212\u03b8+d +\n(1\u2212\u03b8\u2212d ) 2\n\u03b8+d\n) , \u03b3\u2212d = ln ( (1\u2212\u03b8+d ) 2\n\u03b8\u2212d +\n(\u03b8+d ) 2 1\u2212\u03b8\u2212d .\n) and \u03b7 =\nln ( 1 + 1\u03b6 ( \u00b5\u2212 12 )2) .\nProof. Just like with the Bernoulli distribution we discussed in the previous section, all bits are flipped independently, meaning our probability mass function \u03c0x(z) = Prz\u0303\u223c\u03a8(x) [z\u0303 = z] factorizes:\n\u03c0x(z) = Din\u220f d=1 \u03c0xd(zd) (84)\nwith\n\u03c0xd(zd) = { \u03b8d if zd \u0338= xd 1\u2212 \u03b8d else . (85)\nAs before, our expected likelihood ratio can be written as\n\u2211 z\u2208{0,1}Din \u03c0x\u2032(z) 2 \u03c0x(z) = \u2211 z\u2208{0,1}Din Din\u220f d=1 \u03c0x\u2032d(zd) 2 \u03c0xd(zd) = Din\u220f d=1 \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) . (86)\nWe can now distinguish three cases. If both the perturbed and unperturbed input are the same in dimension d, i.e. x\u2032d = xd, then \u03c0x\u2032 d (z)\n\u03c0xd (z) = 1 and thus\n\u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) = \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) = 1. (87)\nIf x\u2032d = 1 and xd = 0, i.e. a bit was added, then\u2211 zd\u2208{0,1} \u03c0x\u2032d(z) 2 \u03c0xd(z) = \u2211 zd\u2208{0,1} \u03c01(zd) 2 \u03c00(zd) = \u03c01(0) 2 \u03c00(0) + \u03c01(1) 2 \u03c00(1) = ( \u03b8\u2212d )2 1\u2212 \u03b8+d + ( 1\u2212 \u03b8\u2212d )2 \u03b8+d\n(88)\nIf x\u2032d = 0 and xd = 1, i.e. 
a bit was deleted, then\u2211 zd\u2208{0,1} \u03c0x\u2032d(z) 2 \u03c0xd(z) = \u2211 zd\u2208{0,1} \u03c00(zd) 2 \u03c01(zd) = \u03c00(0) 2 \u03c01(0) + \u03c00(1) 2 \u03c01(1) = ( 1\u2212 \u03b8+d )2 \u03b8\u2212d + ( \u03b8+d )2 1\u2212 \u03b8\u2212d . (89)\nTherefore, the expected likelihood ratio is\nDin\u220f d=1 \u2211 zd\u2208{0,1} \u03c0x\u2032d(zd) 2 \u03c0xd(zd) (90)\n= Din\u220f d=1\n( ( \u03b8\u2212d )2\n1\u2212 \u03b8+d +\n( 1\u2212 \u03b8\u2212d )2 \u03b8+d )I[xd=0 \u0338=x\u2032d|](( 1\u2212 \u03b8+d )2 \u03b8\u2212d + ( \u03b8+d )2 1\u2212 \u03b8\u2212d )I[xd=1\u0338=x\u2032d|] (91)\n= Din\u220f d=1 exp ( \u03b3+d )I[xd=0\u0338=x\u2032d|] \u00b7 exp (\u03b3\u2212d )I[xd=1\u0338=x\u2032d|] . (92)\nIn the last equation, we have simply used the shorthands \u03b3+d and \u03b3 \u2212 d defined in Corollary F.6. Due to Theorem 5.1 (and using \u03bd = \u00b5 when computing the variance), we know that our prediction is robust, i.e. fn(x\u2032) = yn, if\u2211\nz\u2208{0,1}Din\n\u03c0x\u2032(z) 2\n\u03c0x(z) < 1 +\n1\n\u03b6\n( \u00b5\u2212 1\n2\n)2 (93)\n\u21d0\u21d2 Din\u220f d=1 exp ( \u03b3+d )I[xd=0 \u0338=x\u2032d|] \u00b7 exp (\u03b3\u2212d )I[xd=1\u0338=x\u2032d|] < 1 + 1\u03b6 ( \u00b5\u2212 1 2 )2 (94)\n\u21d0\u21d2 Din\u2211 d=1 \u03b3+d \u00b7 I [xd = 0 \u0338= x \u2032 d|] \u00b7 \u03b3\u2212d \u00b7 I [xd = 1 \u0338= x \u2032 d|] < ln\n( 1 + 1\n\u03b6\n( \u00b5\u2212 1\n2\n)2) . (95)\nIt should be noted that this certificate does not comply with our interface for base certificates (see Definition 4.1), meaning we can not directly use it to certify robustness to norm-bound perturbations using our collective linear program from Theorem 4.2. We can however use it to certify collective robustness to the more refined threat model used in (Schuchardt et al., 2021): Let the set of admissible perturbed inputs be Bx ={ x\u2032 \u2208 {0, 1}Din | \u2211Din d=1 [xd = 0 \u0338= x\u2032d|] \u2264 \u03f5+ \u2227 \u2211Din d=1 [xd = 1 \u0338= x\u2032d|] \u2264 \u03f5\u2212 } with \u03f5+, \u03f5y \u2208 N0 specifying the number of bits the adversary is allowed to add or delete. We can now follow the procedure outlined in \u00a7 3.2 to combine the per-prediction base certificates into a collective certificate for our new collective perturbation model. As discussed in, we can bound the number of predictions that are robust to simultaneous attacks by minimizing the number of predictions that are certifiably robust according to their base certificates:\nmin x\u2032\u2208Bx \u2211 n\u2208T I [fn(x \u2032) = yn] \u2265 min x\u2032\u2208Bx \u2211 n\u2208T I [ x\u2032 \u2208 H(n) ] . (96)\nInserting the linear inequalities characterizing our perturbation model and base certificates results in:\nmin x\u2032\u2208{0,1}Din \u2211 n\u2208T I [ Din\u2211 d=1 \u03b3+d \u00b7 I [xd = 0 \u0338= x \u2032 d] + \u03b3 \u2212 d \u00b7 I [xd = 1 \u0338= x \u2032 d] < \u03b7 (n) ] (97)\ns.t. Din\u2211 d=1 [xd = 0 \u0338= x\u2032d|] \u2264 \u03f5+, Din\u2211 d=1 [xd = 1 \u0338= x\u2032d|] \u2264 \u03f5\u2212. (98)\nInstead of optimizing over the perturbed input x\u2032, we can define two vectors b+, b\u2212 \u2208 {0, 1}Din that indicate in which dimension bits were added or deleted. Using these new variables, Eq. 97 can be rewritten as\nmin b+,b\u2212\u2208{0,1}Din \u2211 n\u2208T I [( \u03b3+ )T b+ + ( \u03b3\u2212 )T b\u2212 < \u03b7(n) ] (99)\ns.t. sum{b+} \u2264 \u03f5+, sum{b\u2212} \u2264 \u03f5\u2212, (100)\u2211 d|xd=1 b+d = 0, \u2211 d|xd=0 b\u2212d = 0. 
(101)\nThe last two constraints ensure that bits can only be deleted where xd = 1 and bits can only be added where xd = 0. Finally, we can use the procedure for replacing the indicator functions with indicator variables that we discussed in \u00a7 D to restate the above problem as the mixed-integer problem\nmin b+,b\u2212\u2208{0,1}Din ,t\u2208{0,1}Dout \u2211 n\u2208T tn (102)\ns.t. ( \u03b3+ )T b+ + ( \u03b3\u2212 )T b\u2212 \u2265 (1\u2212 tn)\u03b7(n), (103)\nsum{b+} \u2264 \u03f5+, sum{b\u2212} \u2264 \u03f5\u2212, (104)\u2211 d|xd=1 b+d = 0, \u2211 d|xd=0 b\u2212d = 0. (105)\nThe first constraint ensures that tn can only be set to 0 if the l.h.s. is greater or equal \u03b7n, i.e. only when the base certificate can no longer guarantee robustness. The efficiency of the certificate can be improved by applying any of the techniques discussed in \u00a7 E." |
|
}, |
|
{ |
|
"heading": "G MONTE CARLO RANDOMIZED SMOOTHING", |
|
"text": "To make predictions and certify robustness, randomized smoothing requires computing certain properties of the distribution of a base model\u2019s output, given an input smoothing distribution. For example, the certificate of Cohen et al. (2019) assumes that the smoothed model f predicts the most likely label output by base model g, given a smoothing distribution N (0, \u03c3 \u00b7 1): f(x) = argmaxy\u2208Y Prz\u223cN (0,\u03c3\u00b71) [g(x+ z) = y]. To certify the robustness of a smoothed prediction y = f(x) for a specific input x, we have to compute the probability q = Prz\u223cN (0,\u03c3\u00b71) [g(x+ z) = y] to then calculate the maximum certifiable radius \u03c3\u03a6\u22121(q) with standard-normal inverse CDF \u03a6\u22121. For complicated models like deep neural networks, computing such properties in closed form is usually not tractable. Instead, they have to be estimated using Monte Carlo sampling. The result are predictions and certificates that only hold with a certain probability.\nRandomized smoothing with Monte Carlo sampling usually consists of three distinct steps:\n1. First, a small number of samples N1 from the smoothing distribution are used to generate a candidate prediction y\u0302, e.g. the most frequently predicted class.\n2. Then, a second round of N2 samples is taken and a statistical test is used to determine whether the candidate prediction is likely to be the actual prediction of smoothed classifier f , i.e. whether y\u0302 = f(x) with a certain probability (1 - \u03b11). If this is not the case, one has to abstain from making a prediction (or generate a new candidate prediction).\n3. To certify the robustness of prediction y\u0302, a final round of N3 samples is taken to estimate all quantities needed for the certificate.\nIn the case of (Cohen et al., 2019), we need to estimate the probability q = Prz\u223cN (0,\u03c3\u00b71) [g(x+ z) = y\u0302] to compute the certificate \u03c3\u03a6\u22121(q), whose strength is monotonically increasing in q. To ensure that the certificate holds with high probability (1 - \u03b12), we have to compute a probabilistic lower bound q \u2264 q. Instead of performing two separate round of sampling, one can also re-use the same samples for the abstention test and certification. One particularly simple abstention mechanism is to just compute the Monte Carlo randomized smoothing certificate to determine whether \u2200x\u2032 \u2208 {x} : f(x\u2032) = y\u0302 with high probability, i.e. whether the prediction is robust to input x\u2032 that is the result of \u201dperturbing\u201d clean input x with zero adversarial budget.\nIn the following, we discuss how we perform Monte Carlo randomized smoothing for our base certificates, as well as the baselines we use for our experimental evaluation. In \u00a7 G.4, we discuss how we account for the multiple comparisons problem, i.e. the fact that we are not just trying to probabilistically certify a single prediction, but multiple predictions at once." |
|
}, |
|
{ |
|
"heading": "G.1 MONTE CARLO BASE CERTIFICATES FOR CONTINUOUS DATA", |
|
"text": "For our base certificates for continuous data, we follow the approach we already discussed in the previous paragraphs (recall that the certificate of Cohen et al. (2019) is a special case of our certificate with Gaussian noise for l2 perturbations). We are given an input space XDin , label space Y, base model (or \u2013 in the case of multi-output classifiers \u2013 base model output) g : XDin \u2192 Y and smoothing distribution \u03a8(x) (either multivariate Gaussian or multivariate uniform). To generate a candidate prediction, we apply the base classifier to N1 samples from the smoothing distribution in order to obtain predictions ( y(1), . . . , y(N1) ) and compute the majority prediction\ny\u0302 = argmaxy\u2208Y { n | y(n) = y\u0302 } . Recall that for Gaussian and uniform noise, our certificate guarantees \u2200x\u2032 \u2208 H : f(x) = y\u0302 for\nH = { x\u2032 \u2208 XDin \u2223\u2223\u2223\u2223\u2223 Din\u2211 d=1 wd \u00b7 |x\u2032d \u2212 xd|p < \u03b7 } ,\nwith \u03b7 = ( \u03a6\u22121(q) )2 or \u03b7 = \u03a6\u22121(q) (depending on the distribution), q = Prz\u223cN (0,\u03c3\u00b71) [g(x+ z) = y\u0302] and standard-normal inverse CDF \u03a6\u22121. To obtain a probabilistic certificate that holds with high probability 1\u2212\u03b1, we need a probabilistic lower bound on \u03b7. Both \u03b7 are monotonically increasing in q, i.e. we can bound them by finding a lower bound q on q. For this, we\ntake N2 more samples from the smoothing distribution and compute a Clopper-Pearson lower confidence bound (Clopper & Pearson, 1934) on q. For abstentions, we use the aforementioned simple mechanism: We test whether x \u2208 H. Given the definition of H, this is equivalent to testing whether\n0 < \u03a6\u22121(q)\n\u21d0\u21d2 \u03a6(0) < q \u21d0\u21d2 0.5 < q.\nIf q \u2264 0.5, we abstain." |
|
}, |
|
{ |
|
"heading": "G.2 MONTE CARLO VARIANCE-CONSTRAINED CERTIFICATION", |
|
"text": "For variance-constrained certification, we smooth a model\u2019s softmax scores. That is, we are given an input space XDin , label space Y, base model (or \u2013 in the case of multi-output classifiers \u2013 base model output) g : XDin \u2192 \u2206|Y| with (|Y| \u2212 1)-dimensional probability simplex \u2206|Y| and smoothing distribution \u03a8(x) (Bernoullli or sparsity-aware noise, in the case of binary data). To generate a candidate prediction, we apply the base classifier to N1 samples from the smoothing distribution in order to obtain vectors ( s(1), . . . , s(N1) ) with s \u2208 \u2206|Y|, compute the average softmax scores\ns = 1N1 \u2211N n=1 s and select the label with the highest score y\u0302 = argmaxy sy .\nRecall that our certificate guarantees robustness if the optimal value of the following optimization problem is greater than 0.5:\nmin h:X\u2192R Ez\u223c\u03a8(x\u2032) [h(z)] (106) s.t. Ez\u223c\u03a8(x) [h(z)] \u2265 \u00b5, Ez\u223c\u03a8(x) [ (h(z)\u2212 \u03bd)2 ] \u2264 \u03b6, (107)\nwith \u00b5 = Ez\u223c\u03a8(x) [g(z)y\u0302], \u03b6 = Ez\u223c\u03a8(x) [ (g(z)y\u0302 \u2212 \u03bd)2 ] and a fixed scalar \u03bd \u2208 R. To obtain a\nprobabilistic certificate, we have to compute a probabilistic lower bound on the optimal value of the optimization problem. Because it is a minimization problem, this can be achieved by loosening its constraints, i.e. computing a probabilistic lower bound \u00b5 on \u00b5 and a probabilistic upper bound \u03b6 on \u03b6.\nLike in CDF-smoothing (Kumar et al., 2020), we bound the parameters using CDF-based nonparametric confidence intervals. Let F (s) = Prz\u223c\u03a8(x) [g(z)y\u0302 \u2264 s] be the CDF of gy\u0302(Z) with Z \u223c \u03a8(x). Define M thresholds \u2264 0\u03c41 \u2264 \u03c42 . . . , \u03c4M\u22121 \u2264 \u03c4M \u2264 1 with \u2200m : \u03c4m \u2208 [0, 1]. We then take N2 samples x(1), . . . ,x(N2) from the smoothing distribution to compute the empirical CDF F\u0303 (s) = \u2211N2 n=1 I [ g(z(n))y\u0302 \u2264 s ] . We can then use the Dvoretzky-Keifer-Wolfowitz inequality (Dvoretzky et al., 1956) to compute an upper bound F\u0302 and a lower bound F on the CDF of gy\u0302:\nF (s) = max ( F\u0303 (s)\u2212 \u03c5, 0 ) \u2264 F (s) \u2264 min ( F\u0303 (s) + \u03c5, 1 ) = F (s), (108)\nwith \u03c5 = \u221a\nln 2/\u03b1 2\u00b7N2 , which holds with high probability (1\u2212 \u03b1). Using these bounds on the CDF, we\ncan bound \u00b5 = Ez\u223c\u03a8(x) [g(z)y\u0302] as follows (Anderson, 1969):\n\u00b5 \u2265 \u03c4M \u2212 \u03c41F (\u03c41) + M\u22121\u2211 m=1 (\u03c4m+1 \u2212 \u03c4m)F (\u03c4m). (109)\nThe parameter \u03b6 = Ez\u223c\u03a8(x) [ (g(z)y\u0302 \u2212 \u03bd)2 ] can be bounded in a similar fashion. Define\n\u03be0, . . . , \u03beM \u2208 R+ with:\n\u03be0 = max \u03ba\u2208[0,\u03c41]\n( (\u03ba\u2212 \u03bd)2 ) \u03beM = max\n\u03ba\u2208[\u03c4M ,1]\n( (\u03ba\u2212 \u03bd)2 ) \u03bem = max\n\u03ba\u2208[\u03c4m,\u03c4m+1]\n( (\u03ba\u2212 \u03bd)2 ) \u2200m \u2208 {1, . . . ,M \u2212 1},\n(110)\ni.e. compute the maximum squared distance to \u03bd within each bin [\u03c4m, \u03c4m+1]. 
Then:\n$$\zeta \leq \xi_0 F(\tau_1) + \xi_M \left(1 - F(\tau_M)\right) + \sum_{m=1}^{M-1} \xi_m \left(F(\tau_{m+1}) - F(\tau_m)\right) \quad (111)$$\n$$= \xi_M + \sum_{m=1}^{M} (\xi_{m-1} - \xi_m) F(\tau_m) \quad (112)$$\n$$\leq \xi_M + \sum_{m=1}^{M} (\xi_{m-1} - \xi_m) \left( \mathbb{I}\left[\xi_{m-1} \geq \xi_m\right] \overline{F}(\tau_m) + \mathbb{I}\left[\xi_{m-1} < \xi_m\right] \underline{F}(\tau_m) \right) \quad (113)$$\nwith probability $(1 - \alpha)$. In the first inequality, we bound the expected squared distance from $\nu$ by assuming that the probability mass in each bin $[\tau_m, \tau_{m+1}]$ is concentrated at the farthest point from $\nu$. The equality is the result of reordering the telescoping sum. In the second inequality, we upper-bound the CDF where it is multiplied with a non-negative value and lower-bound it where it is multiplied with a negative value.\nWith the probabilistic bounds $\underline{\mu}$ and $\overline{\zeta}$ we can now – in principle – evaluate our robustness certificate, i.e. check whether\n$$\sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} < 1 + \frac{1}{\overline{\zeta} - \left(\underline{\mu} - \nu\right)^2} \left(\underline{\mu} - \frac{1}{2}\right)^2, \quad (114)$$\nwhere the $\pi$ are the probability mass functions of smoothing distributions $\Psi(x)$ and $\Psi(x')$. But one crucial detail of Theorem 5.1 underlying the certificate was that it only holds for $\nu \leq \mu$, i.e. only when this condition is fulfilled can we compute the certificate in closed form by solving the corresponding dual problem. To use the method with Monte Carlo sampling, one has to ensure that $\nu \leq \mu$ by first computing $\underline{\mu}$ and then choosing some smaller $\nu$.\nIn our experiments, we use an alternative method that allows us to use arbitrary $\nu$: From our proof of Theorem 5.1 we know that the dual problem of Eq. 106 is\n$$\max_{\alpha, \beta \geq 0} \; \alpha\mu - \beta\zeta - \frac{\alpha^2}{4\beta} + \frac{\alpha}{2\beta} - \alpha\nu + \nu - \frac{1}{4\beta} \sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)}. \quad (115)$$\nInstead of trying to find an optimal $\alpha$ (which causes problems in subsequent derivations if $\nu \not\leq \mu$), we can simply choose $\alpha = 1$. By duality, the result is still a lower bound on the primal problem, i.e. the certificate remains valid. The dual problem becomes\n$$\max_{\beta \geq 0} \; \mu - \beta\zeta + \frac{1}{4\beta} - \frac{1}{4\beta} \sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)}. \quad (116)$$\nThe problem is concave in $\beta$ (because the expected likelihood ratio is $\geq 1$). Finding the optimal $\beta$, comparing the result to $0.5$ and solving for the expected likelihood ratio shows that a prediction is robust if\n$$\sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} < 1 + \frac{1}{\overline{\zeta}} \left(\underline{\mu} - \frac{1}{2}\right)^2. \quad (117)$$\nFor our abstention mechanism, like in the previous section, we compute the certificate $\mathbb{H}$ and then test whether $x \in \mathbb{H}$. In the case of Bernoulli smoothing and sparsity-aware smoothing, this corresponds to testing whether\n$$0 < \ln\left(1 + \frac{1}{\overline{\zeta}} \left(\underline{\mu} - \frac{1}{2}\right)^2\right) \quad (118)$$\n$$\iff \underline{\mu} > \frac{1}{2}. \quad (119)$$\nIf $\underline{\mu} \leq \frac{1}{2}$, we abstain." |
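The bounds of Eqs. 108 to 113 can be computed directly from the sampled softmax scores. The following numpy sketch mirrors our reconstruction of the derivation above; the inputs (sampled scores, thresholds, nu, alpha) are placeholders and the function name is illustrative.

```python
import numpy as np

def variance_constrained_bounds(scores, taus, nu, alpha):
    """DKW-based bounds on mu = E[g(z)_y] and zeta = E[(g(z)_y - nu)^2] (sketch)."""
    scores = np.asarray(scores)
    taus = np.asarray(taus)
    n2 = len(scores)
    ups = np.sqrt(np.log(2.0 / alpha) / (2.0 * n2))        # DKW band width
    ecdf = np.array([(scores <= t).mean() for t in taus])  # empirical CDF at the thresholds
    f_up = np.minimum(ecdf + ups, 1.0)                     # upper CDF bound
    f_lo = np.maximum(ecdf - ups, 0.0)                     # lower CDF bound

    # Lower bound on mu (Eq. 109): push the mass of each bin to its left endpoint.
    mu_lo = taus[-1] - taus[0] * f_up[0] - np.sum(np.diff(taus) * f_up[1:])

    # Upper bound on zeta (Eqs. 110-113): push the mass of each bin to the point
    # farthest from nu, then bound the CDF according to the sign of its coefficient.
    edges = np.concatenate(([0.0], taus, [1.0]))
    xi = np.maximum((edges[:-1] - nu) ** 2, (edges[1:] - nu) ** 2)
    diffs = xi[:-1] - xi[1:]                               # xi_{m-1} - xi_m
    zeta_up = xi[-1] + np.sum(np.where(diffs >= 0, diffs * f_up, diffs * f_lo))
    return mu_lo, zeta_up
```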
|
}, |
|
{ |
|
"heading": "G.3 MONTE CARLO CENTER SMOOTHING", |
|
"text": "While we can not use center smoothing as a base certificate, we benchmark our method against it during our experimental evaluation. The generation of candidate predictions, the abstention mechanism and the certificate are explained in (Kumar & Goldstein, 2021). The authors allow multiple\noptions for generating candidate predictions. We use the \u201d\u03b2 minimum enclosing ball\u201d with \u03b2 = 2 that is based on pair-wise distance calculations." |
|
}, |
|
{ |
|
"heading": "G.4 MULTIPLE COMPARISONS PROBLEM", |
|
"text": "The first step of our collective certificate is to compute one base certificate for each of the Dout predictions of the multi-output classifier. With Monte Carlo randomized smoothing, we want all of these probabilistic certificates to simultaneously hold with a high probability (1 \u2212 \u03b1). But as the number of certificates increases, so does the probability of at least one of them being invalid. To account for this multiple comparisons problem, we use Bonferroni (Bonferroni, 1936) correction, i.e. compute each Monte Carlo certificate such that it holds with probability (1\u2212 \u03b1n ). For base certificates that only depend on qn = Prz\u223c\u03a8(n) [gn(z) = y\u0302n], i.e. the probability of the base classifier predicting a particular label y\u0302n under the smoothing distribution, one can also use the strictly better Holm correction (Holm, 1979). This includes our Gaussian and uniform smoothing certificates for continuous data. Holm correction is a procedure than can be used to correct for the multiple comparisons problem when performing multiple arbitrary hypothesis tests. Given N hypotheses, their p-values are ordered in ascending order p1, . . . , pN . Starting at i = 1, the i\u2019th hypothesis is rejected if pi < \u03b1N+1\u2212i , until one reaches an i such that pi \u2265 \u03b1 N+1\u2212i .\nFischer et al. (2021) proposed to use Holm correction as part of their procedure for certifying that all (non-abstaining) predictions of an image segmentation model are robust to adversarial perturbations. In the following, we first summarize their approach and then discuss how Holm correction can be used for certifying our notion of collective robustness, i.e. certifying the number of robust predictions. As in \u00a7 G.1, the goal is to obtain a lower bound q\nn on qn = Prz\u223c\u03a8(n) [gn(z) = y\u0302n] for\neach of theDout classifier outputs. Assume we takeN2 samples z(1), . . . ,z(N2) from the smoothing distribution. Let \u03bdn = \u2211N2 i=1 I [ gn(z (i)) = y\u0302n ]\nand let \u03c0 : {1, . . . , Dout} \u2192 {1, . . . , Dout} be a bijection that orders the \u03bdn in descending order, i.e. \u03bd\u03c0(1) \u2265 \u03bd\u03c0(2) \u00b7 \u00b7 \u00b7 \u2265 \u03bd\u03c0(Dout). Instead of using Clopper-Pearson confidence intervals to obtain tight lower bounds on the qn, Fischer et al. (2021) define a threshold \u03c4 \u2208 [0.5, 1) and use Binomial tests to determine for which n the bound \u03c4 \u2264 qn holds with high-probability. Let BinP (\u03bdn, N2,\u2264, \u03c4) be the p-value of the one-sided binomial test, which is monotonically decreasing in \u03bdn. Following the Holm correction scheme, the authors test whether\nBinP ( \u03bd\u03c0(k), N2,\u2264, \u03c4 ) <\n\u03b1\nDout + 1\u2212 k (120)\nfor k = 1, . . . , Dout until reaching a k\u2217 for which the null-hypothesis can no longer be rejected, i.e. the p-value is g.e.q. \u03b1Dout+1\u2212k\u2217 . They then know that with probability 1\u2212 \u03b1, the bound \u03c4 \u2264 qn holds for all n \u2208 {\u03c0(k) | k \u2208 {1, . . . , k\u2217}. For these outputs, they use the lower bound \u03c4 to compute robustness certificates. They abstain with all other outputs.\nThis approach is sensible when one is concerned with the least robust prediction from a set of predictions. But our collective certificate benefits from having tight robustness guarantees for each of the individual predictions. Holm correction can be used with arbitrary hypothesis tests. 
For instance, we can use a different threshold $\tau_n$ per output $g_n$, i.e. test whether\n$$\mathrm{BinP}\left(\nu_{\pi(k)}, N_2, \leq, \tau_{\pi(k)}\right) < \frac{\alpha}{D_{\mathrm{out}} + 1 - k} \quad (121)$$\nfor $k = 1, \dots, D_{\mathrm{out}}$. In particular, we can use\n$$\tau_n = \sup \left\{ t \;\middle|\; \mathrm{BinP}(\nu_n, N_2, \leq, t) < \frac{\alpha}{D_{\mathrm{out}} + 1 - \pi^{-1}(n)} \right\}, \quad (122)$$\ni.e. choose the largest threshold such that the null hypothesis can still be rejected. Eq. 122 is the lower Clopper-Pearson confidence bound with significance $\frac{\alpha}{D_{\mathrm{out}} + 1 - \pi^{-1}(n)}$. This means that, instead of performing hypothesis tests, we can obtain probabilistic lower bounds $\underline{q}_n \leq q_n$ by computing Clopper-Pearson confidence bounds with significance parameters $\frac{\alpha}{D_{\mathrm{out}}}, \dots, \frac{\alpha}{1}$. The $\underline{q}_n$ can then be used to compute the base certificates. Due to the definition of the $\tau_n$, all of the null hypotheses are rejected, i.e. we obtain valid probabilistic lower bounds on all $q_n$. We can thus use the abstention mechanism from § G.1, i.e. only abstain if $\underline{q}_n \leq 0.5$.\nIn our experiments (see § 7), we use Holm correction for the naïve isotropic randomized smoothing baselines and the weaker Bonferroni correction for our localized smoothing base certificates. This choice can only skew the results slightly in favor of our baselines. Holm correction can in principle also be used when computing the base certificates, in order to improve our proposed collective certificate.\nH COMPARISON TO THE COLLECTIVE CERTIFICATE OF FISCHER ET AL. (2021)\nOur collective certificate based on localized smoothing is designed to bound the number of simultaneously robust predictions. Fischer et al. (2021) designed SegCertify to determine whether all predictions are simultaneously robust. As discussed in § 3.2, their work is based on the naïve collective certification approach applied to isotropic Gaussian smoothing: They first certify each output independently, then count the number of certifiably robust predictions for a specific adversarial budget and finally test whether the number of certifiably robust predictions equals the overall number of predictions. To obtain better guarantees in practical scenarios, they further propose to\n• use Holm correction to address the multiple comparisons problem (see § G.4),\n• abstain at a higher rate to avoid “bad components”, i.e. predictions $y_n$ that have a low consistency $q_n = \Pr_{z \sim \mathcal{N}(x, \sigma)}\left[g_n(z) = y_n\right]$ and thus very small certifiable radii.\nA more technical summary of their method can be found in § G.4.\nIn the following, we discuss why our certificate can always offer guarantees that are at least as strong as SegCertify, both for our notion of collective robustness (number of robust predictions) and their notion of collective robustness (robustness of all predictions). In short, isotropic smoothing is a special case of localized smoothing, and Holm correction can also be used for our base certificates. Before proceeding, please see the discussion of Monte Carlo base certificates and Clopper-Pearson confidence intervals in § G.1 and of the multiple comparisons problem in § G.4.\nA direct consequence of the results in § G.4 is that using Clopper-Pearson confidence intervals and Holm correction will yield stronger per-prediction robustness guarantees and lower abstention rates than the method of Fischer et al. (2021). 
The Clopper-Pearson-based method only abstains if one cannot guarantee that $q_n > 0.5$ with high probability, while their method abstains if one cannot guarantee that $q_n \geq \tau$ with $\tau \geq 0.5$ (or if specific other predictions abstain). For all non-abstaining predictions, the Clopper-Pearson-based certificate will be at least as strong as the one obtained using a single threshold $\tau$, as it computes the tightest bound for which the null hypothesis can still be rejected (see Eq. 122).\nConsequently, when certifying our notion of collective robustness, i.e. determining the number of robust predictions given adversarial budget $\epsilon$, a naïve collective robustness certificate (i.e. counting the number of predictions whose robustness is guaranteed by the base certificates) based on Clopper-Pearson bounds will also be stronger than the method of Fischer et al. (2021). It should however be noted that their method could potentially be used with other methods of family-wise error rate correction, although they state that “these methods do not scale to realistic segmentation problems” and do not discuss any further details.\nConversely, when certifying their notion of collective robustness, i.e. determining whether all non-abstaining predictions are robust given adversarial budget $\epsilon$, the certificate based on Clopper-Pearson confidence bounds is also at least as strong as that of Fischer et al. (2021). To certify their notion of robustness, they simply iterate over all predictions and determine whether all non-abstaining predictions are certifiably robust, given $\epsilon$. Naturally, as the Clopper-Pearson-based certificates are stronger, any prediction that is robust according to (Fischer et al., 2021) is also robust according to the Clopper-Pearson-based certificates. The only difference is that, for $\tau > 0.5$, their method will have more abstaining predictions. But, due to the direct correspondence of Clopper-Pearson confidence bounds and binomial tests, we can modify our abstention mechanism to obtain exactly the same set of abstaining predictions: We simply have to use $\underline{q}_n \leq \tau$ instead of $\underline{q}_n \leq 0.5$ as our abstention criterion.\nFinally, it should be noted that our proposed collective certificate based on linear programming is at least as strong as the naïve collective certificate (see Eq. 1.1 and Eq. 1.2 in § 3.2). Thus, letting the set of targeted predictions $\mathbb{T}$ be the set of all non-abstaining predictions and checking whether the collective certificate guarantees robustness for all of $\mathbb{T}$ will also result in a certificate that is at least as strong as that of Fischer et al. (2021) in their setting.\nI COMPARISON TO THE COLLECTIVE CERTIFICATE OF SCHUCHARDT ET AL. (2021)\nIn the following, we first present the collective certificate for binary graph-structured data proposed by Schuchardt et al. (2021) (see § I.1). We then show that, when using sparsity-aware smoothing distributions (Bojchevski et al., 2020) – the family of smoothing distributions used both in our work and in that of Schuchardt et al. (2021) – our certificate subsumes their certificate. That is, our collective robustness certificate based on localized randomized smoothing can provide the same robustness guarantees (see § I.2)." |
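To make the per-output bound computation from § G.4 concrete, here is a small numpy/scipy sketch. The success counts and the significance ordering follow Eq. 122; the function and argument names are illustrative.

```python
import numpy as np
from scipy import stats

def per_output_lower_bounds(successes, n2, alpha):
    """Clopper-Pearson lower bounds on q_n with Holm-ordered significance levels (sketch).

    successes: integer array with nu_n, the number of the n2 samples on which output n
    predicted its candidate label. The output with the highest count is assigned
    significance alpha / D_out, the next one alpha / (D_out - 1), ..., the lowest alpha / 1.
    """
    successes = np.asarray(successes)
    d_out = len(successes)
    order = np.argsort(-successes)                      # pi(1), pi(2), ... (descending counts)
    alphas = np.empty(d_out)
    alphas[order] = alpha / (d_out - np.arange(d_out))  # alpha / (D_out + 1 - k) for rank k

    q_lower = np.zeros(d_out)
    nz = successes > 0
    q_lower[nz] = stats.beta.ppf(alphas[nz], successes[nz], n2 - successes[nz] + 1)
    abstain = q_lower <= 0.5                            # abstention criterion from § G.1
    return q_lower, abstain
```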
|
}, |
|
{ |
|
"heading": "I.1 THE COLLECTIVE CERTIFICATE", |
|
"text": "Their certificate assumes the input space to be G = {0, 1}N\u00d7D \u00d7{0, 1}N\u00d7N \u2013 the set of undirected attributed graphs with N nodes and D attributes per node. The model is assumed to be a multioutput classifier f : G \u2192 YN that assigns a label from label set Y to each of the nodes. Given an input graph G = (X,A) and a corresponding prediction y = f(G), they want to certify collective robustness to a set of perturbed graphs B \u2286 G. The perturbation model B is characterized by four scalar parameters r+X , r \u2212 X , r + A, r + A \u2208 N0, specifying the number of bits the adversary is allowed to add (0 \u2192 1) and delete (1 \u2192 0) in the attribute and adjacency matrix, respectively. It can also be extended to feature additional constraints (e.g. per-node budgets). We discuss how these can be integrated after showing our main result. A formal definition of the perturbation model can be found in Section B of (Schuchardt et al., 2021).\nThe goal of their work is to certify collective robustness for a set of targeted nodes T \u2286 {1, . . . , N}, i.e. compute a lower bound on\nmin G\u2032\u2208B \u2211 n\u2208T I [fn(G \u2032) = yn] . (123)\nTheir approach to obtaining this lower-bound shares the same high-level idea as ours (see \u00a7 3.2): Combining per-prediction base certificates and leveraging some notion of locality. But while our method uses localized randomized smoothing, i.e. smoothing different outputs with different noni.i.d. smoothing distributions to obtain base certificates that encode locality, their method uses apriori knowledge about the strict locality of the classifier f . A model is strictly local if each of its outputs fn only operates on a well-defined subset of the input data. To encode this strict locality, Schuchardt et al. (2021) associate each output fn with an indicator vector \u03c8(n) and an indicator matrix \u03a8(n) that fulfill\nN\u2211 m=1 D\u2211 d=1 \u03c8(n)m I [ Xm,d \u0338= X \u2032i,j ] + N\u2211 i=1 N\u2211 j=1 \u03a8(n)m I [ Am,d \u0338= A\u2032i,j ] = 0\n=\u21d2 fn(X,A) = fn(X \u2032,A\u2032).\n(124)\nfor any perturbed graph G\u2032 = (X \u2032,A\u2032). Eq. 124 expresses that the prediction of output fn remains unchanged if all inputs in its receptive field remain unchanged. Conversely, it expresses that perturbations outside the receptive field can be ignored. Unlike in our work, Schuchardt et al. (2021) describe their base certificates as sets in adversarial budget space. That is, some certification procedure is applied to each output fn to obtain a set\nK(n) \u2286 [r+X ]\u00d7 [r \u2212 X ]\u00d7 [r + A]\u00d7 [r \u2212 X ] (125)\nwith [k] = {0, . . . , k}. If [ c+X c \u2212 X c + A c \u2212 A ]T \u2208 K(n), then prediction yn is robust to any perturbed input with exactly c+X attribute additions, c \u2212 X attribute deletions, c + A edge additions and c \u2212 A edge deletions. A more detailed explanation can be found in Section 3 of (Schuchardt et al., 2021). Note that the base certificates only depend on the number of perturbations, not their location in the input. Only combining them using the receptive field indicators from Eq. 124 makes it possible to obtain a collective certificate that is better than the na\u0131\u0308ve collective certificate (i.e. counting how many predictions are certifiably robust to the collective threat model). 
The resulting collective certificate is\n$$\min_{b^+, b^-, B^+, B^-} \sum_{n \in \mathbb{T}} \mathbb{I}\left[\left[\left(\psi^{(n)}\right)^T b^+ \;\; \left(\psi^{(n)}\right)^T b^- \;\; \sum_{i,j} \Psi^{(n)}_{i,j} B^+_{i,j} \;\; \sum_{i,j} \Psi^{(n)}_{i,j} B^-_{i,j}\right]^T \in \mathbb{K}^{(n)}\right] \quad (126)$$\n$$\text{s.t.} \quad \sum_{m=1}^{N} b^+_m \leq r^+_X, \quad \sum_{m=1}^{N} b^-_m \leq r^-_X, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^+_{i,j} \leq r^+_A, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^-_{i,j} \leq r^-_A, \quad (127)$$\n$$b^+, b^- \in \mathbb{N}_0^N, \quad B^+, B^- \in \mathbb{N}_0^{N \times N}. \quad (128)$$\nThe variables defined in Eq. 128 model how the adversary allocates their adversarial budget, i.e. how many attributes are perturbed per node and which edges are modified. Eq. 127 ensures that this allocation is compliant with the collective threat model. Finally, in Eq. 126 the indicator vector and matrix $\psi^{(n)}$ and $\Psi^{(n)}$ are used to mask out any allocated perturbation budget that falls outside the receptive field of $f_n$ before evaluating its base certificate.\nTo solve the optimization problem, Schuchardt et al. (2021) replace each of the indicator functions with a binary variable and include additional constraints to ensure that it has value 1 if and only if the indicator function would have value 1. To do so, they define one linear constraint per point separating the set of certifiable budgets $\mathbb{K}^{(n)}$ from its complement in adversarial budget space (the “Pareto front” discussed in Section 3 of (Schuchardt et al., 2021)).\nFrom the above explanation, the main drawbacks of this collective certificate compared to our localized randomized smoothing approach and corresponding collective certificate should be clear. Firstly, if the classifier $f$ is not strictly local, i.e. the receptive field indicators $\psi$ and $\Psi$ have only non-zero entries, then all base certificates are evaluated using the entire collective adversarial budget. The certificate thus degenerates to the naïve collective certificate. Secondly, even if the model is strictly local, each of the outputs may assign varying levels of importance to different parts of its receptive field. Their method is incapable of capturing this additional soft locality. Finally, their means of evaluating the base certificates may involve evaluating a large number of linear constraints. Our method, on the other hand, only requires a single constraint per prediction. Our collective certificate can thus be computed more efficiently." |
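The masking step in Eq. 126 is easy to state programmatically. The following sketch computes the budget that a single output $f_n$ actually “sees” under a given allocation; variable names are illustrative and the membership test against $\mathbb{K}^{(n)}$ is left abstract.

```python
import numpy as np

def effective_budget(psi, Psi, b_plus, b_minus, B_plus, B_minus):
    """Perturbation budget visible to output n under allocation (b+, b-, B+, B-) (sketch).

    psi: (N,) 0/1 receptive-field indicator vector, Psi: (N, N) 0/1 indicator matrix.
    Returns the four masked totals that Eq. 126 compares against the set K^(n).
    """
    c_x_plus = float(psi @ b_plus)            # attribute additions inside the receptive field
    c_x_minus = float(psi @ b_minus)          # attribute deletions inside the receptive field
    c_a_plus = float((Psi * B_plus).sum())    # edge additions inside the receptive field
    c_a_minus = float((Psi * B_minus).sum())  # edge deletions inside the receptive field
    return c_x_plus, c_x_minus, c_a_plus, c_a_minus
```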
|
}, |
|
{ |
|
"heading": "I.2 PROOF OF SUBSUMPTION", |
|
"text": "In the following, we show that any robustness certificate obtained by using the collective certificate of Schuchardt et al. (2021) with sparsity-aware randomized smoothing base certificates can also be obtained by using our proposed collective certificate with an appropriately parameterized localized smoothing distribution. The fundamental idea is that, for randomly smoothed models, completely randomizing all input dimensions outside the receptive field is equivalent to masking out any perturbations outside the receptive field.\nFirst, we derive the certificate of Schuchardt et al. (2021) for predictions obtained via sparsityaware smoothing. Schuchardt et al. (2021) require base certificates that guarantee robustness when[ c+X c \u2212 X c + A c \u2212 A ]T \u2208 K(n), where the c indicate the number of added and deleted attribute and adjacency bits. That is, the certificates must only depend on the number of perturbations, not on their location. To achieve this, all entries of the attribute matrix and all entries of the adjacency matrix, respectively, must share the same distribution. For the attribute matrix, they define scalar distribution parameters p+X , p \u2212 A \u2208 [0, 1]. Given attribute matrix X \u2208 {0, 1}N\u00d7D, they then sample random attribute matrices ZX that are distributed according to sparsity-aware smoothing distribution S ( X,1 \u00b7 p+X ,1 \u00b7 p \u2212 X ) (see \u00a7 F.3.2), i.e.\nPr[(ZX)m,d = 0] = ( 1\u2212 p+X )1\u2212Xm,d \u00b7 (p\u2212X)Xm,d , Pr[(ZX)m,d = 1] = ( p+X )1\u2212Xm,d \u00b7 (1\u2212 p\u2212X)Xm,d .\nGiven input adjacency matrix A, random adjacency matrices ZA are sampled from the distribution S ( A,1 \u00b7 p+A,1 \u00b7 p \u2212 A ) . Applying Corollary F.6 (to the flattened and concatenated attribute and adjacency matrices) shows that smoothed prediction yn = fn(X,A) is robust to the perturbed graph\n(X \u2032,A\u2032) if N\u2211\nm=1 D\u2211 d=1 \u03b3+X \u00b7 I [ Xm,d = 0 \u0338= X \u2032m,d ] + \u03b3\u2212X \u00b7 I [ Xm,d = 1 \u0338= X \u2032m,d ] +\nN\u2211 i=1 N\u2211 i=1 \u03b3+A \u00b7 I [ Ai,j = 0 \u0338= A\u2032i,j ] + \u03b3\u2212A \u00b7 I [ Ai,j = 1 \u0338= A\u2032i,j ] < \u03b7(n)\n(129)\nwith \u03b3+X = ln ( (p\u2212X) 2\n1\u2212p+X +\n(1\u2212p\u2212X) 2\np+X\n) , \u03b3\u2212X = ln ( (1\u2212p+X) 2\np\u2212X +\n(p+X) 2 1\u2212p\u2212X .\n) , \u03b3+A =\nln\n( (p\u2212A) 2\n1\u2212p+A +\n(1\u2212p\u2212A) 2\np+A\n) , \u03b3\u2212A = ln ( (1\u2212p+A) 2\np\u2212A +\n(p+A) 2 1\u2212p\u2212A .\n) and \u03b7(n) = ln ( 1 + 1\n\u03c3(n)2 ( \u00b5(n) \u2212 12 )2) ,\nwhere \u00b5(n) is the mean and \u03c3(n) is the variance of the base classifier\u2019s output distribution, given the input smoothing distribution. Since the indicator functions for each perturbation type in Eq. 129 share the same weights, Eq. 129 can be rewritten as\n\u03b3+Xc + X + \u03b3 \u2212 Xc \u2212 X + \u03b3 + Ac + A + \u03b3 \u2212 Ac \u2212 A \u2264 \u03b7 (n), (130)\nwhere c+X , c \u2212 X , c + A, c \u2212 A are the overall number of added and deleted attribute and adjacency bits, respectively. Eq. 130 matches the notion of base certificates defined by Schuchardt et al. (2021), i.e. it corresponds to a set K(n) in adversarial budget space for which we provably know that prediction yn is certifiably robust if [ c+X c \u2212 X c + A c \u2212 A\n]T \u2208 K(n). When we insert the base certificate Eq. 130 into objective function Eq. 126, the collective certificate of Schuchardt et al. 
(2021) becomes equivalent to\n$$\min_{b^+, b^-, B^+, B^-} \sum_{n \in \mathbb{T}} \mathbb{I}\left[\gamma^+_X \left(\psi^{(n)}\right)^T b^+ + \gamma^-_X \left(\psi^{(n)}\right)^T b^- + \gamma^+_A \sum_{i,j} \Psi^{(n)}_{i,j} B^+_{i,j} + \gamma^-_A \sum_{i,j} \Psi^{(n)}_{i,j} B^-_{i,j} < \eta^{(n)}\right] \quad (131)$$\n$$\text{s.t.} \quad \sum_{m=1}^{N} b^+_m \leq r^+_X, \quad \sum_{m=1}^{N} b^-_m \leq r^-_X, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^+_{i,j} \leq r^+_A, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^-_{i,j} \leq r^-_A, \quad (132)$$\n$$b^+, b^- \in \mathbb{N}_0^N, \quad B^+, B^- \in \mathbb{N}_0^{N \times N}. \quad (133)$$\nNext, we show that obtaining base certificates through localized randomized smoothing with appropriately chosen parameters and using these base certificates within our proposed collective certificate (see Theorem 4.2) will result in the same optimization problem. Instead of using the same smoothing distribution for all outputs, we use different distribution parameters for each one. For the $n$'th output, we sample random attribute matrices from distribution $\mathcal{S}\left(X, \Theta^{+(n)}_X, \Theta^{-(n)}_X\right)$ with $\Theta^{+(n)}_X, \Theta^{-(n)}_X \in [0, 1]^{N \times D}$. Note that, in order to avoid having to index flattened vectors, we overload the definition of sparsity-aware smoothing to allow for matrix-valued parameters. For example, the value $\left(\Theta^{+(n)}_X\right)_{m,d}$ indicates the probability of flipping the value of input attribute $X_{m,d}$ from 0 to 1 and the value $\left(\Theta^{-(n)}_X\right)_{m,d}$ indicates the probability of flipping the value of input attribute $X_{m,d}$ from 1 to 0. We choose the following values for these parameters:\n$$\left(\Theta^{+(n)}_X\right)_{m,d} = \psi^{(n)}_m \cdot p^+_X + \left(1 - \psi^{(n)}_m\right) \cdot 0.5, \quad (134)$$\n$$\left(\Theta^{-(n)}_X\right)_{m,d} = \psi^{(n)}_m \cdot p^-_X + \left(1 - \psi^{(n)}_m\right) \cdot 0.5, \quad (135)$$\nwhere $\psi^{(n)}$ is the receptive field indicator vector defined in Eq. 124 and $p^+_X, p^-_X \in [0, 1]$ are the same flip probabilities we used for the certificate of Schuchardt et al. (2021). Due to this parameterization, attribute bits inside the receptive field are randomized using the same distribution as in the certificate of Schuchardt et al. (2021), while attribute bits outside are set to either 0 or 1 with equal probability. Similarly, we sample random adjacency matrices from distribution $\mathcal{S}\left(A, \Theta^{+(n)}_A, \Theta^{-(n)}_A\right)$ with $\Theta^{+(n)}_A, \Theta^{-(n)}_A \in [0, 1]^{N \times N}$ and\n$$\left(\Theta^{+(n)}_A\right)_{i,j} = \Psi^{(n)}_{i,j} \cdot p^+_A + \left(1 - \Psi^{(n)}_{i,j}\right) \cdot 0.5, \quad (136)$$\n$$\left(\Theta^{-(n)}_A\right)_{i,j} = \Psi^{(n)}_{i,j} \cdot p^-_A + \left(1 - \Psi^{(n)}_{i,j}\right) \cdot 0.5, \quad (137)$$\nwhere $\Psi^{(n)}$ is the receptive field indicator matrix defined in Eq. 124. Note that, since we only alter the distribution of bits outside the receptive field, the smoothed prediction $y_n = f_n(X, A)$ will be the same as the one obtained via the smoothing distribution used by Schuchardt et al. (2021). 
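The parameterization in Eqs. 134 to 137 is straightforward to construct in code. The following sketch (with illustrative names; the scalars are the flip probabilities $p^+_X, p^-_X, p^+_A, p^-_A$ from above) builds the matrix-valued parameters and indicates how sparsity-aware sampling would use them.

```python
import numpy as np

def localized_flip_probabilities(psi, Psi, p_x_plus, p_x_minus, p_a_plus, p_a_minus, D):
    """Matrix-valued flip probabilities for output n, cf. Eqs. 134-137 (sketch).

    psi: (N,) 0/1 receptive-field indicator vector, Psi: (N, N) 0/1 indicator matrix.
    Entries inside the receptive field keep the original flip probabilities;
    entries outside are flipped to 0 or 1 with probability 0.5 each.
    """
    in_field = psi.astype(bool)[:, None] & np.ones((1, D), dtype=bool)
    theta_x_plus = np.where(in_field, p_x_plus, 0.5)
    theta_x_minus = np.where(in_field, p_x_minus, 0.5)
    theta_a_plus = np.where(Psi.astype(bool), p_a_plus, 0.5)
    theta_a_minus = np.where(Psi.astype(bool), p_a_minus, 0.5)
    # Sparsity-aware sampling then flips X[m, d] = 0 to 1 with probability
    # theta_x_plus[m, d] and X[m, d] = 1 to 0 with probability theta_x_minus[m, d]
    # (and analogously for the adjacency matrix).
    return theta_x_plus, theta_x_minus, theta_a_plus, theta_a_minus
```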
Applying Corollary F.6 (to the flattened and concatenated attribute and adjacency matrices) shows that smoothed prediction $y_n = f_n(X, A)$ is robust to the perturbed graph $(X', A')$ if\n$$\sum_{m=1}^{N} \sum_{d=1}^{D} \left(\tau^+_X\right)_{m,d} \cdot \mathbb{I}\left[X_{m,d} = 0 \neq X'_{m,d}\right] + \left(\tau^-_X\right)_{m,d} \cdot \mathbb{I}\left[X_{m,d} = 1 \neq X'_{m,d}\right] + \sum_{i=1}^{N} \sum_{j=1}^{N} \left(\tau^+_A\right)_{i,j} \cdot \mathbb{I}\left[A_{i,j} = 0 \neq A'_{i,j}\right] + \left(\tau^-_A\right)_{i,j} \cdot \mathbb{I}\left[A_{i,j} = 1 \neq A'_{i,j}\right] < \eta^{(n)}. \quad (138)$$\nBecause we only changed the distribution of bits outside the receptive field, the scalar $\eta^{(n)}$, which depends on the output distribution's mean $\mu^{(n)}$ and variance $(\sigma^{(n)})^2$, will be the same as the one obtained via the smoothing scheme used by Schuchardt et al. (2021). Due to Corollary F.6 and the definition of our smoothing distribution parameters in Eqs. (134) to (137), the scalars $\left(\tau^+_X\right)_{m,d}, \left(\tau^-_X\right)_{m,d}, \left(\tau^+_A\right)_{i,j}, \left(\tau^-_A\right)_{i,j}$ have the following values:\n$$\left(\tau^+_X\right)_{m,d} = \psi^{(n)}_m \cdot \gamma^+_X + \left(1 - \psi^{(n)}_m\right) \cdot \ln\left(\frac{(1 - 0.5)^2}{0.5} + \frac{0.5^2}{1 - 0.5}\right), \quad (139)$$\n$$\left(\tau^-_X\right)_{m,d} = \psi^{(n)}_m \cdot \gamma^-_X + \left(1 - \psi^{(n)}_m\right) \cdot \ln\left(\frac{(1 - 0.5)^2}{0.5} + \frac{0.5^2}{1 - 0.5}\right), \quad (140)$$\n$$\left(\tau^+_A\right)_{i,j} = \Psi^{(n)}_{i,j} \cdot \gamma^+_A + \left(1 - \Psi^{(n)}_{i,j}\right) \cdot \ln\left(\frac{(1 - 0.5)^2}{0.5} + \frac{0.5^2}{1 - 0.5}\right), \quad (141)$$\n$$\left(\tau^-_A\right)_{i,j} = \Psi^{(n)}_{i,j} \cdot \gamma^-_A + \left(1 - \Psi^{(n)}_{i,j}\right) \cdot \ln\left(\frac{(1 - 0.5)^2}{0.5} + \frac{0.5^2}{1 - 0.5}\right), \quad (142)$$\nwhere the $\gamma$ are the same weights as those of the base certificate Eq. 129 of Schuchardt et al. (2021). Inserting the above values of $\tau$ into the base certificate Eq. 138 and using the fact that $\ln\left(\frac{(1 - 0.5)^2}{0.5} + \frac{0.5^2}{1 - 0.5}\right) = \ln(1) = 0$ results in\n$$\sum_{m=1}^{N} \sum_{d=1}^{D} \psi^{(n)}_m \cdot \gamma^+_X \cdot \mathbb{I}\left[X_{m,d} = 0 \neq X'_{m,d}\right] + \psi^{(n)}_m \cdot \gamma^-_X \cdot \mathbb{I}\left[X_{m,d} = 1 \neq X'_{m,d}\right] + \sum_{i=1}^{N} \sum_{j=1}^{N} \Psi^{(n)}_{i,j} \cdot \gamma^+_A \cdot \mathbb{I}\left[A_{i,j} = 0 \neq A'_{i,j}\right] + \Psi^{(n)}_{i,j} \cdot \gamma^-_A \cdot \mathbb{I}\left[A_{i,j} = 1 \neq A'_{i,j}\right] < \eta^{(n)}. \quad (143)$$\nWhile our collective certificate derived in § 4 only considers one perturbation type, we have already discussed how to certify robustness to perturbation models with multiple perturbation types in § F.3.2: We use a different budget variable per input dimension and perturbation type. Furthermore, the attribute bits of each node share the same noise level. Therefore, we can use the method discussed in § E.3, i.e. use a single budget variable per node instead of one per node and attribute. Modelling our collective problem in this way, using Eq. 143 as our base certificates and rewriting the first two sums using inner products results in the optimization problem\n$$\min_{b^+, b^-, B^+, B^-} \sum_{n \in \mathbb{T}} \mathbb{I}\left[\gamma^+_X \left(\psi^{(n)}\right)^T b^+ + \gamma^-_X \left(\psi^{(n)}\right)^T b^- + \gamma^+_A \sum_{i,j} \Psi^{(n)}_{i,j} B^+_{i,j} + \gamma^-_A \sum_{i,j} \Psi^{(n)}_{i,j} B^-_{i,j} < \eta^{(n)}\right] \quad (144)$$\n$$\text{s.t.} \quad \sum_{m=1}^{N} b^+_m \leq r^+_X, \quad \sum_{m=1}^{N} b^-_m \leq r^-_X, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^+_{i,j} \leq r^+_A, \quad \sum_{i=1}^{N} \sum_{j=1}^{N} B^-_{i,j} \leq r^-_A, \quad (145)$$\n$$b^+, b^- \in \mathbb{N}_0^N, \quad B^+, B^- \in \mathbb{N}_0^{N \times N}. \quad (146)$$\nThis optimization problem is identical to that of Schuchardt et al. (2021) from Eqs. (131) to (133). 
The only difference is in how these problems would be mapped to a mixed-integer linear program. We would directly model each indicator function in the objective using a single linear constraint. Schuchardt et al. (2021) would use multiple linear constraints, each corresponding to one point in the adversarial budget space.\nTo summarize: For randomly smoothed models, masking out perturbations using a-priori knowledge about a model's strict locality is equivalent to completely randomizing (here: flipping bits with probability 50%) the corresponding parts of the input. While Schuchardt et al. (2021) only derived their certificate for binary data, it is conceivable that their approach could be applied to strictly local models for continuous data. Considering our certificates for Gaussian (Proposition F.1) and uniform (Proposition F.2) smoothing, where the base certificate weights are $\frac{1}{\sigma^2}$ and $\frac{1}{\lambda}$, respectively, it should also be possible to perform the same masking operation as Schuchardt et al. (2021) by using $\sigma \rightarrow \infty$ and $\lambda \rightarrow \infty$. Finally, it should be noted that the certificate by Schuchardt et al. (2021) allows for additional constraints, e.g. on the adversarial budget per node or the number of nodes controlled by the adversary. As all of them can be modelled using linear constraints on the budget variables (see Section C of their paper), they can be just as easily integrated into our mixed-integer linear programming certificate." |
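The masking argument can be checked numerically with the weight formula from Eq. 129: inside the receptive field the weight is positive, while flipping with probability 0.5 (i.e. outside the receptive field) makes it vanish. The flip probabilities in the example below are arbitrary illustrative values.

```python
import numpy as np

def sparsity_aware_weight(p_plus, p_minus):
    """Per-dimension base certificate weight under sparsity-aware smoothing (the gamma
    terms of Eq. 129); used here only to illustrate the masking argument."""
    return np.log(p_minus ** 2 / (1 - p_plus) + (1 - p_minus) ** 2 / p_plus)

# Inside the receptive field: a positive weight, so perturbations there cost budget.
print(sparsity_aware_weight(0.01, 0.6))  # approx. 2.8

# Outside the receptive field: flipping with probability 0.5 makes the weight vanish,
# since ln(0.5^2 / 0.5 + 0.5^2 / 0.5) = ln(1) = 0, i.e. perturbations are masked out.
print(sparsity_aware_weight(0.5, 0.5))   # 0.0
```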
|
} |
|
], |
|
"year": 2022, |
|
"abstractText": "Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). Collective robustness certification is the task of provably bounding the number of robust predictions under this threat model. The only dedicated method that goes beyond certifying each output independently is limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective robustness certificate for all types of models and further show that this approach is beneficial for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. Localized smoothing Paretodominates existing certificates on both image segmentation and node classification tasks, simultaneously offering higher accuracy and stronger guarantees.", |
|
"creator": "LaTeX with hyperref" |
|
}, |
|
"output": [ |
|
[ |
|
"1. The scale of the experiments in this work is rather small. Certifying only 50 images from Pascal and CityScapes might be not enough to draw conclusions on. I would suggest extending the results on larger amount of samples. Also, some fine-grained experimental analysis on instances containing different number of categories could also be insightful.", |
|
"2. The second contribution of this work claims an efficient anisotropic randomized smoothing certificate. However, this problem was previously explored in the literature, as dictated in the related work, by Eiras et.al. Efficiency analysis between different approaches would strengthen this contribution. Further and since the experiments are done on few number of samples, how does the proposed method compare to ANCER from Eiras et.al when combined with other baselines?", |
|
"3. The writing of the paper along with the presented notation need improvements in some parts of this work. For example, small paragraphs discussing both Theorem 4.2 and 5.1 would ease the reading of this work.", |
|
"4. Further, in theorem 4.2, the prediction is referred to as yn while in the third paragraph of section 4 it is y." |
|
], |
|
[ |
|
"1. Some aspects of inference are unclear (see below)." |
|
], |
|
[] |
|
], |
|
"review_num": 3, |
|
"item_num": [ |
|
4, |
|
1, |
|
0 |
|
] |
|
} |