{
"ID": "0YXmOFLb1wQ",
"Title": "MotifExplainer: a Motif-based Graph Neural Network Explainer",
"Keywords": "Graph Neural Networks, Explainer, Motif",
"URL": "https://openreview.net/forum?id=0YXmOFLb1wQ",
"paper_draft_url": "/references/pdf?id=vyhKI62X3E",
"Conferece": "ICLR_2023",
"track": "Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)",
"acceptance": "Reject",
"review_scores": "[['3', '6', '4'], ['3', '5', '3'], ['2', '3', '5'], ['3', '6', '4'], ['3', '5', '3']]",
"input": {
"source": "CRF",
"title": "MotifExplainer: a Motif-based Graph Neural Network Explainer",
"authors": [],
"emails": [],
"sections": [
{
"heading": "1 INTRODUCTION",
"text": "Graph neural networks (GNNs) have shown capability in solving various challenging tasks in graph fields, such as node classification, graph classification, and link prediction. Although many GNNs models (Kipf & Welling, 2016; Gao et al., 2018; Xu et al., 2018; Gao & Ji, 2019; Liu et al., 2020) have achieved state-of-the-art performances in various tasks, they are still considered black boxes and lack sufficient knowledge to explain them. Inadequate interpretation of GNN decisions severely hinders the applicability of these models in critical decision-making contexts where both predictive performance and interpretability are critical. A good explainer allows us to debate GNN decisions and shows where algorithmic decisions may be biased or discriminated against. In addition, we can apply precise explanations to other scientific research like fragment generation. A fragment library is a key component in drug discovery, and accurate explanations may help its generation.\nSeveral methods have been proposed to explain GNNs, divided into instance-level explainers and model-level explainers. Most existing instance-level explainers such as GNNExplainer (Ying et al., 2019), PGExplainer (Luo et al., 2020), Gem (Lin et al., 2021), and ReFine (Wang et al., 2021) produce an explanation to every graph instance. These methods explain pre-trained GNNs by identifying important edges or nodes but fail to consider substructures, which are more important for graph data. The only method that considers subgraphs is SubgraphX (Yuan et al., 2021), which searches all possible subgraphs and identifies the most significant one. However, the subgraphs identified may not be recurrent or statistically important, which raises an issue on the application of the produced explanations. For example, fragment-based drug discovery (FBDD)(Erlanson et al., 2004) has been proven to be powerful for developing potent small-molecule compounds. FBDD is based on fragment libraries, containing fragments or motifs identified as relevant to the target property by domain experts. Using a motif-based GNN explainer, we can directly identify relevant fragments or motifs that are ready to be used when generating drug-like lead compounds in FBDD.\nIn addition, searching and scoring all possible subgraphs is time-consuming and inefficient. We claim that using motifs, recurrent and statistically important subgraphs, to explain GNNs can provide a more intuitive explanation than methods based on nodes, edges, or subgraphs.\nThis work proposes a novel GNN explanation method named MotifExplainer, which can identify significant motifs to explain an instance graph. In particular, our method first extracts motifs from a given graph using domain-specific motif extraction rules based on domain knowledge. Then, motif embeddings of extracted motifs are generated by feeding motifs into the target GNN model. After that, an attention model is employed to select relevant motifs based on attention weights. These selected motifs are used as an explanation for the target GNN model on the instance graph. To our knowledge, the proposed method represents the first attempt to apply the attention mechanism to explain the GNN from the motif-level perspective. We evaluate our method using both qualitative and quantitative experiments. The experiments show that our MotifExplainer can generate a better explanation than previous GNN explainers. 
In addition, the efficiency studies demonstrate the efficiency advantage of our methods in terms of a much shorter training and inference time."
},
{
"heading": "2 PROBLEM FORMULATION",
"text": "This section formulates the problem of explanations on graph neural networks. Let Gi = {V,E} \u2208 G = {G1, G2, ..., Gi, ..., GN} donate a graph where V = {v1, v2, ..., vi, ...vn} is the node set of the graph and E is the edge set. Gi is associated with a d-dimensional set of node features X = {x1,x2, ...,xi, ...,xn}, where xi \u2208 Rd is the feature vector of node vi. Without loss of generality, we consider the problem of explaining a GNN-based downstream classification task. For a node classification task, we associate each node vi of a graph G with a label yi, where yi \u2208 Y = {l1, ..., lc} and c is the number of classes. For a graph classification task, each graph Gi is assigned a corresponding label."
},
{
"heading": "2.1 BACKGROUND ON GRAPH NEURAL NETWORKS",
"text": "Most Graph Neural Networks (GNNs) follow a neighborhood aggregation learning scheme. In a layer \u2113, GNNs contain three steps. First, a GNN first calculates the messages that will be transferred between every node pair. A message for a node pair (vi, vj) can be represented by a function \u03b8(\u00b7) : b\u2113ij = \u03b8(x \u2113\u22121 i ,x \u2113\u22121 j , eij), where eij is the edge feature vector, x \u2113\u22121 i and x \u2113\u22121 j are the node features of vi and vj at the previous layer, respectively. Second, for each node vi, GNN aggregates all messages from its neighborhood Ni using an aggregation function \u03d5(\u00b7) : B\u2113i = \u03d5 ( {b\u2113ij |vj \u2208 Ni} ) . Finally, the GNN combine the aggregated message B\u2113i with node vi\u2019s feature representation from previous layer x\u2113\u22121i , and use a non-linear activation function to obtain the representation for node vi at layer l : x\u2113i = f(x \u2113\u22121 i ,B \u2113 i ). Formally, a \u2113-th GNN layer can be represented by\nx\u2113i = f ( x\u2113\u22121i , \u03d5 ({ \u03b8 ( xl\u22121i ,x l\u22121 j , eij )} | vj \u2208 Ni} )) ."
},
{
"heading": "2.2 GRAPH NEURAL NETWORK EXPLANATIONS",
"text": "In a GNN explanation task, we are given a pre-trained GNN model, which can be represented by \u03a8(\u00b7) and its corresponding dataset D. The task is to obtain an explanation model \u03a6(\u00b7) that can provide a fast and accurate explanation for the given GNN model. Most existing GNN explanation approaches can be categorized into two branches: instance-level methods and model-level methods. Instance-level methods can provide an explanation for each input graph, while model-level methods are input-independent and analyze graph patterns without input data. Following previous works (Luo et al., 2020; Yuan et al., 2021; Lin et al., 2021; Wang et al., 2021; Bajaj et al., 2021), we focus on instance-level methods with explanations using graph sub-structures. Also, our approach is modelagnostic. In particular, given an input graph, our explanation model can generate a subgraph that is the most important to the outcomes of a pre-trained GNN on any downstream graph-related task, such as graph classification tasks."
},
{
"heading": "3 MOTIF-BASED GRAPH NEURAL NETWORK EXPLAINER",
"text": "Most existing GNN explainers (Ying et al., 2019; Luo et al., 2020) identify the most important nodes or edges. SubgraphX (Yuan et al., 2021) is the first work that proposed a method to explain GNN models by generating the most significant subgraph for an input graph. However, the subgraphs\nidentified by SubgraphX may not be recurrent or statistically important. This section proposes a novel GNN explanation method, named MotifExplainer, to explain GNN models based on motifs."
},
{
"heading": "3.1 FROM SUBGRAPH TO MOTIF EXPLANATION",
"text": "Unlike explanation on models for text and image tasks, a graph has non-grid topology structure information, which needs to be considered in an explanation model. Given an input graph and a trained GNN model, most existing GNN explainers such as GNNExplainer (Ying et al., 2019) and PGExplainer (Luo et al., 2020) identify important edges and construct a subgraph containing all those edges as the explanation of the input graph. However, these models ignore the interactions between edges or nodes and implicitly measure the essence of substructures. To address this limitation, SubgraphX (Yuan et al., 2021) proposed to employ subgraphs for GNN explanation. It explicitly evaluates subgraphs and considers the interaction between different substructures. However, it does not use domain knowledge like motif information when generating the subgraphs.\nA motif can be regarded as a simple subgraph of a complex graph, which repeatedly appears in graphs and is highly related to the function of the graph. Motifs have been extensively studied in many fields, like biochemistry, ecology, neurobiology, and engineering (Milo et al., 2002; ShenOrr et al., 2002; Alon, 2007; 2019) and are proved to be important. A subgraph identified without considering domain knowledge can be ineffective for downstream tasks like fragment library generation in FBDD. Thus, it is desirable to introduce statistically important motif information to a more human-understandable GNN explanation. In addition, subgraph-based explainers like SubgraphX need to handle a large searching space, which leads to efficiency issues when generating explanations for dense or large scale graphs. In contrast, the number of the extracted motifs can be constrained by well-designed motif extraction rules, which means that using motifs as explanations can significantly reduce the search space. Another limitation of SubgraphX is that it needs to pre-determine a maximum number of nodes for its searching space. As the number of nodes in graphs varies greatly, it is hard to set a proper number for searching subgraphs. A large number will tremendously increase the computational resources, while a small number can limit the power of the explainer. To address the limitations of subgraph-based explainers, we propose a novel method that explicitly select important motifs as an explanation for a given graph. Compared to explainers based on subgraphs, our method generates explanations with motifs, which are statistically important and more human-understandable."
},
{
"heading": "3.2 MOTIF EXTRACTION",
"text": "This section introduces domain-specific motif extraction rules.\nAlgorithm 1 MotifExplainer for graph classification tasks Input: a set of graphs G, labels for graphs Y = {y1, ..., yi, ..., yn}, a pre-trained GNN \u03a8(\u00b7), a pre-trained classifier \u03be(\u00b7), motif extraction rule R Initialization: initial a trainable weight matrix W for graph Gi in G do\nGraph embedding j = \u03a8(Gi) Create motif list M = {m1, ...,mj , ...,mt} based on extraction rule R Generate motif embedding for each motif mj = \u03a8(mj) Obtain an output score for each motif sj = mj \u00b7W \u00b7 h Train an attention weight for each motif \u03b1j =\nexp(sj)\u2211t k=1 exp(sk)\nAcquire an alternative graph embedding h\u2032 = \u2211t\nk=1 \u03b1kmk Output a prediction for the alternative graph embedding y\u0302i = \u03be(h\u2032) Calculate loss based on yi and y\u0302i : loss = f(y, y\u2032) Update weight W .\nend for\nDomain knowledge. When working with data from different domains, motifs are extracted based on specific domain knowledge. For example, in biological networks, feed-forward loop, bifan, singleinput, and multi-input motifs are popular motifs, which have shown to have different properties and functions (Alon, 2007; Mangan & Alon, 2003; Gorochowski et al., 2018). For graphs or networks in the engineering domain, the three-node feedback loop (Leite & Wang, 2010) and four-node feedback loop motifs (Piraveenan et al., 2013) are important in addition to the feed-forward loop and bifan motifs. Motifs have also been shown to be important in computational Chemistry (Yu & Gao, 2022). The structures of these motifs are illustrated in Appendix C.\nExtraction methods. For molecule datasets, we can use sophisticated decomposition methods like RECAP (Lewell et al., 1998) and BRICS (Degen et al., 2008) algorithms to extract motifs. For other datasets that do not have mature extraction methods like biological networks and social networks, inspired by related works on graph feature representation learning (Yu & Gao, 2022; Bouritsas et al., 2022), we propose a general extraction method in Appendix B that only considers cycles and edges as motifs, which can cover most popular network motifs. Our methods can be easily applied to other domains by changing the motif extraction rules accordingly.\nComputational graph. We define the computational graph of a given graph based on different tasks. The computational graph includes all nodes and edges contributing to the prediction. Since most GNNs follow a neighborhood-aggregation scheme, the computational graph usually depends on the architecture of GNNs, such as the number of layers. In graph classification tasks, all nodes and edges contribute to the final prediction. Thus, a graph itself is its computational graph in graph classification tasks. For node classification tasks, a target node\u2019s computational graph is the L-hop subgraph centered on the target node, where L is the number of GNN layers. Here, we only consider motifs in the computational graph since those outside it are irrelevant to the predictions.\nMotif extraction. Given a graph G, we extract all motifs based on the motif extraction method. If a motif has been extracted from the graph, it is added to a motif list M. After searching the whole graph, there may be edges not in any motif. We regard each of them as a one-edge motif and add them to the motif list to retain the integrity of the graph information. At last, we can obtain the motif list M = [m1,m2, . . . ,mt] in G."
},
{
"heading": "3.3 MOTIF EMBEDDING",
"text": "After extracting motifs M from a given graph, we encode the feature representations for each motif. Given a pre-trained GNN model, we split it into two parts: a feature extractor \u03a8(\u00b7) and a classifier \u03be(\u00b7). The feature extractor \u03a8(\u00b7) generates an embedding for the prediction target. In particular, \u03a8(\u00b7) outputs graph embeddings in graph classification tasks, and outputs node embeddings in node classification tasks. The motif embedding is obtained in a graph classification task by feeding all motif node embeddings into a readout function. While in a node classification task, motif embedding encodes the influence of the motif on the node embedding of the target node. Thus, we feed the target node k and a motif mj \u2208 M as a subgraph into the GNN feature extractor \u03a8(\u00b7) and use the\nresulting target node embedding of k as the embedding of the motif. To ensure the connectivity of the subgraph, we keep edges from the target node to the motif and mask features of irrelevant nodes."
},
{
"heading": "3.4 GNN EXPLANATION FOR GRAPH CLASSIFICATION TASKS",
"text": "This section introduces how to generate an explanation for a pre-trained GNN model in a graph classification task. We split the pre-trained GNN model into a feature extractor \u03a8(\u00b7) and a classifier \u03be(\u00b7). Given a graph G, its original graph embedding h is computed as h = \u03a8(G). The prediction y is computed by y = \u03be(h).\nBased on the given graph, our method extracts a motif list from it and generates motif embedding M = [m1,m2, . . . ,mt] using the pre-trained feature extractor \u03a8(\u00b7). Since the original graph embedding is directly related to the predictions, we identify the most important motifs by investigating relationships between the original graph embedding and motif embeddings. To this end, we employ an attention layer, which uses the original graph embedding h = \u03a8(G) as query and motif embedding M as keys and values. The output of the attention layer is considered as a new graph embedding h\u2032. We interpret the attentions scores as the strengths of relationships between the prediction and motifs. Thus, highly relevant motifs will contribute more to the new graph embedding. By feeding the new graph embedding h\u2032 into the pre-trained graph classifier \u03be(\u00b7), a new prediction y\u2032 = \u03be(h\u2032) is obtained. The loss based on y and y\u2032 evaluates the contribution of selected motifs to the final prediction, which trains the attention layer such that important motifs are selected to produce similar predictions to the original graph embedding. Formally, this explanation process can be represented as\nh = \u03a8(G), y = \u03be(h), (1) M = [m1,m2, . . . ,mt] = MotifExtractor(G), (2)\nM = [m1,m2, . . . ,mt] = [\u03a8(mi)] t i=1, (3) h\u2032 = Attn(h,M ,M), (4)\ny\u2032 = \u03be(h\u2032), (5)\nloss = f(y, y\u2032), (6)\nwhere Attn is an attention layer and f is a loss function. After training, we use the attention scores to identify important motifs. To our knowledge, our work first attempts to use the attention mechanism for GNN explanation.\nDuring testing, we use a threshold \u03c3/t to select important motifs, where \u03c3 is a hyper-parameter and t is the number of motifs extracted. The explanation includes the motifs whose attention scores are larger than the threshold. Algorithm 1 describes our GNN explanation method on graph classification tasks. In addition, we provide an illustration of the proposed MotifExplainer in Figure 1."
},
{
"heading": "3.5 GNN EXPLANATION FOR NODE CLASSIFICATION TASKS",
"text": "This section introduces how to generate an explanation for a node classification task. Given a graph G and a target node vi, we first construct a computational graph for vi, which is an L-hop subgraph as described in Section 3.2. Then we extract motifs from the computational graph and generate motif embedding for each motif using the feature extractor \u03a8(\u00b7). After that, the proposed MotifExplainer employs an attention layer to identify important motifs. The attention layer for node classification tasks is similar to the one for graph classification tasks, except that the query is the embedding of the target node. A node embedding is generated by feeding the whole graph into the feature extractor \u03a8(\u00b7). The target node\u2019s output feature vector hi is used as the query vector in the attention layer, which outputs the new node embedding h\u2032i. Similarly, the new prediction y\n\u2032 = \u03be(h\u2032i) is obtained by feeding h\u2032i into the pre-trained classifier. We use a threshold \u03c3/t during testing to identify important motifs as an explanation. Algorithm 2 in the appendix describes the details of the MotifExplainer on node classification tasks. Formally, the different parts from Section 3.4 are represented as\nh = \u03a8(G)i, y = \u03be(h), (7) Gc = ComputationGraph(G, vi), (8) M = [m1,m2, . . . ,mt] = MotifExtractor(Gc). (9)\nThen, Eq. (3 - 6) are applied to compute loss for training the attention layer."
},
{
"heading": "4 EXPERIMENTAL STUDIES",
"text": "We conduct experiments to evaluate the proposed methods on both real-world and synthetic datasets."
},
{
"heading": "4.1 DATASETS AND EXPERIMENTAL SETTINGS",
"text": "We evaluate the proposed methods using different downstream tasks on seven datasets to demonstrate the effectiveness of our model. The statistic and properties of seven datasets are summarized in Appendix D. The details are introduced below.\nDatasets. MUTAG (Kazius et al., 2005; Riesen & Bunke, 2008) is a chemical compound dataset containing 4,337 molecule graphs. Each graph can be categorized into mutagen and non-mutagen.\nPTC (Kriege & Mutzel, 2012) is a collection of 344 chemical compounds reporting the carcinogenicity for rats.\nNCI1 (Wale et al., 2008) is a balanced subset of datasets of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines respectively.\nPROTEINS (Dobson & Doig, 2003) is a protein dataset classified as enzymatic or non-enzymatic.\nIMDB-BINARY (Yanardag & Vishwanathan, 2015) is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB.\nBA-2Motifs (Luo et al., 2020) is a synthetic graph classification dataset. It contains 800 graphs, and each graph is generated from a Barabasi-Albert (BA) base graph. Half graphs are connected with house-like motifs, while the rest are assigned with five-node cycle motifs. The labels of graphs are assigned based on the associated motifs.\nBA-Shapes (Ying et al., 2019) is a synthetic node classification dataset. It contains a single base BA graph with 300 nodes. Some nodes are randomly attached with 80 five-node house structure motifs. Each node label is assigned based on its position and structure. In particular, labels of nodes in the base BA graph are assigned 0. Nodes located at the top/middle/bottom of the house-like network motifs are labeled with 1, 2, and 3, respectively. Node features are not available in the dataset.\nExperimental settings. Our experiments adopt a simple GNN model and focus on explanation results. Similar observations can be obtained when using other popular GNN models like GAT and GIN. For the pre-trained GNN, we use a 3-layer GCN as a feature extractor and a 2-layer MLP as a classifier on all datasets. The GCN model is pre-trained to achieve reasonable performances on all datasets. We use Adam optimizer for training. We set the learning rate to 0.01. We compare our MotifExplainer model with several state-of-the-art baselines: GNNExplainer, SubgraphX, PGExplainer, and ReFine. We also build a model that uses the same attention layer as MotifExplainer but assigns weights to edges instead of motifs. Noted that all methods are compared in a fair setting. During prediction, we use \u03c3 = 1 to control the size of selected motifs. Unlike other methods, we do not explicitly set a fixed number for selected edges as explanations, enabling maximum flexibility and capability when selecting important motifs.\nEvaluation metrics. A fundamental criterion for explanations is that they must be humanexplainable, which means the generated explanations should be easy to understand. Taking the BA-2Motif as an example, a graph label is determined by the house structure attached to a base BA graph. A good explanation of GNNs on this dataset should highlight the house structure. To this end, we perform qualitative analysis to evaluate the proposed method.\nEven though qualitative analysis/visualizations can provide insight into whether an explanation is reasonable for human beings, this assessment is not entirely dependable due to the lack of ground truth in real-world datasets. 
Thus, we employ three quantitative evaluation metrics to evaluate our explanation methods. We use the Accuracy metric to evaluate models for synthesis datasets with ground truth. Here, we use the same settings as GNNExplainer and PGExplainer. In particular, we regard edges inside ground truth motifs as positive edges and edges outside motifs as negative.\nAn explainer aims to answer a question that when a trained GNN predicts an input, which part of the input makes the greatest contribution. To this end, the explanation selected by an explainer must be unique and discriminative. Intuitively, the explanation obtained by the explainer should obtain similar prediction results as the original graph. Also, the explanation is in a reasonable size.\nThus, following (Yuan et al., 2020b), we use Fidelity and Sparsity metrics to evaluate the proposed method on real-world datasets. In particular, the Fidelity metric studies the prediction change by keeping important input features and removing unimportant features. The Sparsity metric measures the proportion of edges selected by explanation methods. Formally, they are computed by\nFidelity = 1\nN N\u2211 i=1 (\u03a8(Gi)yi \u2212\u03a8(G pi i )yi) , (10)\nSparsity = 1\nN N\u2211 i=1 ( 1\u2212 |pi| |Gi| ) , (11)\nwhere pi is an explanation for an input graph Gi. |pi| and |Gi| donate the number of edges in the explanation, and the number in the original input graph, respectively."
},
{
"heading": "4.2 QUALITATIVE RESULTS",
"text": "In this section, we visually compare the explanations of our model with those of state-of-the-art explainers. Some results are illustrated in Figure 2, with generated explanations highlighted. We report the visualization results of the MUTAG dataset in the first row. Unlike BA-Shape and BA2Motif, MUTAG is a real-world dataset and does not have ground truth for explanations. We need to leverage domain knowledge to analyze the generated explanations. In particular, carbon rings with chemical groups NH2 or NO2 tend to be mutagenic. As mentioned by PGExplainer, carbon rings appear in both mutagen and non-mutagenic graphs. Thus, the chemical groups NH2 and NO2 are more important and considered as the ground truth for explanations. From the results, our MotifExplainer can accurately identify NH2 and NO2 in a graph while other models can not. PGExplainer identifies some extra unimportant edges. SubgraphX produces subgraphs as explanations that are neither motifs nor human-understandable. Our proposed GNN explainer can consider motif information and generate better explanations on molecular graphs. Note that neither NH2 nor NO2 is explicitly included in our motif extraction rules. The explanation is generated by identifying bonds in these groups, which means that our method can be used to find motifs.\nWe show the visualization results of the BA-Shape dataset in the second row of Figure 2. In this dataset, a node\u2019s label depends on its location as described in Section 4.1. Thus, an explanation generated by an explainer for a target node should be the motif. We consider the selected edges on the motif to be positive and those not on the motif negative. From the results, our MotifExplainer can accurately mark the motif as the explanation. However, other models select a part of the motif or include extra non-motif edges. The third row of Figure 2 shows the visualization results on the BA-2Motif dataset, which is also a synthetic dataset. From Section 4.1, a graph\u2019s label is determined by the motif attached to the base graph: the five nodes house-like motif or the five nodes cycle motif. Thus, we treat all edges in these two motifs to be positive and the rest of edges to be negative. From\nthe results, we can see that our MotifExplainer can precisely identify both the house-like motif and the cycle motif in a graph without including non-motif edges. While other models select edges far from the motif. More qualitative analysis results are reported in Appendix F."
},
{
"heading": "4.3 QUANTITATIVE RESULTS",
"text": "This section shows evaluations of our methods using seven datasets. We report the Fidelity score under the same Sparsity value on five real-world dataset and accuracy on the other two synthetic datasets. More Fidelity scores on real-world dataset are shown in Appendix E. The results are summarized in Table 1. From the results, our MotifExplainer consistently outperforms previous state-of-the-art models on all seven datasets under Sparsity value equals to 0.7 . Note that our method achieves 100% accuracy on two synthetic datasets and at least 2.6% to 19.0% improvements on the real-world datasets, demonstrating our model\u2019s effectiveness.\nOur model can maintain good performances when Sparsity is high. In particular, in the case of high Sparsity, the explanation contains a very limited number of edges, which shows that our model can identify the most important structures for GNN explanations. Using motifs as basic explanation units, our model can preserve the characteristics of motifs and the connectivity of edges.\n4.4 THRESHOLD STUDIES\nOur MotifExplainer uses a threshold \u03c3 to select important motifs as explanations during inference. Since \u03c3 is an important hyper-parameter, we conduct experiments to study its impact using Sparsity and Fidelity metrics. The performances of MotifExplainer using different \u03c3 values on the MUTAG dataset are summarized in Table 2. Here, we vary the \u03c3 value from 1.0 to 2.0 to cover a reasonable range. We can observe that when the threshold is larger, the Sparsity of explanations increases, and the performances in terms of Fidelity gradually decrease. This is expected since fewer motifs selected will be selected when the threshold becomes larger. Thus, the size of explanations becomes smaller, and the Sparsity value becomes larger. Note that even when the Sparsity reaches a high value of 0.8, our model can still perform well. This shows that our model can accurately select the most important motifs as explanations, demonstrating the advantage of using motifs as GNN explanations.\n4.5 ABLATION STUDIES\nOur MotifExplainer employs an attention model to score and select the most relevant motifs to explain a given graph. To demonstrate the effectiveness of using motifs as basic explanation units, we build a new model named AttnExplainer that uses\nedges as basic explanation units and apply an attention model to select relevant edges as explanations. We compare our MotifExplainer with AttnExplainer on three datasets: BA-Shape, BA-2Motif,\nMUTAG. The results are summarized in Table 3, appendix E. From the results, our model can consistently outperform AttnExplainer. This is because motifs can better obtain structural information than edges by using motif as the basic unit for explanation.\n4.6 EFFICIENCY STUDIES"
},
{
"heading": "5 RELATED WORK",
"text": "The research on GNN explainability is mainly divided into two categories: instance-level explanation and model-level explanation. Instance-level GNN explanation can also be divided into four directions, namely gradients/features-based methods, surrogate methods, decomposition methods, and perturbation-based methods. Gradients/features-based methods use gradients or hidden feature map values as the approximations of an importance score of an input. Recently, several methods have been employed to explain GNNs like SA (Baldassarre & Azizpour, 2019), CAM (Pope et al., 2019), Grad-CAM (Pope et al., 2019). The main difference between these methods is the process of gradient back-propagation and how different hidden feature maps are combined. The basic idea of surrogate methods is using a simple and explainable surrogate model to approximate the predictions of GNNs. Several methods have been introduced recently, such as GraphLime (Huang et al., 2020) and PGM-Explainer (Vu & Thai, 2020). Decomposition methods like GNN-LRP (Schnake et al., 2020) and DEGREE (Feng et al., 2021) measure the importance of input features by decomposing original predictions into several terms. The last method is the perturbation-based method. Along this direction, GNNExplainer (Ying et al., 2019) learns soft masks for edges and node features to generate an explanation via mask optimization. PGExplainer (Luo et al., 2020) learns approximated discrete masks for edges by using domain knowledge. SubgraphX (Yuan et al., 2021) employs Monte Carlo Tree Search (MCTS) algorithm to search possible subgraphs and uses Shapley value to measure the importance of subgraphs and choose a subgraph as the explanation. ReFine (Wang et al., 2021) proposes an idea of pre-training and fine-tuning to develop an explainer and generate multi-grained explanations. Model-level explanation methods aim to find the general insights and high-level information. So far, there is only one model-level explainer: XGNN (Yuan et al., 2020a). XGNN trains a generator and generates a graph as explanation to maximize a target prediction."
},
{
"heading": "6 CONCLUSION",
"text": "This work proposes a novel model-agnostic motif-based GNN explainer to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs. Our proposed motif-based methods can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs. Given a graph, We first extract motifs from a graph using motif extraction rules based on domain knowledge. Then, motif embedding for each motif is generated using the feature extractor from a pre-trained GNN. After that, we train an attention model to select the most relevant motifs based on attention weights and use these selected motifs as an explanation for the input graph. Experimental results show that our MotifExplainer can significantly improve explanation performances from quantitative and qualitative aspects."
},
{
"heading": "A PSEUDOCODE FOR EXPLAINING NODE CLASSIFICATION TASKS",
"text": "Algorithm 2 MotifExplainer for node classification tasks Input: a graph G, labels for all nodes in the graph Y = {y1, ..., yi, ..., yn}, a pre-trained GNN \u03a8(\u00b7), a pre-trained classifier \u03be(\u00b7), motif extraction rule R Initialization: initial a trainable weight matrix W , calculate all node embedding H = {h1, ..., hi, ..., hn} for node vi in the graph G do\nOriginal node embedding hi \u2208 H Create motif list M = {m1, ...,mj , ...,mt} based on extraction rule R For each motif mj , we keep the motif, the target node vi and the edges between them. Then we put this subgraph into the pre-trained GNN \u03a8(\u00b7) and get a new node embedding of target node vi as the motif embedding mj Obtain an output score for each motif sj = mj \u00b7W \u00b7 hi Train an attention weight for each motif \u03b1j =\nexp(sj)\u2211t k=1 exp(sk)\nAcquire an alternative graph embedding h\u2032i = \u2211t\nk=1 \u03b1kmk Output a prediction for the alternative graph embedding y\u0302i = \u03be(h\u2032i) Calculate loss based on y\u0302i and yi Update weight W using back-propagation.\nend for"
},
{
"heading": "B A GENERAL MOTIFS EXTRACTION RULE",
"text": "According to section 3.2, we can easily design motif extraction rules based on some domain knowledge. However, if we don\u2019t have relevant domain knowledge or the dataset type is unknown, we need a general way to obtain the motifs. Inspired by graph feature representation learning works on motifs (Bouritsas et al., 2022; Yu & Gao, 2022), we propose a general method to extract the simplest motifs: cycles and edges. In particular, given a graph, we first extract all cycles out of it. Then, all edges that are not inside the cycles are considered motifs. We consider combining cycles with more than two coincident nodes into a motif. Although this method cannot extract complex motifs like single-input and multi-input motifs, it can generate the most important motifs, such as ring structures in biochemical molecules and the feed-forward loop motif. By adopting this simple but general motif extraction method, we can explain a GNN model without any domain knowledge, making our explanation model more applicable. Need to be noted that, even though the motif extraction rule cannot extract single-input and multi-input motifs, these motifs can be implicitly identified by our attention layer. Experiments in the table 1 demonstrate it."
},
{
"heading": "C COMMON MOTIFS IN BIOLOGICAL AND ENGINEERING NETWORKS",
"text": "In this section, Figure 3 show some common motifs in biological and engineering networks introduced in section 3.2."
},
{
"heading": "D DATASETS AND GNN MODELS",
"text": "D.1 STATISTIC AND PROPERTIES OF DATASETS\nD.2 SETTINGS OF GNN MODELS\nReal World Datasets We employ a 3-layer GCNs to train all five real world datasets. The input feature dimension is 7 and the output dimensions of different GCN layers are set to 64, 64, 64, respectively. We employ mean-pooling as the readout function and ReLU as the activation function. The model is trained for 170 epochs with a learning rate of 0.01. We study the explanations for the graphs with correct predictions.\nBA-Shape We use a 3-layer GCNs and an MLP as a classifier to train the BA-Shape dataset. The hidden dimensions of different GCN layers are set to 64, 64, 64, respectively. We employ ReLU as the activation function. The model is trained for 300 epochs with a learning rate of 0.01. The validation accuracy of the pre-trained model can achieve 100%. We study the explanations for the whole dataset.\nBA-2Motif We use a 3-layer GCNs and an MLP as a classifier to train the BA-2Motif dataset. The hidden dimensions of different GCN layers are set to 64, 64, 64, respectively. We employ mean-pooling as the readout function and ReLU as the activation function. The model is trained for 300 epochs with a learning rate of 0.01. The validation accuracy of the pre-trained model can be 100%, which means the model can perfectly generate the distribution of the dataset. We study the explanations for the whole dataset.\nD.3 EXPERIMENT ENVIRONMENT SETTINGS\nWe conduct experiments using one Nvidia 2080Ti GPU on an AMD Ryzen 7 3800X 8-Core CPU. Our implementation environment is based on Python 3.9.7, Pytorch 1.10.1, CUDA 10.2, and Pytorch-geometric 2.0.3."
},
{
"heading": "E MORE QUANTITATIVE RESULTS",
"text": "F VISUALIZATION OF EXPLANATION\nIn this section, we report more visualization of explanation on MUTAG dataset in Figure 4. MUTAG is a real-world dataset, and it is more complex than synthetic datasets. Thus, visualization of MUTAG can better represent how different explainer works."
}
],
"year": 2022,
"abstractText": "We consider the explanation problem of Graph Neural Networks (GNNs). Most existing GNN explanation methods identify the most important edges or nodes but fail to consider substructures, which are more important for graph data. One method considering subgraphs tries to search all possible subgraphs and identifies the most significant ones. However, the subgraphs identified may not be recurrent or statistically important for interpretation. This work proposes a novel method, named MotifExplainer, to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs. Our proposed motif-based methods can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs. Given an instance graph and a pre-trained GNN model, our method first extracts motifs in the graph using domain-specific motif extraction rules. Then, a motif embedding is encoded by feeding motifs into the pre-trained GNN. Finally, we employ an attention-based method to identify the most influential motifs as explanations for the prediction results. The empirical studies on both synthetic and real-world datasets demonstrate the effectiveness of our method.",
"creator": "LaTeX with hyperref"
},
"output": [
[
"1. The method heavily relies on the quality of motif extraction rules, whose complexity should also be discussed",
"2. Some details need to be explained as commented in my summary"
],
[
"1. \"The proposed approach requires domain-specific motifs as its inputs. This makes the approach less generalizable, and its improved accuracy as the result of adding domain-specific knowledge into the algorithm, rather than the result of technical improvements. One important goal of GNN explanation is to find new, meaningful substructures from the graph which play a key role in the prediction, which give novel insights and observations about the given graph. This work does not have such an advantage.\"",
"2. \"The technical contribution is limited. Using attention for GNN explanation is a reasonable approach, but is not novel.\"",
"3. \"This paper presents the approaches for node and graph classification in separate subsections, but they look very similar and can be presented together.\"",
"4. \"Other parts of the proposed approach, such as motif extraction and embedding, also look trivial.\"",
"5. \"Many parts of the proposed approach are not clearly presented, even though they are the core parts of technical contributions. See the questions below.\""
],
[
"1. The main weakness of the paper is in the experimental design, as some of the choices are not well defined.",
"2. First, given the importance of the motif extractor scheme for this work, the paper needs to provide a sufficient description (example visualizations) of how much the extractions based on the rules vary from the ground truth. Specifically, the line \u201cnote that neither NH2 nor NO2 is explicitly included in our motif extraction rules\u201d is useful. However, how similar/different are these motifs to the construction, like a single input example?",
"3. While the included baselines are state-of-the-art methods, the presented framework seems to have some ideological similarities to GraphLIME [1] method. Is there a reason this was not included in the comparisons?",
"4. All the methods (proposed and baselines) have hyperparameters that they can be quite sensitive to. Was sufficient hyperparameter tuning performed to select the best explanation? For example, why K=5 was chosen? If not, is that fair?",
"5. My understanding is that fidelity scores should be low (less difference in prediction change when removing unimportant features). Why are the higher scores in Table 1 (for BA datasets) highlighted in bold?"
],
[
"1. \"For a node classification task, suppose that the target node is 'far away' from the motif and is not directly connected to any node in the motif. How should we keep the connectivity between the target node and the motif?\"",
"2. \"The goal of explainability is to open the black box of deep learning models. More importantly, there was a hot debate in NLP domain on whether we should trust attention as a proxy of explanation. I assume such problem could exist in GNN as well. As such, how should we trust the explanation generated by another black-box model (attention layer here)?\"",
"3. \"Why should we use attention score rather than directly using the contribution of each motif to find important subgraphs (i.e., the loss f(y,y\u2032))?\"",
"4. \"I still think some evaluation (whether quantitative or qualitative) without using those black-box module is essential.\""
],
[
"1. Introducing domain knowledge generated motif would be great but also limited the applicability of this methods.",
"2. It also introduced certain bias in the performance evaluation as other methods do not have the domain knowledge input.",
"3. Even though this paper provided a way to generate motif. But only counting the cycle or edge, does not fit my impression of \"motif\" and do limited the power of the method.",
"4. The ablation study also illustrate the important of the domain knowledge input. For that, I think it significantly hurt the novelty of this paper.",
"5. The method could also introduce some background of the attention mechanism in neural network explanation.",
"6. This model only performed the test on vanilla GNN with three layers. I think more experiment should be conducted.",
"7. For example, different GNN architecture. Different hyperparameters like the number of layers, the width of the GNN.",
"8. I would guess the attention mechanism would have high variance if the architecture is changed."
]
],
"review_num": 5,
"item_num": [
2,
5,
5,
4,
8
]
}