Discriminability of Single-Layer Graph Neural Networks

Now that the semester is over, I’m excited to catch up on posting about some more analytical research I worked on last summer, together with Fernando Gama and Alejandro Ribeiro. This was a fairly theoretical work analyzing the discriminability impact of nonlinearities in a graph neural network (GNN) architecture. The paper was published at IEEE ICASSP 2021 and is available on arXiv.

This work focused on convolutional GNNs, which extend the idea of convolutional filters in images to arbitrary graph signals. In a traditional image CNN, the image can be interpreted as a signal defined over pixels, where each pixel is a node in a grid-like graph with four neighbors. Convolutional filters then perform local aggregations of each node’s neighboring signals. One branch of GNN research involves defining how convolutions should behave over arbitrary, non-grid graphs, where each node has a varying number of neighbors with no notion of “order.”
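
For intuition, here is a minimal NumPy sketch (my own illustration, not code from the paper) that treats a small image as a signal on a 4-neighbor grid graph and performs one local aggregation step; the image size and random signal are arbitrary choices:

```python
import numpy as np

# Treat a small grayscale image as a graph signal on a 4-neighbor grid graph.
H, W = 4, 4
n = H * W
A = np.zeros((n, n))  # adjacency matrix of the grid graph

def idx(i, j):
    return i * W + j

for i in range(H):
    for j in range(W):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # the 4 neighbors
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                A[idx(i, j), idx(ni, nj)] = 1.0

x = np.random.rand(n)   # image flattened into a graph signal (one value per pixel)
x_agg = A @ x           # local aggregation: sum of each pixel's neighboring values
```

On an arbitrary graph the same aggregation still makes sense, even though nodes no longer have exactly four ordered neighbors.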

Our paper uses one particular way of defining a convolution. Namely, we let the convolutional operator be defined using a polynomial in the graph shift operator, where the polynomial coefficients are learnable weights. The graph shift operator is a matrix that respects the edge structure of the underlying graph, such as the adjacency matrix or graph Laplacian.
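
Concretely, a filter of this form maps an input signal x to y = h_0 x + h_1 S x + ... + h_K S^K x, where S is the shift operator and the taps h_k are learned. Here is a minimal NumPy sketch of that operation (variable and function names are my own):

```python
import numpy as np

def graph_convolution(S, x, h):
    """Apply the graph filter y = sum_k h[k] * S^k x.

    S : (n, n) graph shift operator (e.g., adjacency matrix or Laplacian)
    x : (n,)   input graph signal
    h : (K+1,) filter taps (the learnable coefficients)
    """
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x = x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx              # shift once more: S^{k+1} x
    return y

# Example usage with a random symmetric shift operator (illustrative only):
n = 5
S = np.random.rand(n, n); S = (S + S.T) / 2
x = np.random.rand(n)
h = np.array([0.5, 0.3, 0.2])      # K = 2 filter taps
y = graph_convolution(S, x, h)
```

Note that each term S^k x only mixes information from nodes up to k hops away, so the filter is local in the same sense as an image convolution.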

With this definition in place, we consider a single-layer GNN: a bank of convolutional filters followed by a pointwise nonlinearity such as tanh. We analyze this GNN’s ability to discriminate pairs of signals, with and without the nonlinearity. Previous work has shown that, if a graph filter is to remain stable to perturbations of the graph support, its frequency response can only vary significantly in the low-frequency range (where frequency is determined by the eigenvalues of the shift operator). We therefore say that the GNN can discriminate two signals if their difference after processing lies in the span of the low-frequency eigenvectors.
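
As a rough illustration (again my own sketch, not the paper’s code), a single-layer GNN of this kind and the low-frequency projection behind the discriminability criterion might look as follows; the number of retained eigenvectors, the filter taps, and the assumption of a symmetric, Laplacian-like shift operator (small eigenvalues = low frequencies) are all illustrative choices:

```python
import numpy as np

def single_layer_gnn(S, x, H, sigma=np.tanh):
    """One GNN layer: a bank of graph filters (polynomials in S) followed by
    a pointwise nonlinearity.

    S : (n, n)    graph shift operator
    x : (n,)      input graph signal
    H : (F, K+1)  filter taps; row f holds the coefficients of filter f
    """
    F, K1 = H.shape
    Z = np.zeros((x.shape[0], F))
    Skx = x.astype(float)                  # S^0 x
    for k in range(K1):
        Z += np.outer(Skx, H[:, k])        # add h_{f,k} * S^k x to each feature f
        Skx = S @ Skx
    return sigma(Z)

def low_frequency_projection(S, d, num_low=3):
    """Project a (per-feature) difference signal d onto the span of the
    num_low low-frequency eigenvectors of S.  Assumes a symmetric,
    Laplacian-like shift, so small eigenvalues correspond to low frequencies."""
    _, V = np.linalg.eigh(S)               # eigenvalues in ascending order
    V_low = V[:, :num_low]
    return V_low @ (V_low.T @ d)
```

Under this criterion, one would compare the outputs single_layer_gnn(S, x1, H) and single_layer_gnn(S, x2, H) and ask how much of their difference falls in the low-frequency span computed by low_frequency_projection.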

Our main results show that adding nonlinearities can only increase the discriminability of GNNs, characterize the space of signals that cannot be discriminated either with or without the nonlinearity, and demonstrate a practical case where the nonlinear GNN is strictly more discriminative than its linear counterpart. We support the theorems with numerical experiments.

