Convolutional Neural Networks (CNNs) are a powerful deep learning approach that has been widely applied in various fields, e.g., object recognition, image classification, and semantic segmentation. Traditionally, CNNs only deal with data that has a regular Euclidean structure, such as images, videos, and text. In recent years, due to the rising interest in network analysis and prediction, generalizing CNNs to graphs has attracted considerable attention. However, since graphs lie in irregular, non-Euclidean domains, a key challenge is how to extend CNNs so that they can effectively extract useful features from arbitrary graphs.
To address this challenge, a number of works have been devoted to enhancing CNNs by developing filters over graphs. In general, there are two categories of graph filters: (a) spatial graph filters, and (b) spectral graph filters. Spatial graph filters are defined as convolutions directly on graphs, which aggregate over neighbours that are spatially close to a given vertex. In contrast, spectral graph filters are convolutions defined indirectly on graphs, through their spectral representations.
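As a concrete illustration (not part of the talk itself), a minimal NumPy sketch of spectral graph filtering on a toy 4-vertex path graph: a vertex signal is transformed into the graph Fourier basis given by the eigenvectors of the normalized Laplacian, scaled by a filter function of the eigenvalues, and transformed back.

```python
import numpy as np

# Toy 4-vertex path graph (illustrative only): adjacency matrix A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# Graph Fourier basis: eigenvectors U and eigenvalues lam of L.
lam, U = np.linalg.eigh(L)

# A spectral filter acts on the eigenvalues; here an example
# low-pass response g(lam) = 1 / (1 + lam).
g = 1.0 / (1.0 + lam)

# Filter a vertex signal x: forward transform, scale, inverse transform.
x = np.array([1.0, 0.0, 0.0, 0.0])
x_filtered = U @ (g * (U.T @ x))
print(x_filtered)
```

Learning a spectral filter amounts to parameterizing g; the per-layer eigendecomposition shown here is what fast polynomial or rational approximations are designed to avoid.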
In this talk, I will present a novel spectral convolutional neural network (CNN) model on graph-structured data. The model incorporates a robust class of spectral graph filters, called feedback-looped filters, which provide better localization on vertices while still attaining fast convergence and memory requirements that grow linearly in the number of edges in a graph. Theoretically, feedback-looped filters have guaranteed convergence w.r.t. a specified error bound, and can be applied universally to any graph without prior knowledge of its structure. In this model, the propagation rule further diversifies features from the preceding layers to produce strong gradient flows. We have evaluated this model on two benchmark tasks: semi-supervised document classification on citation networks and semi-supervised entity classification on a knowledge graph. The experimental results show that this model considerably outperforms the state-of-the-art methods on both benchmark tasks.
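The abstract does not give the exact form of feedback-looped filters, but one common way a graph filter incorporates a feedback loop is an ARMA-style recursion, where the previous output is fed back through the graph shift operator and combined with the input. The sketch below is a generic illustration of that idea on a toy graph, not the authors' formulation; the coefficients `a` and `b` are arbitrary, and the recursion converges because the feedback term is a contraction.

```python
import numpy as np

# Toy 3-vertex path graph (illustrative only).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian

x = np.array([1.0, -1.0, 0.5])  # input vertex signal
a, b = 0.4, 1.0                 # hypothetical feedback / feedforward coefficients

# Feedback loop: each step re-filters the previous output through L.
# Since the eigenvalues of L lie in [0, 2], |a| < 0.5 guarantees convergence.
y = np.zeros_like(x)
for _ in range(100):
    y = a * (L @ y) + b * x

# The fixed point satisfies (I - a L) y = b x, i.e. a rational
# (not merely polynomial) spectral response b / (1 - a * lam).
```

A rational response of this kind can be sharper than a polynomial filter of the same cost, which is the usual motivation for feedback terms; the cost per iteration is one sparse multiply, hence linear in the number of edges.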
Asiri Wijesinghe is a PhD research student at the Research School of Computer Science. He works on developing deep learning techniques for non-Euclidean domains, such as generalizing deep neural models to graphs. Asiri graduated with first class honours from the School of Computing, University of Colombo, Sri Lanka. Prior to joining ANU, he gained both research and industry experience in Sri Lanka, working as a Senior Data Scientist at Linear Squared and as a Full Stack Developer at Sysco Labs.