Though "deep learning" has become a buzzword, often taken out of context to refer vaguely to artificial intelligence in general, the term originally refers to a machine learning complicated concepts by building them out of simpler ones in a hierarchy of many layers. Artificial neural networks are a popular realization of such deep multi-layer hierarchies, inspired by the signal processing performed in the human brain.
Among the key reasons for the success of deep learning methods are statistical assumptions, such as locality and stationarity, that hold for natural images, video, and speech. Convolutional neural networks (CNNs) exploit these properties by extracting local features that are shared across the signal domain, which greatly reduces the number of parameters with respect to generic deep architectures without sacrificing the capacity to extract informative patterns.
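The parameter savings from weight sharing can be illustrated with a small sketch (the image size and kernel size below are hypothetical choices for illustration): a dense layer mapping a 28x28 image to an output of the same size needs one weight per input-output pair, whereas a convolutional layer slides the same small kernel over every location.

```python
import numpy as np

# Hypothetical 28x28 grayscale image as the input signal.
H = W = 28
n = H * W  # 784 pixels

# A fully connected (dense) layer mapping the image to an equally
# sized output needs one weight per input-output pair.
dense_params = n * n  # 614,656 weights

# A convolutional layer with a single 3x3 kernel shares the same
# 9 weights across every spatial location (translation invariance).
k = 3
conv_params = k * k  # 9 weights

print(dense_params, conv_params)  # 614656 vs 9

# The convolution itself: slide the shared kernel over the image
# (a naive loop implementation, for clarity rather than speed).
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(H, W)
kernel = np.ones((k, k)) / (k * k)  # a simple averaging filter
features = conv2d(image, kernel)
print(features.shape)  # (26, 26)
```

The same 9 weights detect the same local pattern everywhere in the image, which is exactly the shift-invariance assumption that breaks down on the non-Euclidean domains discussed next.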
In recent years, important examples of data residing on non-Euclidean geometric structures have emerged (e.g. graphs in social networks, regulatory networks in genetics, and manifolds/surfaces in computer vision). The non-Euclidean structure of such domains implies that there is no global common system of coordinates, vector space structure, or shift-invariance. As a result, basic operations such as convolution, which are taken for granted in the Euclidean case, are not even well defined on non-Euclidean domains.
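One standard route around the missing shift operator, sketched below on a tiny hypothetical 4-node graph, is spectral: the eigenvectors of the graph Laplacian play the role of a Fourier basis, so "convolution" can be defined as multiplication in that basis. This is a minimal illustration of the idea, not the project's method.

```python
import numpy as np

# A small undirected graph on 4 nodes, given by its adjacency matrix
# (hypothetical example; any symmetric 0/1 matrix with zero diagonal works).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric normalized graph Laplacian: L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# The eigendecomposition of L provides a graph analogue of the Fourier basis.
eigvals, U = np.linalg.eigh(L)

# A spectral "convolution" filters a node signal x by scaling its
# graph-Fourier coefficients with a chosen spectral response g.
x = np.array([1.0, 0.0, 0.0, 0.0])   # signal: an impulse at node 0
g = np.exp(-eigvals)                  # e.g. a heat-kernel low-pass filter
x_filtered = U @ (g * (U.T @ x))

print(np.round(x_filtered, 3))
```

In a learned setting, the spectral response `g` would be parametrized and trained; the difficulty the project addresses is making such constructions efficient, stable, and transferable across different domains.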
The goal of this project is to develop geometrically meaningful, intrinsic deep learning methods that generalize learning paradigms such as CNNs, which work successfully on traditional Euclidean data (signals and images), to non-Euclidean domains such as manifolds, graphs, and networks, and to apply them to the most challenging problems in fields involving such data. The project will deal with theoretical models, computational algorithms, and applications.
ERC Consolidator Grant