Systematically Comparing Neural Network Architectures in Relation Learning

Abstract

We offer a systematic comparison of neural network architectures' ability to learn relations from collections of objects. Relational inference is a crucial component of human reasoning from early development, and while the deep learning community has not ignored the subject, relational reasoning is typically studied as part of a larger task, such as visual question answering on CLEVR. We instead generate simple object representations and evaluate models on their ability to learn relations over them in the abstract, which lets us focus on the relevance of each architecture's inductive biases for such reasoning. Our results highlight substantial differences between the models, suggesting there is room for further research in this domain.
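The abstract does not specify how the object representations or relations are generated; the sketch below is only an illustrative assumption of what such a synthetic relational task might look like: sets of random feature vectors labeled by whether any two objects share a discrete attribute (the attribute and all names here are hypothetical, not the paper's actual setup).

```python
import numpy as np

def make_relation_dataset(n_samples=1000, n_objects=4, n_features=8, seed=0):
    """Generate sets of random object vectors with a binary label for a
    simple abstract relation. Here the relation is hypothetical: do any
    two objects in the set share the same discrete attribute, where the
    attribute is the argmax over the first 4 feature dimensions?"""
    rng = np.random.default_rng(seed)
    # each sample is an unordered collection of n_objects feature vectors
    X = rng.normal(size=(n_samples, n_objects, n_features)).astype(np.float32)
    attrs = X[:, :, :4].argmax(axis=-1)  # one discrete attribute per object
    # label 1 iff at least two objects in the set share an attribute value
    y = np.array([len(set(a)) < n_objects for a in attrs], dtype=np.int64)
    return X, y

X, y = make_relation_dataset()
```

A model is then trained to predict `y` from `X` directly, so its performance reflects how well its inductive biases support the relation itself rather than a surrounding perception task.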

Publication
Object-Oriented Learning (OOL): Perception, Representation, and Reasoning Workshop at ICML 2020