Pre-training Molecular Graph Representation with 3D Geometry
Rethinking Self-Supervised Learning on Structured Data
ICLR 2022
Shengchao Liu^{1,2}, Hanchen Wang^{3}, Weiyang Liu^{3,4}, Joan Lasenby^{3}, Hongyu Guo^{5}, Jian Tang^{1,6,7}

^{1}Mila, ^{2}Université de Montréal, ^{3}University of Cambridge, ^{4}MPI for Intelligent Systems, Tübingen, ^{5}National Research Council Canada, ^{6}HEC Montréal, ^{7}CIFAR AI Chair
Abstract
Molecular graph representation learning is a fundamental problem in modern drug and material discovery. Molecular graphs are typically modeled by their 2D topological structures, but it has been recently discovered that 3D geometric information plays a more vital role in predicting molecular functionalities. However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation. To cope with this challenge, we propose the Graph Multi-View Pre-training (GraphMVP) framework where self-supervised learning (SSL) is performed by leveraging the correspondence and consistency between 2D topological structures and 3D geometric views. GraphMVP effectively learns a 2D molecular graph encoder that is enhanced by richer and more discriminative 3D geometry. We further provide theoretical insights to justify the effectiveness of GraphMVP. Finally, comprehensive experiments show that GraphMVP can consistently outperform existing graph SSL methods.
Method: GraphMVP
We start by maximizing a lower bound on the mutual information (MI): $$I(X;Y) \ge \mathcal{L}_{\text{MI}} = \frac{1}{2} \mathbb{E}_{p(x,y)} \big[ \log p(y \mid x) + \log p(x \mid y) \big].$$
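This bound can be verified numerically. The sketch below, using a small discrete joint distribution with illustrative probabilities (not from the paper), computes both $I(X;Y)$ and $\mathcal{L}_{\text{MI}}$ exactly and confirms the inequality; for discrete variables, $\mathcal{L}_{\text{MI}} = I(X;Y) - \tfrac{1}{2}\big(H(X) + H(Y)\big) \le I(X;Y)$ since entropies are non-negative.

```python
import numpy as np

# Toy joint distribution p(x, y) over a 2x2 alphabet (illustrative values).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)  # marginal p(x)
p_y = p_xy.sum(axis=0)  # marginal p(y)

# Mutual information I(X; Y) = E_{p(x,y)}[log p(x,y) / (p(x) p(y))].
mi = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))

# Lower bound L_MI = 1/2 * E_{p(x,y)}[log p(y|x) + log p(x|y)].
p_y_given_x = p_xy / p_x[:, None]
p_x_given_y = p_xy / p_y[None, :]
l_mi = 0.5 * np.sum(p_xy * (np.log(p_y_given_x) + np.log(p_x_given_y)))

assert l_mi <= mi  # the bound holds
```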
This bound decomposes into two conditional log-likelihood terms. We then formulate each conditional term in two ways. In Sec 3.2, we describe how to frame it as an energy-based model (EBM).
 We propose EBM-NCE, a contrastive SSL objective based on noise contrastive estimation (NCE).
 Thus, we connect EBMs and SSL, in particular the latest contrastive learning methods.
 This opens a broad direction in which other EBM training methods (contrastive divergence, score matching) can also be applied. We leave this for future exploration.
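The NCE framing turns density-ratio estimation into binary classification: a score function $f(x, y)$ (an unnormalized energy) should be high on matched (2D, 3D) pairs drawn from $p(x, y)$ and low on mismatched pairs drawn from a noise distribution. A minimal numpy sketch of this objective, with made-up scores standing in for learned pairwise embeddings:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def ebm_nce_loss(score_pos, score_neg):
    """Binary-NCE objective: classify positive pairs (label 1) against
    noise pairs (label 0) using the score as a logit. Returns the loss
    to minimize (the negative NCE objective)."""
    return -(np.mean(np.log(sigmoid(score_pos)))
             + np.mean(np.log(1.0 - sigmoid(score_neg))))

# Hypothetical scores: in practice these would be similarities between
# 2D-graph and 3D-conformer embeddings; here they are illustrative numbers.
pos = np.array([2.0, 1.5, 3.0])    # scores on matched (x, y) pairs
neg = np.array([-1.0, -2.0, 0.5])  # scores on mismatched (noise) pairs

loss_good = ebm_nce_loss(pos, neg)
loss_bad = ebm_nce_loss(neg, pos)  # scores flipped: loss should be worse
assert loss_good < loss_bad
```

In practice one would compute the scores with a learned encoder pair and use a numerically stable logistic loss (e.g. a logits-based binary cross-entropy) rather than the explicit sigmoid shown here.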
 In Sec 3.3, we describe a variational lower bound.
 Direct reconstruction is hard for structured data such as molecular graphs. Thus, we propose variational representation reconstruction (VRR), a generative SSL objective that reconstructs the representation rather than the data itself.
 VRR also provides another perspective for explaining the intuition behind non-contrastive SSL methods (BYOL, SimSiam).
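The reconstruct-in-representation-space idea can be sketched as follows. Instead of decoding a 3D conformer, a projection of the 2D embedding is regressed onto the 3D embedding, with the target treated as a fixed (stop-gradient) signal; this mirrors the predictor/stop-gradient asymmetry of BYOL and SimSiam. All names and values here are illustrative, not the paper's implementation:

```python
import numpy as np

def vrr_loss(z_2d_proj, z_3d_target):
    """Representation reconstruction (sketch): L2 distance between the
    projected 2D embedding and the 3D embedding, where the target is
    held constant (a stand-in for stop-gradient in a real framework)."""
    target = z_3d_target.copy()  # stop-gradient: constant w.r.t. the encoder
    diff = z_2d_proj - target
    return np.mean(np.sum(diff * diff, axis=-1))

# Hypothetical embeddings for a batch of two molecules.
z2d = np.array([[1.0, 0.0], [0.0, 1.0]])  # projected 2D-view embeddings
z3d = np.array([[0.9, 0.1], [0.1, 0.9]])  # 3D-view target embeddings
loss = vrr_loss(z2d, z3d)
```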