Predicting molecular properties remains a difficult task with numerous potential applications, notably in drug discovery. Recently, advances in deep learning, combined with increasing amounts of data, have provided powerful tools for building predictive models. Since molecules can be encoded as graphs, Graph Neural Networks (GNNs) have emerged as a popular architecture choice for this task. Training GNNs to predict molecular properties nevertheless faces the challenge of collecting annotated data, which is an expensive and time-consuming process. In contrast, large databases of molecules without annotations are easy to access. In this setting, self-supervised learning can effectively leverage large amounts of non-annotated data to compensate for the lack of annotated data. In this work, we introduce a self-supervised framework for GNNs tailored specifically to molecular property prediction. Our framework uses multiple pretext tasks targeting different scales of molecules (atoms, fragments, and whole molecules). We evaluate our approach on a representative set of GNN architectures and datasets, and also consider the impact of the choice of input features. Our results show that our framework can effectively improve performance compared to training from scratch, especially in low-data regimes. The improvement varies depending on the dataset, the model architecture and, importantly, on the choice of input feature representation.
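To make the idea of an atom-scale pretext task concrete, the sketch below shows one common self-supervised setup for graphs: mask the features of a node in a toy molecular graph and train a head on top of a single mean-aggregation message-passing step to recover the masked atom's type from its neighbours. This is a hypothetical, minimal NumPy illustration of the general technique, not the authors' actual framework; the graph, the one-layer aggregation, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecular graph: 5 atoms, symmetric adjacency matrix, no self-loops.
# (Hypothetical example graph, not taken from the paper.)
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

NUM_ATOM_TYPES = 4
atom_types = np.array([0, 1, 2, 1, 3])      # ground-truth atom types
X = np.eye(NUM_ATOM_TYPES)[atom_types]      # one-hot input features

# Pretext task: mask one atom's features; the label is its true type.
mask_idx = 2
X_masked = X.copy()
X_masked[mask_idx] = 0.0                    # "mask token" = zero vector

# One round of mean-aggregation message passing over neighbours.
deg = A.sum(axis=1, keepdims=True)
H = (A @ X_masked) / np.maximum(deg, 1.0)

# Linear classification head trained with softmax cross-entropy.
W = rng.normal(scale=0.1, size=(NUM_ATOM_TYPES, NUM_ATOM_TYPES))

def predict_probs(H, W):
    logits = H @ W
    logits = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

target = np.eye(NUM_ATOM_TYPES)[atom_types[mask_idx]]
for _ in range(200):
    probs = predict_probs(H, W)
    grad_logits = probs[mask_idx] - target   # d(cross-entropy)/d(logits)
    W -= 0.5 * np.outer(H[mask_idx], grad_logits)

pred = int(predict_probs(H, W)[mask_idx].argmax())
print(pred)  # recovered type of the masked atom
```

In a full framework this node-level objective would be optimised jointly with fragment- and molecule-level pretext tasks, and the pretrained GNN weights would then be fine-tuned on the annotated property-prediction data.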