Language models can learn complex molecular distributions

Cited by: 84
Authors
Flam-Shepherd, Daniel [1 ,2 ]
Zhu, Kevin [3 ]
Aspuru-Guzik, Alan [1 ,2 ,3 ,4 ]
Affiliations
[1] Univ Toronto, Dept Comp Sci, Toronto, ON M5S 2E4, Canada
[2] Vector Inst Artificial Intelligence, Toronto, ON M5S 1M1, Canada
[3] Univ Toronto, Dept Chem, Toronto, ON M5G 1Z8, Canada
[4] Canadian Inst Adv Res, Toronto, ON M5G 1Z8, Canada
Keywords
PHYSICOCHEMICAL PARAMETERS; DESIGN; DATABASE;
DOI
10.1038/s41467-022-30839-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07 ; 0710 ; 09 ;
Abstract
Generative models for de novo molecular design attract enormous interest as tools for exploring chemical space. Here the authors investigate the application of chemical language models to challenging modeling tasks, demonstrating their capability to learn complex molecular distributions. Deep generative models of molecules have grown immensely in popularity; trained on relevant datasets, these models are used to search through chemical space. The downstream utility of generative models for the inverse design of novel functional compounds depends on their ability to learn the training distribution of molecules. The simplest example is a language model that takes the form of a recurrent neural network and generates molecules using a string representation. Since their initial use, subsequent work has shown that language models are very capable; in particular, recent research has demonstrated their utility in the low-data regime. In this work, we investigate the capacity of simple language models to learn more complex distributions of molecules. For this purpose, we introduce several challenging generative modeling tasks by compiling larger, more complex distributions of molecules, and we evaluate the ability of language models on each task. The results demonstrate that language models are powerful generative models, capable of adeptly learning complex molecular distributions. Language models can accurately generate distributions of the highest-scoring penalized LogP molecules in ZINC15, multi-modal molecular distributions, and the largest molecules in PubChem. The results highlight the limitations of some of the most popular and recent graph generative models, many of which cannot scale to these molecular distributions.
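The abstract's core idea is a language model trained on string (SMILES) representations of molecules, which then samples new strings character by character. The sketch below illustrates that train-then-sample loop with a toy character-level bigram model; this is a deliberately simplified stand-in for the recurrent neural network the paper studies (all names and the four example SMILES strings are hypothetical, not taken from the paper's datasets).

```python
import random
from collections import defaultdict

# Sentinel characters marking the start and end of a SMILES string.
BOS, EOS = "^", "$"

def train(smiles_list):
    """Count character-to-character transitions over the training SMILES."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in smiles_list:
        seq = BOS + s + EOS
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, rng, max_len=80):
    """Generate one string by repeatedly sampling the next character."""
    out, ch = [], BOS
    for _ in range(max_len):
        chars, weights = zip(*counts[ch].items())
        ch = rng.choices(chars, weights=weights)[0]
        if ch == EOS:
            break
        out.append(ch)
    return "".join(out)

# Toy training set: ethanol, ethylamine, propane, benzene (SMILES).
data = ["CCO", "CCN", "CCC", "c1ccccc1"]
model = train(data)
rng = random.Random(0)
samples = [sample(model, rng) for _ in range(5)]
```

A bigram counter captures only adjacent-character statistics, so its samples are often invalid molecules; the RNN described in the paper conditions on the entire prefix, which is what lets it learn ring closures, branches, and the longer-range structure of complex molecular distributions.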
Pages: 10