doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1502.02072 | 50 | Dataset pcba-aid602310 pcba-aid602313 pcba-aid602332 pcba-aid624170 pcba-aid624171 pcba-aid624173 pcba-aid624202 pcba-aid624246 pcba-aid624287 pcba-aid624288 pcba-aid624291 pcba-aid624296* pcba-aid624297* pcba-aid624417 pcba-aid651635 pcba-aid651644 pcba-aid651768 pcba-aid651965 pcba-aid652025 pcba-aid652104 pcba-aid652105 pcba-aid652106 pcba-aid686970 pcba-aid686978* pcba-aid686979* pcba-aid720504 pcba-aid720532* pcba-aid720542 Actives Inactives Target Class 310 762 70 837 1239 488 3968 101 423 1356 222 9841 6214 6388 3784 748 1677 6422 238 7126 4072 496 5949 62 746 48 816 10 170 945 733 402 026 383 076 415 773 404 440 402 621 406 224 372 045 367 273 334 388 336 077 345 619 333 378 336 050 398 731 387 779 361 115 362 320 331 953 364 365 396 566 | 1502.02072#50 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
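The abstract above describes the architecture only at a high level: hidden layers shared across all assays, with an independent binary output per task. The forward pass can be sketched as follows; this is an illustrative sketch, not the authors' implementation, and the layer width, the 1024-bit fingerprint input, and the 259-task head count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultitaskNet:
    """Shared hidden layer + one binary head per assay (hypothetical sizes)."""

    def __init__(self, n_features, n_hidden, n_tasks):
        self.W_shared = rng.normal(scale=0.01, size=(n_features, n_hidden))
        self.b_shared = np.zeros(n_hidden)
        # One independent output head per task (e.g. per PCBA/MUV/Tox21/DUD-E assay).
        self.W_heads = rng.normal(scale=0.01, size=(n_tasks, n_hidden))
        self.b_heads = np.zeros(n_tasks)

    def forward(self, x):
        h = relu(x @ self.W_shared + self.b_shared)        # shared representation
        return sigmoid(h @ self.W_heads.T + self.b_heads)  # per-task P(active)

net = MultitaskNet(n_features=1024, n_hidden=64, n_tasks=259)
# Toy binary "fingerprints" standing in for real molecular featurizations.
fingerprints = rng.integers(0, 2, size=(5, 1024)).astype(float)
probs = net.forward(fingerprints)
print(probs.shape)  # (5, 259)
```

The point of the shared layer is that gradients from every assay update the same representation, which is the mechanism behind the multitask improvements the abstract reports.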
1502.02072 | 51 | 773 404 440 402 621 406 224 372 045 367 273 334 388 336 077 345 619 333 378 336 050 398 731 387 779 361 115 362 320 331 953 364 365 396 566 324 774 368 281 358 501 354 086 368 048 353 881 14 532 363 349 path- path- path- Target Vif-A3G Vif-A3F GRP78 GLS Nrf2 PYK BRCA1 ERG Gsgsp Gsgsp a7 DNA re-replication DNA re-replication GLP-1 ATXN Vpr WRN ClpP IL-2 TDP-43 PI5P4K alpha-synuclein HT-1080-NT DT40-hTDP1 DT40-hTDP1 Plk1 PBD Marburg virus AMA1-RON2 | 1502.02072#51 |
1502.02072 | 52 | protein-protein interaction protein-protein interaction promoter other enzyme transcription factor other enzyme promoter miscellaneous signalling pathway signalling pathway promoter miscellaneous miscellaneous GPCR promoter miscellaneous other enzyme protease signalling pathway miscellaneous other enzyme miscellaneous viability viability viability protein kinase miscellaneous protein-protein interaction ion channel ion channel miscellaneous miscellaneous other enzyme other enzyme other enzyme other enzyme protease GPCR GPCR protein kinase transcription factor protein kinase other enzyme other receptor

| Dataset | Actives | Inactives | Target |
|---|---|---|---|
| pcba-aid720551* | 1265 | 342 387 | KCHN2 3.1 |
| pcba-aid720553* | 3260 | 338 810 | KCHN2 3.1 |
| pcba-aid720579* | 1913 | 304 815 | orthopoxvirus |
| pcba-aid720580* | 1508 | 324 844 | orthopoxvirus |
| pcba-aid720707 | 268 | 364 332 | EPAC1 |
| pcba-aid720708 | 661 | 363 939 | EPAC2 |
| pcba-aid720709 | 516 | 364 084 | EPAC1 |
| pcba-aid720711 | 290 | 364 310 | EPAC2 |
| pcba-aid743255 | 902 | 388 656 | USP1/UAF1 |
| pcba-aid743266 | 306 | 405 368 | PTHR1 |
| muv-aid466 | 30 | 14 999 | S1P1 receptor |
| muv-aid548 | 30 | 15 000 | PKA |
| muv-aid600 | 30 | 14 999 | SF1 |

| 1502.02072#52 |
1502.02072 | 54 | Dataset muv-aid692 muv-aid712* muv-aid713* muv-aid733 muv-aid737* muv-aid810* muv-aid832 muv-aid846 muv-aid852 muv-aid858 muv-aid859 tox-NR-AhR tox-NR-AR-LBD* tox-NR-AR* tox-NR-Aromatase tox-NR-ER-LBD* tox-NR-ER* tox-NR-PPAR-gamma* tox-SR-ARE tox-SR-ATAD5 tox-SR-HSE tox-SR-MMP tox-SR-p53 dude-aa2ar dude-abl1 dude-ace dude-aces dude-ada dude-ada17 dude-adrb1 dude-adrb2 dude-akt1 dude-akt2 Actives Inactives Target Class 30 30 30 30 30 30 30 30 30 30 30 768 237 309 300 350 793 186 942 264 372 919 423 482 182 282 453 93 532 247 231 293 15 000 14 997 15 000 15 000 14 999 14 999 15 000 15 000 15 000 14 999 15 000 5780 6520 6955 5521 6604 5399 6263 4889 6807 | 1502.02072#54 |
1502.02072 | 55 | 15 000 14 997 15 000 15 000 14 999 14 999 15 000 15 000 15 000 14 999 15 000 5780 6520 6955 5521 6604 5399 6263 4889 6807 6094 4891 6351 31 546 10 749 16 899 26 240 5450 35 900 15 848 14 997 16 441 transcription factor miscellaneous protein-protein interaction protein-protein interaction protein-protein interaction protein kinase protease protease protease GPCR GPCR transcription factor transcription factor transcription factor other enzyme transcription factor transcription factor transcription factor miscellaneous promoter miscellaneous miscellaneous miscellaneous GPCR protein kinase protease other enzyme other enzyme protease GPCR GPCR protein kinase 117 6899 protein kinase Target SF1 HSP90 ER-a-coact. bind. ER-b-coact. bind. ER-a-coact. bind. FAK Cathepsin G FXIa FXIIa D1 receptor M1 receptor Aryl hydrocarbon receptor Androgen receptor Androgen receptor Aromatase Estrogen receptor alpha Estrogen receptor alpha PPARg kinase kinase | 1502.02072#55 |
1502.02072 | 56 | ARE ATAD5 HSE mitochondrial membrane potential p53 signalling Adenosine A2a receptor Tyrosine-protein kinase ABL Angiotensin-converting enzyme Acetylcholinesterase Adenosine deaminase ADAM17 Beta-1 adrenergic receptor Beta-2 adrenergic receptor Serine/threonine-protein AKT Serine/threonine-protein AKT2 Aldose reductase Beta-lactamase Androgen Receptor

| Dataset | Actives | Inactives | Target Class | Target |
|---|---|---|---|---|
| dude-aldr | 159 | 8999 | other enzyme | Aldose reductase |
| dude-ampc | 48 | 2850 | other enzyme | Beta-lactamase |
| dude-andr* | 269 | 14 350 | transcription factor | Androgen Receptor |
| dude-aofb | 122 | 6900 | other enzyme | Monoamine oxidase B |
| dude-bace1 | 283 | 18 097 | protease | Beta-secretase 1 |

Massively Multitask Networks for Drug Discovery | 1502.02072#56 |
1502.02072 | 57 | Dataset dude-braf dude-cah2 dude-casp3 dude-cdk2 dude-comt dude-cp2c9 dude-cp3a4 dude-csf1r dude-cxcr4 dude-def dude-dhi1 dude-dpp4 dude-drd3 dude-dyr dude-egfr dude-esr1* dude-esr2 dude-fa10 dude-fa7 dude-fabp4 dude-fak1* dude-fgfr1 dude-fkb1a dude-fnta dude-fpps dude-gcr dude-glcm* dude-gria2 dude-grik1 dude-hdac2 dude-hdac8 dude-hivint dude-hivpr Actives Inactives Target Class 152 9950 protein kinase 492 199 474 41 120 170 166 31 168 10 700 27 850 3850 7449 11 800 12 149 other enzyme protease protein kinase other enzyme other enzyme other enzyme other receptor 40 102 330 3406 5700 19 350 GPCR other enzyme other enzyme 533 480 231 542 40 943 34 037 17 192 35 047 protease GPCR other enzyme other receptor 383 | 1502.02072#57 |
1502.02072 | 58 | 19 350 GPCR other enzyme other enzyme 533 480 231 542 40 943 34 037 17 192 35 047 protease GPCR other enzyme other receptor 383 367 537 114 47 20 675 20 190 28 315 6250 2750 transcription factor transcription factor protease protease miscellaneous 100 139 111 592 5350 8697 5800 51 481 protein kinase other receptor other enzyme other enzyme 85 258 54 158 101 8829 14 999 3800 11 842 6549 other enzyme transcription factor other enzyme ion channel ion channel 185 170 100 10 299 10 449 6650 other enzyme other enzyme other enzyme 536 35 746 protease Target Serine/threonine-protein kinase B-raf Carbonic anhydrase II Caspase-3 Cyclin-dependent kinase 2 Catechol O-methyltransferase Cytochrome P450 2C9 Cytochrome P450 3A4 Macrophage colony stimulating factor receptor C-X-C chemokine receptor type 4 Peptide deformylase 11-beta-hydroxysteroid dehydrogenase 1 Dipeptidyl peptidase IV Dopamine D3 receptor Dihydrofolate reductase Epidermal growth factor receptor erbB1 Estrogen receptor alpha Estrogen receptor beta | 1502.02072#58 |
1502.02072 | 59 | IV Dopamine D3 receptor Dihydrofolate reductase Epidermal growth factor receptor erbB1 Estrogen receptor alpha Estrogen receptor beta Coagulation factor X Coagulation factor VII Fatty acid binding adipocyte FAK Fibroblast growth factor receptor 1 FK506-binding protein 1A Protein farnesyltransferase/geranylgeranyltransferase type I alpha subunit Farnesyl diphosphate synthase Glucocorticoid receptor receptor ionotropic | 1502.02072#59 |
1502.02072 | 60 | glucocerebrosidase Glutamate receptor ionotropic Glutamate kainate 1 Histone deacetylase 2 Histone deacetylase 8 Human immunodeficiency virus type 1 integrase Human immunodeficiency virus type 1 protease Human immunodeficiency virus type 1 reverse transcriptase HMG-CoA reductase HSP90 Hexokinase type IV

| Dataset | Actives | Inactives | Target Class | Target |
|---|---|---|---|---|
| dude-hivrt | 338 | 18 891 | other enzyme | Human immunodeficiency virus type 1 reverse transcriptase |
| dude-hmdh | 170 | 8748 | other enzyme | HMG-CoA reductase |
| dude-hs90a* | 88 | 4849 | miscellaneous | HSP90 |
| dude-hxk4 | 92 | 4700 | other enzyme | Hexokinase type IV |

| 1502.02072#60 |
1502.02072 | 61 | Dataset dude-igf1r dude-inha dude-ital dude-jak2 dude-kif11 dude-kit dude-kith dude-kpcb dude-lck dude-lkha4 dude-mapk2 dude-mcr dude-met dude-mk01 dude-mk10 dude-mk14 dude-mmp13 dude-mp2k1 dude-nos1 dude-nram dude-pa2ga dude-parp1 dude-pde5a dude-pgh1 dude-pgh2 dude-plk1 dude-pnph dude-ppara dude-ppard dude-pparg* dude-prgr Actives Inactives Target Class 148 9298 other receptor 43 2300 other enzyme 138 8498 miscellaneous 107 116 166 57 135 420 171 101 6499 6849 10 449 2849 8700 27 397 9450 6150 protein kinase miscellaneous other receptor other enzyme protein kinase protein kinase protease protein kinase 94 166 79 104 578 572 121 5150 11 247 4549 6600 35 848 37 195 8149 transcription factor other receptor protein kinase protein kinase protein kinase protease protein kinase 100 98 99 508 398 195 435 107 8048 | 1502.02072#61 |
1502.02072 | 62 | 35 848 37 195 8149 transcription factor other receptor protein kinase protein kinase protein kinase protease protein kinase 100 98 99 508 398 195 435 107 8048 6199 5150 30 049 27 547 10 800 23 149 6800 other enzyme other enzyme other enzyme other enzyme other enzyme other enzyme other enzyme protein kinase 103 373 240 484 293 6950 19 397 12 247 25 296 15 648 Target Insulin-like growth factor I receptor Enoyl-[acyl-carrier-protein] reductase Leukocyte adhesion glycoprotein LFA-1 alpha Tyrosine-protein kinase JAK2 Kinesin-like protein 1 Stem cell growth factor receptor Thymidine kinase Protein kinase C beta Tyrosine-protein kinase LCK Leukotriene A4 hydrolase MAP kinase-activated protein kinase 2 Mineralocorticoid receptor Hepatocyte growth factor receptor MAP kinase ERK2 c-Jun N-terminal kinase 3 MAP kinase p38 alpha Matrix metalloproteinase 13 Dual specificity mitogen-activated protein kinase kinase 1 Nitric-oxide synthase Neuraminidase Phospholipase A2 group IIA Poly [ADP-ribose] polymerase-1 | 1502.02072#62 |
1502.02072 | 64 | other enzyme transcription factor transcription factor transcription factor transcription factor other enzyme other enzyme other enzyme other enzyme protease protein kinase transcription factor other enzyme protein kinase

| Dataset | Actives | Inactives | Target |
|---|---|---|---|
| dude-ptn1 | 130 | 7250 | Protein-tyrosine phosphatase 1B |
| dude-pur2 | 50 | 2698 | GAR transformylase |
| dude-pygm | 77 | 3948 | Muscle glycogen phosphorylase |
| dude-pyrd | 111 | 6450 | Dihydroorotate dehydrogenase |
| dude-reni | 104 | 6958 | Renin |
| dude-rock1 | 100 | 6299 | Rho-associated protein kinase 1 |
| dude-rxra | 131 | 6948 | Retinoid X receptor alpha |
| dude-sahh | 63 | 3450 | Adenosylhomocysteinase |
| dude-src | 524 | 34 491 | Tyrosine-protein kinase SRC |

| 1502.02072#64 |
1502.02072 | 65 | 63 524
3450 34 491
# Adenosylhomocysteinase Tyrosine-protein kinase SRC
Massively Multitask Networks for Drug Discovery
Dataset dude-tgfr1 dude-thb dude-thrb dude-try1 dude-tryb1 dude-tysy dude-urok dude-vgfr2 dude-wee1 dude-xiap Actives Inactives Target Class Target 133 103 461 449 148 109 162 409 102 8500 7448 26 999 25 967 7648 6748 9850 24 946 6150 other receptor transcription fac- tor protease protease protease other enzyme protease other receptor protein kinase TGF-beta receptor type I Thyroid hormone receptor beta-1 Thrombin Trypsin I Tryptase beta-1 Thymidylate synthase Urokinase-type plasminogen acti- vator Vascular endothelial growth factor receptor 2 Serine/threonine-protein WEE1 Inhibitor of apoptosis protein 3 kinase 100 5149 miscellaneous
Table A.2. Featurization failures.
Group Original Featurized Failure Rate (%) 439 879 PCBA DUD-E 1 200 966 95 916 MUV 11 764 Tox21 437 928 1 200 406 95 899 7830 0.44 0.05 0.02 33.44 | 1502.02072#65 | Massively Multitask Networks for Drug Discovery | Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process. | http://arxiv.org/pdf/1502.02072 | Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, Vijay Pande | stat.ML, cs.LG, cs.NE | Preliminary work. Under review by the International Conference on
Machine Learning (ICML) | null | stat.ML | 20150206 | 20150206 | [] |
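The failure rates in Table A.2 follow directly from its Original and Featurized molecule counts. A quick arithmetic check (the dictionary layout is only for illustration):

```python
# Table A.2 sanity check: rate = 100 * (original - featurized) / original.
counts = {
    "PCBA":  (439_879, 437_928),
    "DUD-E": (1_200_966, 1_200_406),
    "MUV":   (95_916, 95_899),
    "Tox21": (11_764, 7_830),
}
rates = {group: round(100 * (orig - feat) / orig, 2)
         for group, (orig, feat) in counts.items()}
print(rates)  # {'PCBA': 0.44, 'DUD-E': 0.05, 'MUV': 0.02, 'Tox21': 33.44}
```

All four recomputed rates match the table, including the outlier Tox21 group, where roughly a third of molecules failed featurization.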
1502.02072 | 66 | [Figure A.1: bar chart of the number of datasets per target class, with bars for DUD-E, Tox21, MUV, and PCBA; axes are Count vs. Target Class.]

Figure A.1. Target class breakdown. Classes with fewer than five members were merged into the "miscellaneous" class.

Table A.3. Held-in datasets. | 1502.02072#66 |
1502.02072 | 67 | Massively Multitask Networks for Drug Discovery
Table A.3. Held-in datasets.
| Dataset | Actives | Inactives | Target Class | Target |
|---|---|---|---|---|
| pcba-aid899 | 1809 | 7575 | other enzyme | CYP2C19 |
| pcba-aid485297 | 9126 | 311 481 | promoter | Rab9 |
| pcba-aid651644 | 748 | 361 115 | miscellaneous | Vpr |
| pcba-aid651768 | 1677 | 362 320 | other enzyme | WRN |
| pcba-aid743266 | 306 | 405 368 | GPCR | PTHR1 |
| muv-aid466 | 30 | 14 999 | GPCR | S1P1 receptor |
| muv-aid852 | 30 | 15 000 | protease | FXIIa |
| muv-aid859 | 30 | 15 000 | GPCR | M1 receptor |
| tox-NR-Aromatase | 300 | 5521 | other enzyme | Aromatase |
| tox-SR-MMP | 919 | 4891 | miscellaneous | mitochondrial membrane potential |
Table A.4. Held-out datasets.
| Dataset | Actives | Inactives | Target Class | Target |
|---|---|---|---|---|
| pcba-aid1461 | 2305 | 218 561 | GPCR | NPSR |
| pcba-aid2675 | 99 | 279 333 | miscellaneous | MBNL1-CUG |
| pcba-aid602233 | 165 | 380 904 | other enzyme | PGK |
| pcba-aid624417 | 6388 | 398 731 | GPCR | GLP-1 |
| pcba-aid652106 | 496 | 368 281 | miscellaneous | alpha-synuclein |
| muv-aid548 | 30 | 15 000 | protein kinase | PKA |
| muv-aid832 | 30 | 15 000 | protease | Cathepsin G |
| muv-aid846 | 30 | 15 000 | protease | FXIa |
| tox-NR-AhR | 768 | 5780 | transcription factor | Aryl hydrocarbon receptor |
| tox-SR-ATAD5 | 264 | 6807 | promoter | ATAD5 |
[Figure A.2 heatmap omitted: pairwise intersection fractions among the PCBA, MUV, Tox21, and DUD-E datasets; color scale runs from 0.0 to 0.8.]
Figure A.2. Pairwise dataset intersections. The value of the element at position (x, y) corresponds to the fraction of dataset x that is contained in dataset y. Thin black lines are used to indicate divisions between dataset groups.
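The intersection fraction plotted in Figure A.2 is straightforward to compute from compound-identifier sets. A minimal sketch (the dataset names and compound IDs below are illustrative, not taken from the paper):

```python
def intersection_fractions(datasets):
    """For each ordered pair (x, y), compute the fraction of the
    compounds in dataset x that also appear in dataset y."""
    names = sorted(datasets)
    return {
        (x, y): len(datasets[x] & datasets[y]) / len(datasets[x])
        for x in names
        for y in names
    }

# Toy example with made-up compound identifiers.
datasets = {
    "pcba-aid899": {"C1", "C2", "C3", "C4"},
    "muv-aid466": {"C3", "C4"},
}
fracs = intersection_fractions(datasets)
```

Note that the resulting matrix is asymmetric — a small dataset can be fully contained in a large one while the converse fraction stays small — which is why Figure A.2 plots ordered pairs.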
[Figure A.3 boxplots omitted: Δ log-odds-mean-AUC (y-axis roughly 0.0 to 2.0) for duplicate versus unique targets.]
Figure A.3. Multitask performance of duplicate and unique targets. Outliers are omitted for clarity. Notches indicate a confidence interval around the median, computed as ±1.57 × IQR/√N.
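The notch interval used in Figures A.3 and B.1 can be sketched directly; the quantile interpolation scheme below is one common choice, not necessarily the one used to produce the paper's plots:

```python
import math

def notch_interval(values):
    """Confidence interval around the median, computed as
    median +/- 1.57 * IQR / sqrt(N) (McGill et al., 1978)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between adjacent order statistics.
        pos = q * (n - 1)
        lo, hi = int(math.floor(pos)), int(math.ceil(pos))
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    median = quantile(0.5)
    iqr = quantile(0.75) - quantile(0.25)
    half = 1.57 * iqr / math.sqrt(n)
    return median - half, median + half

# Five illustrative AUC values.
lo, hi = notch_interval([0.70, 0.75, 0.80, 0.85, 0.90])
```

With few datasets (small N) and a wide IQR the notch can exceed the quartiles, which is the "folded down" effect mentioned in the Figure B.1 caption.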
# B. Performance metrics
Table B.1. Sign test CIs for each group of datasets. Each model is compared to the Pyramidal (2000, 100) Multitask Neural Net, .25 Dropout model.
| Model | PCBA (n = 128) | MUV (n = 17) | Tox21 (n = 12) |
|---|---|---|---|
| Logistic Regression (LR) | [.03, .11] | [.13, .53] | [.00, .24] |
| Random Forest (RF) | [.05, .16] | [.00, .18] | [.14, .61] |
| Single-Task Neural Net (STNN) | [.02, .10] | [.13, .53] | [.00, .24] |
| Pyramidal (2000, 100) STNN, .25 Dropout (PSTNN) | [.05, .15] | [.13, .53] | [.00, .24] |
| Max{LR, RF, STNN, PSTNN} | [.09, .21] | [.13, .53] | [.14, .61] |
| 1-Hidden (1200) Layer Multitask Neural Net (MTNN) | [.05, .15] | [.22, .64] | [.01, .35] |
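The sign-test intervals above compare, dataset by dataset, whether a model's score beats the reference model's, and place a confidence interval on the win fraction. This excerpt does not state the exact interval construction used, so the sketch below uses a Wilson score interval as one standard choice; the win count is illustrative:

```python
import math

def sign_test_ci(wins, n, z=1.96):
    """Approximate 95% CI for the fraction of datasets on which a model
    outperforms the reference, via a Wilson score interval on the win
    proportion (ties assumed excluded)."""
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), min(1.0, center + half)

# Illustrative: a model that beats the reference on 8 of 128 datasets.
lo, hi = sign_test_ci(wins=8, n=128)
```

An interval entirely below 0.5 indicates the model loses to the reference on a significant majority of datasets.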
Table B.2. Enrichment scores for all models reported in Table 2. Each value is the median across the datasets in a group of the mean k-fold enrichment values. Enrichment is an alternate measure of model performance common in virtual drug screening. We use the "ROC enrichment" definition from (Jain & Nicholls, 2008); roughly, enrichment measures how many times better than random a model's top X% of predictions are.
| Model | PCBA 0.5% | 1% | 2% | 5% | MUV 0.5% | 1% | 2% | 5% | Tox21 0.5% | 1% | 2% | 5% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LR | 19.4 | 16.5 | 12.1 | 7.9 | 20.0 | 23.3 | 15.0 | 8.0 | 23.9 | 18.3 | 10.6 | 6.7 |
| RF | 40.0 | 27.4 | 17.4 | 9.1 | 40.0 | 26.7 | 16.7 | 7.3 | 23.2 | 19.5 | 13.6 | 7.8 |
| STNN | 19.0 | 15.6 | 11.8 | 7.7 | 26.7 | 20.0 | 11.7 | 8.0 | 16.2 | 14.4 | 9.8 | 6.1 |
| PSTNN | 21.8 | 16.9 | 12.4 | 7.9 | 26.7 | 16.7 | 13.3 | 8.0 | 23.8 | 16.1 | 10.0 | 6.7 |
| MTNN | 33.8 | 23.6 | 16.9 | 9.8 | 26.7 | 16.7 | 16.7 | 8.7 | 24.5 | 18.0 | 11.4 | 6.9 |
| PMTNN | 43.8 | 29.6 | 19.7 | 11.2 | 40.0 | 23.3 | 16.7 | 10.0 | 23.5 | 18.5 | 13.7 | 8.1 |
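ROC enrichment at a false-positive rate X, per Jain & Nicholls (2008), is the true-positive rate achieved at that false-positive rate divided by X, so a random ranking scores about 1. A minimal sketch (tie handling simplified; the scores and labels are illustrative):

```python
def roc_enrichment(scores, labels, fpr):
    """ROC enrichment at false-positive rate `fpr`: the true-positive
    rate achieved when a fraction `fpr` of the inactives score above
    the threshold, divided by `fpr`."""
    actives = [s for s, y in zip(scores, labels) if y == 1]
    inactives = sorted((s for s, y in zip(scores, labels) if y == 0),
                       reverse=True)
    # Score threshold at which the top `fpr` fraction of inactives passes.
    k = max(1, int(round(fpr * len(inactives))))
    threshold = inactives[k - 1]
    tpr = sum(1 for s in actives if s > threshold) / len(actives)
    return tpr / fpr

# Toy screen: 2 actives ranked above 10 inactives -> enrichment 10 at 10%.
labels = [1, 1] + [0] * 10
scores = [0.9, 0.8] + [0.7 - 0.05 * i for i in range(10)]
e = roc_enrichment(scores, labels, fpr=0.1)
```

This is why the MUV values in the table are quantized: with only 30 actives per dataset, the achievable true-positive rates come in steps of 1/30.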
[Figure B.1 boxplots omitted: three panels (PCBA, MUV, Tox21) of 5-fold-average AUC per model, y-axes spanning roughly 0.4 to 1.0.]
Figure B.1. Graphical representation of data from Table 2 in the text. Notches indicate a confidence interval around the median, computed as ±1.57 × IQR/√N (McGill et al., 1978). Occasionally the notch limits go beyond the quartile markers, producing a "folded down" effect on the boxplot. Paired t-tests (2-sided) relative to the PMTNN across all non-DUD-E datasets gave p ≤ 1.86 × 10^-15.
# C. Training Details
The multitask networks in Table 2 were trained with learning rate .0003 and batch size 128 for 50M steps using stochastic gradient descent. Weights were initialized from a zero-mean Gaussian with standard deviation .01. The bias was initialized at .5. We experimented with higher learning rates, but found that the pyramidal networks sometimes failed to train (the top hidden layer zeroed itself out). However, this effect vanished with the lower learning rate. Most of the models were trained with 64 simultaneous replicas sharing their gradient updates, but in some cases we used as many as 256.
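Under the stated hyperparameters, the initialization can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the fingerprint width (1024) and task count (259) are assumptions, the training loop and gradient-sharing replicas are omitted, and each task is given an independent two-way softmax head as described for the multitask architecture.

```python
import numpy as np

def init_pyramidal_mtnn(n_features, n_tasks, hidden=(2000, 100),
                        weight_std=0.01, bias_init=0.5, seed=0):
    """Zero-mean Gaussian weights (std .01) and biases at .5, as in the
    text, for a Pyramidal (2000, 100) multitask net with one two-way
    softmax output head per task."""
    rng = np.random.default_rng(seed)
    sizes = [n_features, *hidden]
    layers = [(rng.normal(0.0, weight_std, size=(n_in, n_out)),
               np.full(n_out, bias_init))
              for n_in, n_out in zip(sizes, sizes[1:])]
    heads = [(rng.normal(0.0, weight_std, size=(sizes[-1], 2)),
              np.full(2, bias_init))
             for _ in range(n_tasks)]
    return {"layers": layers, "heads": heads}

# Illustrative sizes: 1024-bit fingerprints, 259 tasks.
params = init_pyramidal_mtnn(n_features=1024, n_tasks=259)
```

The small weight scale matters here: with a larger learning rate, the narrow top layer of the pyramid can collapse to all zeros early in training, the failure mode described above.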
The pyramidal single-task networks were trained with the same settings, but for 100K steps. The vanilla single-task networks were trained with learning rate .001 for 100K steps. The networks used in Figure 3 and Figure 4 were trained with learning rate 0.003 for 500 epochs plus a constant 3 million steps. The constant factor was introduced after we observed that the smaller multitask networks required more epochs than the larger networks to stabilize.
The networks in Figure 5 were trained with a Pyramidal (1000, 50) Single Task architecture (matching the networks in Figure 3). The weights were initialized with the weights from the networks represented in Figure 3 and then trained for 100K steps with a learning rate of 0.0003.
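This warm-start procedure amounts to copying the shared layers from the trained multitask network and keeping a single task's head before fine-tuning. A minimal sketch with toy parameters (the nested-list parameter layout is illustrative):

```python
import copy

def warm_start_single_task(multitask_params, task_index):
    """Build single-task parameters from a trained multitask network:
    shared hidden layers are copied verbatim and only the chosen task's
    output head is kept; fine-tuning then proceeds as usual (omitted)."""
    return {
        "layers": copy.deepcopy(multitask_params["layers"]),
        "head": copy.deepcopy(multitask_params["heads"][task_index]),
    }

# Toy multitask parameters: two shared layers, three task heads.
mt = {
    "layers": [[[0.1, 0.2]], [[0.3]]],
    "heads": [[[1.0]], [[2.0]], [[3.0]]],
}
st = warm_start_single_task(mt, task_index=1)
```

The deep copy keeps fine-tuning on the single-task network from mutating the original multitask weights.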
As we noted in the main text, the datasets in our collection contained many more inactive than active compounds. To ensure the actives were given adequate importance during training, we weighted the actives for each dataset to have total weight equal to the number of inactives for that dataset (inactives were given unit weight).
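The weighting rule is simple arithmetic: each active receives weight n_inactive / n_active, so the actives' total weight equals the number of inactives. A sketch:

```python
def example_weights(labels):
    """Inactives get weight 1; each active gets n_inactive / n_active,
    so the actives' total weight equals the number of inactives."""
    n_active = sum(labels)
    n_inactive = len(labels) - n_active
    w_active = n_inactive / n_active
    return [w_active if y == 1 else 1.0 for y in labels]

labels = [1, 0, 0, 0, 0, 1, 0, 0]   # 2 actives, 6 inactives
weights = example_weights(labels)
# Active total: 2 * 3.0 == 6 == number of inactives.
```

For a heavily imbalanced PCBA assay (hundreds of actives versus hundreds of thousands of inactives), each active thus carries a weight in the hundreds.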
Table C.1 contains the results of our pyramidal model sensitivity analysis. Tables C.2 and C.3 give results for a variety of additional models not reported in Table 2.
Table C.1. Pyramid sensitivity analysis. Median 5-fold-average-AUC values are given for several variations of the pyramidal architecture. To avoid training failures caused by the top layer becoming all zero early in training, the learning rate was set to 0.0001 for the first 2M steps and then to 0.0003 for 28M steps.
| Model | PCBA (n = 128) | MUV (n = 17) | Tox21 (n = 12) |
|---|---|---|---|
| Pyramidal (1000, 50) MTNN | .846 | .825 | .799 |
| Pyramidal (1000, 100) MTNN | .845 | .818 | .796 |
| Pyramidal (1000, 150) MTNN | .842 | .812 | .798 |
| Pyramidal (2000, 50) MTNN | .846 | .819 | .794 |
| Pyramidal (2000, 100) MTNN | .846 | .821 | .798 |
| Pyramidal (2000, 150) MTNN | .845 | .839 | .792 |
| Pyramidal (3000, 50) MTNN | .848 | .801 | .796 |
| Pyramidal (3000, 100) MTNN | .844 | .804 | .799 |
| Pyramidal (3000, 150) MTNN | .843 | .810 | .789 |
Table C.2. Descriptions for additional models. MTNN: multitask neural net. "Auxiliary heads" refers to the attachment of independent softmax units for each task to hidden layers (see Szegedy et al., 2014). Unless otherwise marked, assume 10M training steps.
A  8-Hidden (300) Layer MTNN, auxiliary heads attached to hidden layers 3 and 6, 6M steps
B  1-Hidden (3000) Layer MTNN, 1M steps
C  1-Hidden (3000) Layer MTNN, 1.5M steps
D  Pyramidal (1800, 100), 2 deep, reconnected (original input concatenated to first pyramid output)
E  Pyramidal (1800, 100), 3 deep
F  4-Hidden (1000) Layer MTNN, auxiliary heads attached to hidden layer 2, 4.5M steps
G  Pyramidal (2000, 100) MTNN, 10% connected
H  Pyramidal (2000, 100) MTNN, 50% connected
I  Pyramidal (2000, 100) MTNN, .001 learning rate
J  Pyramidal (2000, 100) MTNN, 50M steps, .0003 learning rate
K  Pyramidal (2000, 100) MTNN, .25 Dropout (first layer only), 50M steps
L  Pyramidal (2000, 100) MTNN, .25 Dropout, .001 learning rate
Table C.3. Median 5-fold-average AUC values for additional models. Sign test confidence intervals and paired t-test (2-sided) p-values are relative to the PMTNN from Table 2 and were calculated across all non-DUD-E datasets.
| Model | PCBA (n = 128) | MUV (n = 17) | Tox21 (n = 12) | Sign Test CI | Paired t-Test |
|---|---|---|---|---|---|
| A | .836 | .793 | .786 | [.01, .06] | 9.37 × 10^-43 |
| B | .835 | .855 | .769 | [.11, .22] | 1.17 × 10^-17 |
| C | .837 | .851 | .765 | [.12, .24] | 2.60 × 10^-16 |
| D | .842 | .842 | .816 | [.08, .18] | 1.89 × 10^-21 |
| E | .842 | .808 | .789 | [.02, .08] | 9.25 × 10^-43 |
| F | .858 | .836 | .810 | [.10, .22] | 4.85 × 10^-13 |
| G | .831 | .795 | .774 | [.03, .11] | 1.15 × 10^-31 |
| H | .856 | .827 | .796 | [.04, .13] | 5.34 × 10^-21 |
| I | .860 | .862 | .824 | [.07, .17] | 6.23 × 10^-14 |
| J | .830 | .810 | .801 | [.05, .14] | 9.25 × 10^-25 |
| K | .859 | .843 | .803 | [.24, .38] | 3.25 × 10^-9 |
| L | .872 | .837 | .802 | [.35, .50] | 2.74 |
# References
Jain, Ajay N. and Nicholls, Anthony. Recommendations for evaluation of computational methods. Journal of Computer-Aided Molecular Design, 22(3-4):133–139, 2008.
McGill, Robert, Tukey, John W., and Larsen, Wayne A. Variations of box plots. The American Statistician, 32(1):12–16, 1978.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.