---
title: A comparison of nine machine learning models based on an expanded mutagenicity dataset and their application for predicting pyrrolizidine alkaloid mutagenicity

author:
  - Christoph Helma:
      institute: ist
      email: helma@in-silico.ch
      correspondence: "yes"
  - Verena Schöning:
      institute: insel
  - Philipp Boss:
      institute: sysbio
  - Jürgen Drewe:
      institute: zeller

institute:
  - ist:
      name: in silico toxicology gmbh
      address: "Rastatterstrasse 41, 4057 Basel, Switzerland"
  - zeller: 
      name: Zeller AG
      address: "Seeblickstrasse 4, 8590 Romanshorn, Switzerland"
  - sysbio:
      name: Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine in the Helmholtz Association
      address: "Robert-Rössle-Strasse 10, Berlin, 13125, Germany"
  - insel:
      name: Clinical Pharmacology and Toxicology, Department of General Internal Medicine, Bern University Hospital, University of Bern
      address: "Inselspital, 3010 Bern, Switzerland"

bibliography: bibliography.bib
keywords: mutagenicity, QSAR, lazar, random forest, support vector machine, linear regression, neural nets, deep learning, pyrrolizidine alkaloids, OpenBabel, CDK

documentclass: scrartcl
tblPrefix: Table
figPrefix: Figure
header-includes:
    - \usepackage{lineno, setspace, color, colortbl, longtable}
    - \doublespacing
    - \linenumbers
...

Abstract
========

Random forest, support vector machine, logistic regression, neural network and
k-nearest neighbour (`lazar`) algorithms were applied to a new *Salmonella*
mutagenicity dataset with {{cv.n_uniq}} unique chemical structures.
<!--
The best prediction
accuracies in 10-fold-crossvalidation were obtained with `lazar` models and
MolPrint2D descriptors, that gave accuracies
({{cv.lazar-high-confidence.acc_perc}}%)
similar to the interlaboratory variability of the Ames test.
-->

**TODO**: PA results

Introduction
============

**TODO**: rationale for investigation

<!---
Pyrrolizidine alkaloids (PAs) are secondary plant ingredients found in
many plant species as protection against predators [Hartmann & Witte
1995](#_ENREF_59)[Langel et al. 2011](#_ENREF_76)(; ). PAs are ester
alkaloids, which are composed of a necine base (two fused five-membered
rings joined by a nitrogen atom) and one or two necic acid (carboxylic
ester arms). The necine base can have different structures and thereby
divides PAs into several structural groups, e.g. otonecine, platynecine,
and retronecine. The structural groups of the necic acid are macrocyclic
diester, open-ring diester and monoester [Langel et al.
2011](#_ENREF_76)().

PA are mainly metabolised in the liver, which is at the same time the
main target organ of toxicity [Bull & Dick 1959](#_ENREF_17)[Bull et al.
1958](#_ENREF_18)[Butler et al. 1970](#_ENREF_20)[DeLeve et al.
1996](#_ENREF_33)[Jago 1971](#_ENREF_65)[Li et al.
2011](#_ENREF_78)[Neumann et al. 2015](#_ENREF_99)(; ; ; ; ; ; ). There
are three principal metabolic pathways for 1,2-unsaturated PAs [Chen et
al. 2010](#_ENREF_26)(): (i) Detoxification by hydrolysis: the ester
bond on positions C7 and C9 are hydrolysed by non-specific esterases to
release necine base and necic acid, which are then subjected to further
phase II-conjugation and excretion. (ii) Detoxification by *N*-oxidation
of the necine base (only possible for retronecine-type PAs): the
nitrogen is oxidised to form a PA *N*-oxides, which can be conjugated by
phase II enzymes e.g. glutathione and then excreted. PA *N*-oxides can
be converted back into the corresponding parent PA [Wang et al.
2005](#_ENREF_134)(). (iii) Metabolic activation or toxification: PAs
are metabolic activated/ toxified by oxidation (for retronecine-type
PAs) or oxidative *N*-demethylation (for otonecine-type PAs [Lin
1998](#_ENREF_82)()). This pathway is mainly catalysed by cytochrome
P450 isoforms CYP2B and 3A [Ruan et al. 2014b](#_ENREF_115)(), and
results in the formation of dehydropyrrolizidines (DHP, also known as
pyrrolic ester or reactive pyrroles). DHPs are highly reactive and cause
damage in the cells where they are formed, usually hepatocytes. However,
they can also pass from the hepatocytes into the adjacent sinusoids and
damage the endothelial lining cells [Gao et al. 2015](#_ENREF_48)()
predominantly by reaction with protein, lipids and DNA. There is even
evidence, that conjugation of DHP to glutathione, which would generally
be considered a detoxification step, could result in reactive
metabolites, which might also lead to DNA adduct formation [Xia et al.
2015](#_ENREF_138)(). Due to the ability to form DNA adducts, DNA
crosslinks and DNA breaks 1,2-unsaturated PAs are generally considered
genotoxic and carcinogenic [Chen et al. 2010](#_ENREF_26)[EFSA
2011](#_ENREF_36)[Fu et al. 2004](#_ENREF_45)[Li et al.
2011](#_ENREF_78)[Takanashi et al. 1980](#_ENREF_126)[Yan et al.
2008](#_ENREF_140)[Zhao et al. 2012](#_ENREF_148)(; ; ; ; ; ; ). Still,
there is no evidence yet that PAs are carcinogenic in humans [ANZFA
2001](#_ENREF_4)[EMA 2016](#_ENREF_39)(; ). One general limitation of
studies with PAs is the number of different PAs investigated. Around 30
PAs are currently commercially available, therefore all studies focus on
these PAs. This is also true for *in vitro* and *in vivo* tests on
mutagenicity and genotoxicity. To gain a wider perspective, in this
study over 600 different PAs were assessed on their mutagenic potential
using four different machine learning techniques.
--->

<!---

Mutagenicity datasets
Algorithms
descriptors
define abbreviations
pyrrolizidine 
--->

The main objectives of this study were

  - to generate a new mutagenicity training dataset, by combining the most comprehensive public datasets
  - to compare the performance of MolPrint2D (*MP2D*) fingerprints with Chemistry Development Kit (*CDK*) descriptors
  - to compare the performance of global QSAR models (random forests (*RF*), support vector machines (*SVM*), logistic regression (*LR*), neural nets (*NN*)) with local models (`lazar`)
  - to apply these models for the prediction of pyrrolizidine alkaloid mutagenicity

Materials and Methods
=====================

Data
----

### Mutagenicity training data

An identical training dataset was used for all models. The
training dataset was compiled from the following sources:

-   Kazius/Bursi Dataset (4337 compounds, @Kazius2005): <http://cheminformatics.org/datasets/bursi/cas_4337.zip>

-   Hansen Dataset (6513 compounds, @Hansen2009): <http://doc.ml.tu-berlin.de/toxbenchmark/Mutagenicity_N6512.csv>

-   EFSA Dataset (695 compounds @EFSA2016): <https://data.europa.eu/euodp/data/storage/f/2017-0719T142131/GENOTOX%20data%20and%20dictionary.xls>

Mutagenicity classifications from the Kazius and Hansen datasets were used
without further processing. To achieve consistency with these
datasets, EFSA compounds were classified as mutagenic if at least one
positive result was found for the TA98 or TA100 *Salmonella* strains.

Dataset merges were based on unique SMILES (*Simplified Molecular Input
Line Entry Specification*) strings of the compound structures.
Duplicate experimental data with the same outcome were merged into a
single value, because they most likely originated from the same
experiment. Contradictory results were kept as multiple measurements in
the database. The combined training dataset contains {{cv.n_uniq}} unique
structures and {{cv.n}} individual measurements.
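The merge logic described above can be sketched in a few lines (a minimal illustration; the `(smiles, outcome)` records are made up and do not come from the actual dataset):

```python
# Toy measurements as (smiles, outcome) pairs; hypothetical data
measurements = [
    ("CCO", 0), ("CCO", 0),              # concordant duplicate -> merge
    ("Nc1ccccc1", 1), ("Nc1ccccc1", 0),  # contradictory -> keep both
]

# Concordant duplicates collapse to a single value; contradictory
# results are kept as distinct (structure, outcome) measurements
merged = sorted(set(measurements))

assert len(merged) == 3                            # individual measurements
assert len({smiles for smiles, _ in merged}) == 2  # unique structures
```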

Source code for all data download, extraction and merge operations is publicly
available from the git repository <https://git.in-silico.ch/mutagenicity-paper>
under a GPL3 License. The new combined dataset can be found at
<https://git.in-silico.ch/mutagenicity-paper/tree/mutagenicity/mutagenicity.csv>.

### Pyrrolizidine alkaloid (PA) dataset

The pyrrolizidine alkaloid dataset was created from five independent necine
base substructure searches in PubChem (<https://pubchem.ncbi.nlm.nih.gov/>) and
compared to the PAs listed in the EFSA publication @EFSA2011 and the book by
Mattocks @Mattocks1986, to ensure that all major PAs were included. PAs
mentioned in these publications which were not found in the downloaded
substances were searched individually in PubChem and, if available, downloaded
separately.  Non-PA substances, duplicates, and isomers were removed from the
files, but artificial PAs, even if unlikely to occur in nature, were kept. The
resulting PA dataset comprised a total of {{pa.n}} different PAs.

The PAs in the dataset were classified according to structural features. A
total of 9 different structural features were assigned to the necine base, to
modifications of the necine base, and to the necic acid:

For the necine base, the following structural features were chosen:

  - Retronecine-type (1,2-unsaturated necine base, {{pa.groups.Retronecine.n}} compounds)
  - Otonecine-type (1,2-unsaturated necine base, {{pa.groups.Otonecine.n}} compounds)
  - Platynecine-type (1,2-saturated necine base, {{pa.groups.Platynecine.n}} compounds)

For the modifications of the necine base, the following structural features were chosen:

  - N-oxide-type ({{pa.groups.N_oxide.n}} compounds)
  - Tertiary-type (PAs which were neither from the N-oxide- nor DHP-type, {{pa.groups.Tertiary_PA.n}} compounds)
  - Dehydropyrrolizidine-type (pyrrolic ester, {{pa.groups.Dehydropyrrolizidine.n}} compounds)

For the necic acid, the following structural features were chosen:

  - Monoester-type ({{pa.groups.Monoester.n}} compounds)
  - Open-ring diester-type ({{pa.groups.Diester.n}} compounds)
  - Macrocyclic diester-type ({{pa.groups.Macrocyclic_diester.n}} compounds)

The compilation of the PA dataset is described in detail in @Schoening2017.

Descriptors
-----------

### MolPrint2D (*MP2D*) fingerprints

MolPrint2D fingerprints (@OBoyle2011a) use atom environments as the molecular
representation. For each atom in a molecule, they record the atom types of its
connected atoms to represent its chemical environment. This basically resembles
the chemical concept of functional groups.

In contrast to predefined lists of fragments (e.g. FP3, FP4 or MACCS
fingerprints) or descriptors (e.g. CDK) they are generated dynamically from
chemical structures. This has the advantage that they can capture unknown
substructures of toxicological relevance that are not included in other
descriptors. In addition, they allow the efficient calculation of
chemical similarities (e.g. Tanimoto indices) with simple set operations.
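The set-based similarity calculation can be illustrated with a short sketch (the atom-environment strings below are made up for illustration; real MP2D environments are generated by OpenBabel):

```python
# MolPrint2D fingerprints are sets of atom-environment strings
# (hypothetical environments, for illustration only)
fp_a = {"1;1-1-2;", "2;1-1-1;", "2;1-2-3;"}
fp_b = {"1;1-1-2;", "2;1-1-1;", "3;1-3-3;"}

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto index: |A n B| / |A u B| as a simple set operation."""
    return len(a & b) / len(a | b)

assert tanimoto(fp_a, fp_b) == 0.5  # 2 shared environments / 4 total
assert tanimoto(fp_a, fp_a) == 1.0
```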

MolPrint2D fingerprints were calculated with the OpenBabel cheminformatics
library (@OBoyle2011a). They can be obtained from the following locations:

*Training data:*

  - sparse representation (<https://git.in-silico.ch/mutagenicity-paper/tree/mutagenicity/mp2d/fingerprints.mp2d>)
  - descriptor matrix (<https://git.in-silico.ch/mutagenicity-paper/tree/mutagenicity/mp2d/mutagenicity-fingerprints.csv.gz>)

*Pyrrolizidine alkaloids:*

  - sparse representation (<https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/mp2d/fingerprints.mp2d>)
  - descriptor matrix (<https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/mp2d/pa-fingerprints.csv.gz>)

<!--
Using them as descriptors for global
models leads however to huge, sparsely populated matrices that cannot be
handled with traditional machine learning algorithms. In our experiments none
of the R and Tensorflow algorithms was capable to use them as descriptors.
-->

### Chemistry Development Kit (*CDK*) descriptors

Molecular 1D and 2D descriptors were calculated with the PaDEL-Descriptors
program (<http://www.yapcwsoft.com> version 2.21, @Yap2011). PaDEL uses the
Chemistry Development Kit (*CDK*, <https://cdk.github.io/index.html>) library
for descriptor calculations.

As the training dataset contained {{cv.n_uniq}} instances, it was decided to
delete instances with missing values during data pre-processing. Furthermore,
substances with equivocal outcome were removed. The final training dataset
contained {{cv.cdk.n_descriptors}} descriptors for {{cv.cdk.n_compounds}}
compounds.
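A minimal sketch of this pre-processing step (descriptor names, values and the representation of equivocal outcomes are hypothetical):

```python
# Hypothetical instances as ({descriptor: value}, outcome) pairs
instances = [
    ({"nAtom": 10, "ALogP": 1.2},  "mutagenic"),
    ({"nAtom": 12, "ALogP": None}, "non-mutagenic"),  # missing value -> drop
    ({"nAtom": 8,  "ALogP": 0.7},  "equivocal"),      # equivocal outcome -> drop
    ({"nAtom": 9,  "ALogP": 0.3},  "non-mutagenic"),
]

# Delete instances with missing descriptor values or equivocal outcomes
clean = [(desc, outcome) for desc, outcome in instances
         if None not in desc.values() and outcome != "equivocal"]

assert len(clean) == 2
```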

CDK training data can be obtained from <https://git.in-silico.ch/mutagenicity-paper/tree/mutagenicity/cdk/mutagenicity-mod-2.new.csv>.

The same procedure was applied for the pyrrolizidine dataset, yielding
{{pa.cdk.n_descriptors}} descriptors for {{pa.cdk.n_compounds}} compounds. CDK
features for pyrrolizidine alkaloids are available at
<https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/cdk/PA-Padel-2D_m2.csv>.

<!--
During feature selection, descriptors with near zero variance were removed
using '*NearZeroVar*'-function (package 'caret'). If the percentage of the most
common value was more than 90% or when the frequency ratio of the most common
value to the second most common value was greater than 95:5 (e.g. 95 instances
of the most common value and only 5 or less instances of the second most common
value), a descriptor was classified as having a near zero variance. After that,
highly correlated descriptors were removed using the
'*findCorrelation*'-function (package 'caret') with a cut-off of 0.9. This
resulted in a training dataset with 516 descriptors. These descriptors were
scaled to be in the range between 0 and 1 using the '*preProcess*'-function
(package 'caret'). The scaling routine was saved in order to apply the same
scaling on the testing dataset. As these three steps did not consider the
dependent variable (experimental mutagenicity), it was decided that they do not
need to be included in the cross-validation of the model. To further reduce the
number of features, a LASSO (*least absolute shrinkage and selection operator*)
regression was performed using the '*glmnet*'-function (package '*glmnet*').
The reduced dataset was used for the generation of the pre-trained models.

CDK descriptors were used in global (RF, SVM, LR, NN) and local (`lazar`) models.
-->

Algorithms
----------

### `lazar`

`lazar` (*lazy structure activity relationships*) is a modular framework
for read-across model development and validation. It follows this
basic workflow: for a given chemical structure, `lazar`:

-   searches in a database for similar structures (neighbours) with
    experimental data,

-   builds a local QSAR model with these neighbours and

-   uses this model to predict the unknown activity of the query
    compound.

This procedure resembles an automated version of read-across predictions
in toxicology; in machine learning terms it would be classified as a
k-nearest-neighbour algorithm.

Apart from this basic workflow, `lazar` is completely modular and allows
the researcher to use arbitrary algorithms for similarity searches and local
QSAR (*Quantitative structure--activity relationship*) modelling.
Algorithms used within this study are described in the following
sections.

#### Neighbour identification

Utilizing this modularity, similarity calculations were based both on
MolPrint2D fingerprints and on CDK descriptors.

For MolPrint2D fingerprints chemical similarity between two compounds $a$ and
$b$ is expressed as the proportion between atom environments common in both
structures $A \cap B$ and the total number of atom environments $A \cup B$
(Jaccard/Tanimoto index).

$$sim = \frac{\lvert A\  \cap B \rvert}{\lvert A\  \cup B \rvert}$$

For CDK descriptors chemical similarity between two compounds $a$ and $b$ is
expressed as the cosine similarity between the descriptor vectors $A$ for $a$
and $B$ for $b$.

$$sim = \frac{A \cdot B}{\lvert A \rvert \lvert B \rvert}$$
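The cosine similarity reduces to a few lines of code; a sketch with hypothetical scaled descriptor vectors (not actual CDK values):

```python
import math

def cosine_similarity(a, b):
    """sim = (A . B) / (|A| |B|) for two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical scaled CDK descriptor vectors for two compounds
A = [0.2, 0.8, 0.5]
B = [0.1, 0.9, 0.4]

assert abs(cosine_similarity(A, A) - 1.0) < 1e-9  # identical vectors
assert 0.9 < cosine_similarity(A, B) < 1.0        # similar vectors
```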


Threshold selection is a trade-off between prediction accuracy (high
threshold) and the number of predictable compounds (low threshold). As
it is in many practical cases desirable to make predictions even in the
absence of closely related neighbours, we follow a tiered approach:

-   First a similarity threshold of 0.5 is used to collect neighbours,
    to create a local QSAR model and to make a prediction for the query
    compound. These are predictions with *high confidence*.

-   If any of these steps fails, the procedure is repeated with a
    similarity threshold of 0.2 and the prediction is flagged with a
    warning that it might be out of the applicability domain of the
    training data (*low confidence*).

-   Similarity thresholds of 0.5 and 0.2 are the default values chosen
    by the software developers and remained unchanged during the
    course of these experiments.

Compounds with the same structure as the query structure are
automatically eliminated from neighbours to obtain unbiased predictions
in the presence of duplicates.
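The tiered neighbour search can be sketched as follows (a simplified illustration of the workflow described above; the fingerprint-set data layout is an assumption, not the actual `lazar` implementation):

```python
def tanimoto(a, b):
    """Jaccard/Tanimoto index on fingerprint sets."""
    return len(a & b) / len(a | b)

def collect_neighbours(query_fp, training, thresholds=(0.5, 0.2)):
    """Return (neighbours, confidence): try the 0.5 threshold first,
    then fall back to 0.2 and flag the result as low confidence."""
    for threshold in thresholds:
        neighbours = [
            (activity, tanimoto(query_fp, fp))
            for fp, activity in training
            if fp != query_fp                        # exclude identical structures
            and tanimoto(query_fp, fp) >= threshold
        ]
        if neighbours:
            confidence = "high" if threshold == thresholds[0] else "low"
            return neighbours, confidence
    return [], None  # no neighbours: outside the applicability domain

# Hypothetical training data as (fingerprint set, activity) pairs
training = [({"a", "b", "c"}, 1), ({"a", "b", "d"}, 0), ({"x", "y"}, 1)]
neighbours, confidence = collect_neighbours({"a", "b", "c"}, training)
assert confidence == "high" and len(neighbours) == 1
```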

#### Local QSAR models and predictions

Only similar compounds (neighbours) above the threshold are used for
local QSAR models. In this investigation, we are using a weighted
majority vote from the neighbour's experimental data for mutagenicity
classifications. Probabilities for both classes (mutagenic/non-mutagenic) are
calculated according to the following formula and the class with the higher
probability is used as prediction outcome.

$$p_{c} = \frac{\sum\text{sim}_{n,c}}{\sum\text{sim}_{n}}$$

$p_{c}$: probability of class $c$ (e.g. mutagenic or non-mutagenic)\
$\sum\text{sim}_{n,c}$: sum of similarities of neighbours with
class $c$\
$\sum\text{sim}_{n}$: sum of similarities of all neighbours
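A minimal sketch of this weighted majority vote (the neighbour classes and similarity values are hypothetical):

```python
def weighted_vote(neighbours):
    """p_c = (sum of similarities of neighbours with class c) /
             (sum of similarities of all neighbours)"""
    total = sum(sim for _, sim in neighbours)
    probabilities = {}
    for cls, sim in neighbours:
        probabilities[cls] = probabilities.get(cls, 0.0) + sim / total
    # the class with the higher probability is the prediction outcome
    prediction = max(probabilities, key=probabilities.get)
    return prediction, probabilities

# Hypothetical neighbours as (class, similarity) pairs
neighbours = [("mutagenic", 0.8), ("mutagenic", 0.6), ("non-mutagenic", 0.5)]
prediction, probabilities = weighted_vote(neighbours)

assert prediction == "mutagenic"
assert abs(probabilities["mutagenic"] - 1.4 / 1.9) < 1e-9
```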

#### Applicability domain

The applicability domain (AD) of `lazar` models is determined by the
structural diversity of the training data. If no similar compounds are
found in the training data no predictions will be generated. Warnings are
issued if the similarity threshold had to be lowered from 0.5 to 0.2 in order
to enable predictions. Predictions without warnings can be considered as close
to the applicability domain (*high confidence*) and predictions with warnings
as more distant from the applicability domain (*low confidence*). Quantitative
applicability domain information can be obtained from the similarities of
individual neighbours.

#### Availability

  - Source code for this manuscript (GPL3):
    <https://git.in-silico.ch/lazar/tree/?h=mutagenicity-paper>
  
  - Crossvalidation experiments (GPL3):
    <https://git.in-silico.ch/lazar/tree/models/?h=mutagenicity-paper>
  
  - Pyrrolizidine alkaloid predictions (GPL3):
    <https://git.in-silico.ch/lazar/tree/predictions/?h=mutagenicity-paper>
  
  - Public web interface:
    <https://lazar.in-silico.ch>

<!--
### R Random Forest, Support Vector Machines, and Deep Learning

The RF, SVM, and DL models were generated using the R
software (R-project for Statistical Computing,
<https://www.r-project.org/>*;* version 3.3.1), specific R packages used
are identified for each step in the description below. 

#### Random Forest (*RF*)

For the RF model, the '*randomForest*'-function (package
'*randomForest*') was used. A forest with 1000 trees with maximal
terminal nodes of 200 was grown for the prediction.

#### Support Vector Machines (*SVM*)

The '*svm*'-function (package 'e1071') with a *radial basis function
kernel* was used for the SVM model.

**TODO**: **Verena, Phillip** Sollen wir die DL Modelle ebenso wie die Tensorflow als Neural Nets (NN) bezeichnen?

#### Deep Learning

The DL model was generated using the '*h2o.deeplearning*'-function
(package '*h2o*'). The DL contained four hidden layer with 70, 50, 50,
and 10 neurons, respectively. Other hyperparameter were set as follows:
l1=1.0E-7, l2=1.0E-11, epsilon = 1.0E-10, rho = 0.8, and quantile\_alpha
= 0.5. For all other hyperparameter, the default values were used.
Weights and biases were in a first step determined with an unsupervised
DL model. These values were then used for the actual, supervised DL
model.

To validate these models, an internal cross-validation approach was
chosen. The training dataset was randomly split in training data, which
contained 95% of the data, and validation data, which contain 5% of the
data. A feature selection with LASSO on the training data was performed,
reducing the number of descriptors to approximately 100. This step was
repeated five times. Based on each of the five different training data,
the predictive models were trained and the performance tested with the
validation data. This step was repeated 10 times. 

![Flowchart of the generation and validation of the models generated in R-project](figures/image1.png){#fig:valid}

#### Applicability domain

**TODO**: **Verena**: Mit welchen Deskriptoren hast Du den Jaccard index berechnet?  Fuer den Jaccard index braucht man binaere Deskriptoren (zB MP2D), mit PaDEL Deskriptoren koennte man zB eine euklidische oder cosinus Distanz berechnen.

The AD of the training dataset and the PA dataset was evaluated using
the Jaccard distance. A Jaccard distance of '0' indicates that the
substances are similar, whereas a value of '1' shows that the substances
are different. The Jaccard distance was below 0.2 for all PAs relative
to the training dataset. Therefore, PA dataset is within the AD of the
training dataset and the models can be used to predict the genotoxic
potential of the PA dataset.

#### Availability

R scripts for these experiments can be found in https://git.in-silico.ch/mutagenicity-paper/tree/scripts/R.
-->

### Tensorflow models

**TODO**: **Philipp** Kannst Du bitte die folgenden Absaetze ergaenzen und die Vorgangsweise fuer MP2D/CDK bzw CV/PA Vorhersagen beschreiben.

<!--
Data pre-processing was done by rank transformation using the
'*QuantileTransformer*' procedure. A sequential model has been used.
Four layers have been used: input layer, two hidden layers (with 12, 8
and 8 nodes, respectively) and one output layer. For the output layer, a
sigmoidal activation function and for all other layers the ReLU
('*Rectified Linear Unit*') activation function was used. Additionally,
a L^2^-penalty of 0.001 was used for the input layer. For training of
the model, the ADAM algorithm was used to minimise the cross-entropy
loss using the default parameters of Keras. Training was performed for
100 epochs with a batch size of 64. The model was implemented with
Python 3.6 and Keras. 

**TODO**: **Philipp** Ich hab die alten Ergebnisse mit feature selection weggelassen, ist das ok? Dann muesste auch dieser Absatz gestrichen werden, oder?

Alternatively, a DL model was established with Python-based Tensorflow
program (<https://www.tensorflow.org/>) using the high-level API Keras
(<https://www.tensorflow.org/guide/keras>) to build the models. 

Tensorflow models used the same CDK descriptors as the R models.
-->

#### Random forests (*RF*)

#### Logistic regression (SGD) (*LR-sgd*)

#### Logistic regression (scikit) (*LR-scikit*)

#### Neural Nets (*NN*)

#### Support vector machines (*SVM*)

Validation
----------

10-fold cross-validation was used for all Tensorflow models.
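The fold assignment underlying such a scheme can be sketched in a few lines (a generic illustration, not the actual Tensorflow validation code; in practice the indices are shuffled before partitioning):

```python
def kfold_indices(n, k=10):
    """Partition indices 0..n-1 into k disjoint folds of near-equal size;
    each fold serves once as test set while the rest form the training set."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(103, k=10)
assert len(folds) == 10                                      # 10 folds
assert sum(len(f) for f in folds) == 103                     # all instances used
assert sorted(i for f in folds for i in f) == list(range(103))  # disjoint cover
```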

#### Availability

Jupyter notebooks for these experiments can be found at the following locations

*Crossvalidation:*

  - MolPrint2D fingerprints: <https://git.in-silico.ch/mutagenicity-paper/tree/crossvalidations/mp2d/tensorflow>
  - CDK descriptors: <https://git.in-silico.ch/mutagenicity-paper/tree/crossvalidations/cdk/tensorflow>

*Pyrrolizidine alkaloids:*

  - MolPrint2D fingerprints: <https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/mp2d/tensorflow>
  - CDK descriptors: <https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/cdk/tensorflow>

Results
=======

10-fold crossvalidations
------------------------

Crossvalidation results are summarized in the following tables: @tbl:cv-mp2d
shows results with MolPrint2D descriptors and @tbl:cv-cdk with CDK descriptors.

|  | lazar-HC | lazar-all | RF | LR-sgd | LR-scikit | NN | SVM |
|:-|----------|-----------|----|--------|-----------|----|-----|
Accuracy | {{cv.mp2d_lazar_high_confidence.acc_perc}} | {{cv.mp2d_lazar_all.acc_perc}} | {{cv.mp2d_rf.acc_perc}} | {{cv.mp2d_lr.acc_perc}} | {{cv.mp2d_lr2.acc_perc}} | {{cv.mp2d_nn.acc_perc}} | {{cv.mp2d_svm.acc_perc}} |
True positive rate | {{cv.mp2d_lazar_high_confidence.tpr_perc}} | {{cv.mp2d_lazar_all.tpr_perc}} | {{cv.mp2d_rf.tpr_perc}} | {{cv.mp2d_lr.tpr_perc}} | {{cv.mp2d_lr2.tpr_perc}} | {{cv.mp2d_nn.tpr_perc}} | {{cv.mp2d_svm.tpr_perc}} |
True negative rate | {{cv.mp2d_lazar_high_confidence.tnr_perc}} | {{cv.mp2d_lazar_all.tnr_perc}} | {{cv.mp2d_rf.tnr_perc}} | {{cv.mp2d_lr.tnr_perc}} | {{cv.mp2d_lr2.tnr_perc}} | {{cv.mp2d_nn.tnr_perc}} | {{cv.mp2d_svm.tnr_perc}} |
Positive predictive value | {{cv.mp2d_lazar_high_confidence.ppv_perc}} | {{cv.mp2d_lazar_all.ppv_perc}} | {{cv.mp2d_rf.ppv_perc}} | {{cv.mp2d_lr.ppv_perc}} | {{cv.mp2d_lr2.ppv_perc}} | {{cv.mp2d_nn.ppv_perc}} | {{cv.mp2d_svm.ppv_perc}} |
Negative predictive value | {{cv.mp2d_lazar_high_confidence.npv_perc}} | {{cv.mp2d_lazar_all.npv_perc}} | {{cv.mp2d_rf.npv_perc}} | {{cv.mp2d_lr.npv_perc}} | {{cv.mp2d_lr2.npv_perc}} | {{cv.mp2d_nn.npv_perc}} | {{cv.mp2d_svm.npv_perc}} |
Nr. predictions | {{cv.mp2d_lazar_high_confidence.n}} | {{cv.mp2d_lazar_all.n}} | {{cv.mp2d_rf.n}} | {{cv.mp2d_lr.n}} | {{cv.mp2d_lr2.n}} | {{cv.mp2d_nn.n}} | {{cv.mp2d_svm.n}} |

: Summary of crossvalidation results with MolPrint2D descriptors (lazar-HC: lazar with high confidence, lazar-all: all lazar predictions, RF: random forests, LR-sgd: logistic regression (stochastic gradient descent), LR-scikit: logistic regression (scikit), NN: neural networks, SVM: support vector machines) {#tbl:cv-mp2d}


|  | lazar-HC | lazar-all | RF | LR-sgd | LR-scikit | NN | SVM |
|:-|----------|-----------|----|--------|-----------|----|-----|
Accuracy | {{cv.cdk_lazar_high_confidence.acc_perc}} | {{cv.cdk_lazar_all.acc_perc}} | {{cv.cdk_rf.acc_perc}} | {{cv.cdk_lr.acc_perc}} | {{cv.cdk_lr2.acc_perc}} | {{cv.cdk_nn.acc_perc}} | {{cv.cdk_svm.acc_perc}} |
True positive rate | {{cv.cdk_lazar_high_confidence.tpr_perc}} | {{cv.cdk_lazar_all.tpr_perc}} | {{cv.cdk_rf.tpr_perc}} | {{cv.cdk_lr.tpr_perc}} | {{cv.cdk_lr2.tpr_perc}} | {{cv.cdk_nn.tpr_perc}} | {{cv.cdk_svm.tpr_perc}} |
True negative rate | {{cv.cdk_lazar_high_confidence.tnr_perc}} | {{cv.cdk_lazar_all.tnr_perc}} | {{cv.cdk_rf.tnr_perc}} | {{cv.cdk_lr.tnr_perc}} | {{cv.cdk_lr2.tnr_perc}} | {{cv.cdk_nn.tnr_perc}} | {{cv.cdk_svm.tnr_perc}} |
Positive predictive value | {{cv.cdk_lazar_high_confidence.ppv_perc}} | {{cv.cdk_lazar_all.ppv_perc}} | {{cv.cdk_rf.ppv_perc}} | {{cv.cdk_lr.ppv_perc}} | {{cv.cdk_lr2.ppv_perc}} | {{cv.cdk_nn.ppv_perc}} | {{cv.cdk_svm.ppv_perc}} |
Negative predictive value | {{cv.cdk_lazar_high_confidence.npv_perc}} | {{cv.cdk_lazar_all.npv_perc}} | {{cv.cdk_rf.npv_perc}} | {{cv.cdk_lr.npv_perc}} | {{cv.cdk_lr2.npv_perc}} | {{cv.cdk_nn.npv_perc}} | {{cv.cdk_svm.npv_perc}} |
Nr. predictions | {{cv.cdk_lazar_high_confidence.n}} | {{cv.cdk_lazar_all.n}} | {{cv.cdk_rf.n}} | {{cv.cdk_lr.n}} | {{cv.cdk_lr2.n}} | {{cv.cdk_nn.n}} | {{cv.cdk_svm.n}} |

: Summary of crossvalidation results with CDK descriptors (lazar-HC: lazar with high confidence, lazar-all: all lazar predictions, RF: random forests, LR-sgd: logistic regression (stochastic gradient descent), LR-scikit: logistic regression (scikit), NN: neural networks, SVM: support vector machines) {#tbl:cv-cdk}

@fig:roc depicts the position of all crossvalidation results in receiver operating characteristic (ROC) space.

![ROC plot of crossvalidation results (lazar-HC: lazar with high confidence, lazar-all: all lazar predictions, RF: random forests, LR-sgd: logistic regression (stochastic gradient descent), LR-scikit: logistic regression (scikit), NN: neural networks, SVM: support vector machines).](figures/roc.png){#fig:roc}

Confusion matrices for all models are available from the git repository
<https://git.in-silico.ch/mutagenicity-paper/tree/crossvalidations/confusion-matrices/>;
individual predictions can be found at
<https://git.in-silico.ch/mutagenicity-paper/tree/crossvalidations/predictions/>.

With the exception of lazar/CDK, all investigated algorithm/descriptor
combinations give accuracies between 80 and 85%, which is equivalent to the
experimental variability of the *Salmonella typhimurium* mutagenicity bioassay
(80-85%, @Benigni1988). Sensitivities and specificities are balanced in all of
these models.
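The statistics reported in the tables above all derive from a binary confusion matrix. As a minimal sketch (not the paper's validation code; the counts are hypothetical), they can be computed as:

```python
# Deriving the reported summary statistics from a binary confusion matrix.
# The counts below are made up for illustration only.

def summary_stats(tp, fp, tn, fn):
    """Return accuracy, sensitivity (TPR), specificity (TNR), PPV and NPV."""
    n = tp + fp + tn + fn
    return {
        "acc": (tp + tn) / n,   # accuracy
        "tpr": tp / (tp + fn),  # true positive rate (sensitivity)
        "tnr": tn / (tn + fp),  # true negative rate (specificity)
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
        "n": n,                 # number of predictions
    }

stats = summary_stats(tp=400, fp=100, tn=380, fn=120)
print(stats["acc"])  # (400 + 380) / 1000 = 0.78
```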

<!--
The most accurate crossvalidation predictions have been obtained with standard
`lazar` models using MolPrint2D descriptors ({{cv.lazar-high-confidence.acc}}
for predictions with high confidence, {{cv.lazar-all.acc}} for all
predictions). Models utilizing CDK descriptors have generally lower
accuracies ranging from {{cv.R-DL.acc}} (R deep learning) to {{cv.R-RF.acc}}
(R/Tensorflow random forests). Sensitivity and specificity is generally well
balanced with the exception of `lazar`-CDK (low sensitivity) and R deep
learning (low specificity) models.
-->

Pyrrolizidine alkaloid mutagenicity predictions 
-----------------------------------------------

Mutagenicity predictions from all investigated models for {{pa.n}}
pyrrolizidine alkaloids (PAs) can be downloaded from
<https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/pa-predictions.csv>.
A visual representation of all PA predictions can be found at
<https://git.in-silico.ch/mutagenicity-paper/tree/pyrrolizidine-alkaloids/pa-predictions.pdf>.

@tbl:pa-mp2d and @tbl:pa-cdk summarise the outcome of pyrrolizidine alkaloid predictions from all models with MolPrint2D and CDK descriptors.

| Model  | mutagenic | non-mutagenic | Nr. predictions |
|-------:|-----------|---------------|-----------------|
| lazar-all | {{pa.mp2d_lazar_all.mut_perc}}% ({{pa.mp2d_lazar_all.mut}}) | {{pa.mp2d_lazar_all.non_mut_perc}}% ({{pa.mp2d_lazar_all.non_mut}}) | {{pa.mp2d_lazar_all.n_perc}}% ({{pa.mp2d_lazar_all.n}}) |
| lazar-HC | {{pa.mp2d_lazar_high_confidence.mut_perc}}% ({{pa.mp2d_lazar_high_confidence.mut}}) | {{pa.mp2d_lazar_high_confidence.non_mut_perc}}% ({{pa.mp2d_lazar_high_confidence.non_mut}}) | {{pa.mp2d_lazar_high_confidence.n_perc}}% ({{pa.mp2d_lazar_high_confidence.n}}) |
| RF | {{pa.mp2d_rf.mut_perc}}% ({{pa.mp2d_rf.mut}}) | {{pa.mp2d_rf.non_mut_perc}}% ({{pa.mp2d_rf.non_mut}}) | {{pa.mp2d_rf.n_perc}}% ({{pa.mp2d_rf.n}}) |
| LR-sgd | {{pa.mp2d_lr.mut_perc}}% ({{pa.mp2d_lr.mut}}) | {{pa.mp2d_lr.non_mut_perc}}% ({{pa.mp2d_lr.non_mut}}) | {{pa.mp2d_lr.n_perc}}% ({{pa.mp2d_lr.n}}) |
| LR-scikit | {{pa.mp2d_lr2.mut_perc}}% ({{pa.mp2d_lr2.mut}}) | {{pa.mp2d_lr2.non_mut_perc}}% ({{pa.mp2d_lr2.non_mut}}) | {{pa.mp2d_lr2.n_perc}}% ({{pa.mp2d_lr2.n}}) |
| NN | {{pa.mp2d_nn.mut_perc}}% ({{pa.mp2d_nn.mut}}) | {{pa.mp2d_nn.non_mut_perc}}% ({{pa.mp2d_nn.non_mut}}) | {{pa.mp2d_nn.n_perc}}% ({{pa.mp2d_nn.n}}) |
| SVM | {{pa.mp2d_svm.mut_perc}}% ({{pa.mp2d_svm.mut}}) | {{pa.mp2d_svm.non_mut_perc}}% ({{pa.mp2d_svm.non_mut}}) | {{pa.mp2d_svm.n_perc}}% ({{pa.mp2d_svm.n}}) |

: Summary of MolPrint2D pyrrolizidine alkaloid predictions {#tbl:pa-mp2d}

| Model  | mutagenic | non-mutagenic | Nr. predictions |
|-------:|-----------|---------------|-----------------|
| lazar-all | {{pa.cdk_lazar_all.mut_perc}}% ({{pa.cdk_lazar_all.mut}}) | {{pa.cdk_lazar_all.non_mut_perc}}% ({{pa.cdk_lazar_all.non_mut}}) | {{pa.cdk_lazar_all.n_perc}}% ({{pa.cdk_lazar_all.n}}) |
| lazar-HC | {{pa.cdk_lazar_high_confidence.mut_perc}}% ({{pa.cdk_lazar_high_confidence.mut}}) | {{pa.cdk_lazar_high_confidence.non_mut_perc}}% ({{pa.cdk_lazar_high_confidence.non_mut}}) | {{pa.cdk_lazar_high_confidence.n_perc}}% ({{pa.cdk_lazar_high_confidence.n}}) |
| RF | {{pa.cdk_rf.mut_perc}}% ({{pa.cdk_rf.mut}}) | {{pa.cdk_rf.non_mut_perc}}% ({{pa.cdk_rf.non_mut}}) | {{pa.cdk_rf.n_perc}}% ({{pa.cdk_rf.n}}) |
| LR-sgd | {{pa.cdk_lr.mut_perc}}% ({{pa.cdk_lr.mut}}) | {{pa.cdk_lr.non_mut_perc}}% ({{pa.cdk_lr.non_mut}}) | {{pa.cdk_lr.n_perc}}% ({{pa.cdk_lr.n}}) |
| LR-scikit | {{pa.cdk_lr2.mut_perc}}% ({{pa.cdk_lr2.mut}}) | {{pa.cdk_lr2.non_mut_perc}}% ({{pa.cdk_lr2.non_mut}}) | {{pa.cdk_lr2.n_perc}}% ({{pa.cdk_lr2.n}}) |
| NN | {{pa.cdk_nn.mut_perc}}% ({{pa.cdk_nn.mut}}) | {{pa.cdk_nn.non_mut_perc}}% ({{pa.cdk_nn.non_mut}}) | {{pa.cdk_nn.n_perc}}% ({{pa.cdk_nn.n}}) |
| SVM | {{pa.cdk_svm.mut_perc}}% ({{pa.cdk_svm.mut}}) | {{pa.cdk_svm.non_mut_perc}}% ({{pa.cdk_svm.non_mut}}) | {{pa.cdk_svm.n_perc}}% ({{pa.cdk_svm.n}}) |

: Summary of CDK pyrrolizidine alkaloid predictions {#tbl:pa-cdk}

@fig:dhp - @fig:tert display the proportion of positive mutagenicity predictions from all models for the different pyrrolizidine alkaloid groups.

![Summary of Dehydropyrrolizidine predictions](figures/Dehydropyrrolizidine.png){#fig:dhp}

![Summary of Diester predictions](figures/Diester.png){#fig:die}

![Summary of Macrocyclic-diester predictions](figures/Macrocyclic.diester.png){#fig:mcdie}

![Summary of Monoester predictions](figures/Monoester.png){#fig:me}

![Summary of N-oxide predictions](figures/N.oxide.png){#fig:nox}

![Summary of Otonecine predictions](figures/Otonecine.png){#fig:oto}

![Summary of Platynecine predictions](figures/Platynecine.png){#fig:plat}

![Summary of Retronecine predictions](figures/Retronecine.png){#fig:ret}

![Summary of Tertiary PA predictions](figures/Tertiary.PA.png){#fig:tert}


For the visualisation of the position of pyrrolizidine alkaloids with respect
to the training data set we applied t-distributed stochastic neighbor
embedding (t-SNE, @Maaten2008) to MolPrint2D and CDK descriptors. t-SNE maps
each high-dimensional object (chemical) to a two-dimensional point while
preserving the neighbourhood structure of the high-dimensional space: similar
objects are represented by nearby points and dissimilar objects by distant
points.
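An embedding of this kind can be sketched with scikit-learn's t-SNE; this is an illustrative example (not the paper's code), with random binary fingerprints standing in for MolPrint2D descriptors and all parameter values chosen arbitrarily:

```python
# Illustrative t-SNE embedding of binary fingerprints with Jaccard distances,
# in the spirit of @fig:tsne-mp2d. Data and parameters are placeholders.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 60 hypothetical compounds as random 128-bit binary fingerprints
fingerprints = rng.integers(0, 2, size=(60, 128)).astype(bool)

embedding = TSNE(
    n_components=2,    # map to two dimensions for plotting
    metric="jaccard",  # Tanimoto/Jaccard distance on binary fingerprints
    init="random",     # PCA initialisation is not meaningful for boolean data
    perplexity=10,
    random_state=0,
).fit_transform(fingerprints)

print(embedding.shape)  # one 2D point per compound: (60, 2)
```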

@fig:tsne-mp2d shows the t-SNE of pyrrolizidine alkaloids (PA) and the mutagenicity training data in MP2D space (Tanimoto/Jaccard similarity).

![t-SNE visualisation of mutagenicity training data and pyrrolizidine alkaloids (PA)](figures/tsne-mp2d-mutagenicity.png){#fig:tsne-mp2d}

@fig:tsne-cdk shows the t-SNE of pyrrolizidine alkaloids (PA) and the mutagenicity training data in CDK space (Euclidean similarity).

![t-SNE visualisation of mutagenicity training data and pyrrolizidine alkaloids (PA)](figures/tsne-cdk-mutagenicity.png){#fig:tsne-cdk}

Discussion
==========

Data
----

A new training dataset for *Salmonella* mutagenicity was created from three
different sources (@Kazius2005, @Hansen2009, @EFSA2016). It contains {{cv.n_uniq}}
unique chemical structures, which is, to our knowledge, the largest public
mutagenicity dataset presently available. The new training data can be
downloaded from
<https://git.in-silico.ch/mutagenicity-paper/tree/mutagenicity/mutagenicity.csv>.

Algorithms
----------

`lazar` is formally a *k-nearest-neighbor* algorithm that searches for
structures similar to a given compound and calculates the prediction from the
experimental data for these structures. The QSAR literature frequently calls
such models *local models*, because a model is generated specifically for each
query compound. The investigated Tensorflow models are in contrast *global
models*, i.e. a single model is used to make predictions for all compounds. It
has been postulated in the past that local models are more accurate, because
they can better account for mechanisms that affect only a subset of the
training data.
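The local scheme described above can be sketched as a similarity-weighted nearest-neighbour vote. This is a minimal illustration, not the actual `lazar` implementation: the fingerprint sets, the 0.1 similarity cut-off, and all names are assumptions for this example.

```python
# Sketch of a local, similarity-weighted nearest-neighbour classifier in the
# spirit of lazar. Fingerprints are modelled as sets of MolPrint2D-style
# substructure keys; the cut-off and data are hypothetical.

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity of two fingerprint sets."""
    return len(a & b) / len(a | b)

def predict(query, training, min_sim=0.1):
    """Similarity-weighted vote of all training compounds above the cut-off.
    Returns None when no neighbours are found (outside applicability domain)."""
    votes = {True: 0.0, False: 0.0}
    for fingerprint, mutagenic in training:
        sim = tanimoto(query, fingerprint)
        if sim >= min_sim:
            votes[mutagenic] += sim
    if votes[True] == votes[False] == 0.0:
        return None  # abstain instead of guessing
    return votes[True] > votes[False]

training = [({"C-N", "C=O"}, True), ({"C-C", "C-O"}, False)]
print(predict({"C-N", "C=O", "C-C"}, training))  # True (closer to the mutagen)
```

Note the abstention branch: a local model can refuse to predict compounds without neighbours, which is how `lazar` delimits its applicability domain.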

@tbl:cv-mp2d, @tbl:cv-cdk and @fig:roc show that all models with the exception
of lazar-CDK have similar crossvalidation accuracies that are comparable to the
experimental variability of the *Salmonella typhimurium* mutagenicity bioassay
(80-85% according to @Benigni1988). All of these models have balanced
sensitivity (true positive rate) and specificity (true negative rate) and show
highly significant concordance with experimental data (as determined by
McNemar's test). This is a clear indication that *in-silico* predictions can be
as reliable as the bioassays. Given that the variability of experimental data
is similar to the model variability, it is impossible to decide which model
gives the most accurate predictions, as models with higher accuracies (e.g.
NN-CDK) might just approximate experimental errors better than more robust
models.
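McNemar's test compares paired predictions against experimental outcomes using only the two discordant cells of the paired table. A stdlib-only sketch (not the paper's analysis code; the counts are hypothetical):

```python
# Minimal McNemar's test with continuity correction, as commonly used to
# compare paired classification outcomes. Counts below are made up.
import math

def mcnemar(b, c):
    """b, c: counts of the two discordant cells (prediction and assay
    disagree in opposite directions). Returns chi-squared and two-sided p."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of the chi-squared distribution with 1 df
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = mcnemar(25, 5)   # strongly asymmetric disagreements
print(chi2, p)             # large chi2, p well below 0.05
```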

`lazar` with CDK descriptors is a notable exception, as it has a much lower
overall accuracy ({{lazar_all_cdk.acc}}) than all other models. `lazar` is
basically a k-nearest-neighbor algorithm (with variable k), and it seems that
CDK descriptors are not well suited for chemical similarity calculations. We
have confirmed this independently by validating k-nn models from the R `caret`
package, which also give sub-par accuracies (data not shown).

@fig:tsne-cdk is another indication that similarity calculations with CDK
descriptors are not as useful as fingerprint-based similarities, because it
shows a less clear separation between chemical classes and
mutagens/non-mutagens than @fig:tsne-mp2d. It seems that models more complex
than simple k-nn are required to utilize CDK descriptors efficiently.

Our results do not support the assumption that local models are superior to
global models for classification purposes. For regression models (lowest
observed effect level), however, we have found that local models may outperform
global models (@Helma2018), with accuracies similar to experimental
variability.

<!--
@tbl:lazar, @tbl:R, @tbl:tensorflow and @fig:roc show that the standard `lazar` algorithm (with MP2D
fingerprints) give the most accurate crossvalidation results. R Random Forests,
Support Vector Machines and Tensorflow models have similar accuracies with
balanced sensitivity (true position rate) and specificity (true negative rate).
`lazar` models with CDK descriptors have low sensitivity and R Deep Learning
models have low specificity.

The accuracy of `lazar` *in-silico* predictions are comparable to the
interlaboratory variability of the Ames test (80-85% according to
@Benigni1988), especially for predictions with high confidence
({{cv.lazar-high-confidence.acc_perc}}%).

The lowest number of predictions ({{cv.lazar-padel-high-confidence.n}}) has been
obtained from `lazar`-CDK high confidence predictions, the largest number of
predictions comes from Tensorflow models ({{cv.tensorflow-rf.v3.n}}). Standard
`lazar` give a slightly lower number of predictions ({{cv.lazar-all.n}}) than R
and Tensorflow models. This is not necessarily a disadvantage, because `lazar`
abstains from predictions, if the query compound is very dissimilar from the
compounds in the training set and thus avoids to make predictions for compounds
out of the applicability domain. 
-->

Descriptors
-----------

This study uses two types of descriptors for the characterisation of chemical
structures:

*MolPrint2D* fingerprints (MP2D, @Bender2004) use atom environments (i.e. the
connected atom types of every atom in a molecule) as molecular representation,
which basically resembles the chemical concept of functional groups. MP2D
descriptors are used to determine chemical similarities in the default `lazar`
settings, and previous experiments have shown that they give more accurate
results than predefined fingerprints (e.g. MACCS, FP2-4).

<!--
In order to investigate, if MP2D fingerprints are also suitable for global
models we have tried to build R and Tensorflow models, both with and without
unsupervised feature selection. Unfortunately none of the algorithms was
capable to deal with the large and sparsely populated descriptor matrix.  Based
on this result we can conclude, that MolPrint2D descriptors are at the moment
unsuitable for standard global machine learning algorithms.

`lazar` does not suffer from the size and sparseness problem, because (a) it
utilizes internally a much more efficient occurrence based representation and
(b) it uses fingerprints only for similarity calculations and not as model
parameters.
-->

*Chemistry Development Kit* (CDK, @Willighagen2017) descriptors 
were calculated with the PaDEL graphical interface (@Yap2011). They include 
1D and 2D topological descriptors as well as physical-chemical properties.

With the exception of `lazar`, all investigated algorithms obtained models
within the experimental variability for both types of descriptors. As
discussed before, CDK descriptors seem to be less suitable for chemical
similarity calculations than MolPrint2D descriptors.

Given that similar predictive accuracies are obtainable from both types of
descriptors the choice depends more on practical considerations:

MolPrint2D fragments can be calculated very efficiently for every well-defined
chemical structure with OpenBabel (@OBoyle2011a). CDK descriptor calculations
are in contrast much more resource-intensive and may fail for a significant
number of compounds ({{cv.cdk.n_failed}} out of {{cv.n_uniq}}).

MolPrint2D fragments are generated dynamically from chemical structures and
can be used to determine if a compound contains structural features that are
absent from the training data, which makes them well suited for applicability
domain estimations. CDK descriptors, in contrast, constitute a predefined set
of descriptors with unknown toxicological relevance.

MolPrint2D fingerprints can be represented very efficiently as the sets of
features present in a given compound, which makes similarity calculations very
fast. Due to the large number of substructures in the training compounds, they
lead however to large and sparsely populated datasets if they have to be
expanded into a binary matrix (e.g. as input for Tensorflow models). CDK
descriptors, in contrast, always yield matrices with {{cv.cdk.n_descriptors}}
columns.

<!--

**TODO**: **Verena** kannst Du bitte die Deskriptoren nochmals kurz beschreiben

*CDK* descriptors were used for `lazar`, R and Tensorflow models.  All models
based on CDK descriptors had similar crossvalidation accuracies that were
significantly lower than `lazar` MolPrint2D results.  Direct comparisons are
available only for the `lazar` algorithm, and also in this case CDK
accuracies were lower than MolPrint2D accuracies.

Based on `lazar` results we can conclude, that CDK descriptors are less
suited for chemical similarity calculations than MP2D descriptors. It is also
likely that CDK descriptors lead to less accurate predictions for global
models, but we cannot draw any definitive conclusion in the absence of MP2D
models.
Our results seem to support this assumption, because standard `lazar` models
with MolPrint2D descriptors perform better than global models. The accuracy of
`lazar` models with CDK descriptors is however substantially lower and
comparable to global models with the same descriptors.

This observation may lead to the conclusion that the choice of suitable
descriptors is more important for predictive accuracy than the modelling
algorithm, but we were unable to obtain global MP2D models for direct
comparisons.  The selection of an appropriate modelling algorithm is still
crucial, because it needs the capability to handle the descriptor space.
Neighbour (and thus similarity) based algorithms like `lazar` have a clear
advantage in this respect over global machine learning algorithms (e.g. RF, SVM,
LR, NN), because Tanimoto/Jaccard similarities can be calculated efficiently
with simple set operations. 
-->

Pyrrolizidine alkaloid mutagenicity predictions
-----------------------------------------------

@fig:dhp - @fig:tert show a clear differentiation between the different
pyrrolizidine alkaloid groups. The largest proportion of mutagenic predictions
was observed for Otonecines ({{pa.groups.Otonecine.mut_perc}}%,
{{pa.groups.Otonecine.mut}}/{{pa.groups.Otonecine.n_pred}}), the lowest for
Monoesters ({{pa.groups.Monoester.mut_perc}}%,
{{pa.groups.Monoester.mut}}/{{pa.groups.Monoester.n_pred}}) and N-Oxides
({{pa.groups.N_oxide.mut_perc}}%,
{{pa.groups.N_oxide.mut}}/{{pa.groups.N_oxide.n_pred}}).

Although most models show similar accuracies, sensitivities and specificities
in crossvalidation experiments, some of them (MP2D-RF, CDK-RF and CDK-SVM)
predict a lower proportion of mutagens
({{pa.cdk_rf.mut_perc}}-{{pa.mp2d_rf.mut_perc}}%) than the majority of the
models ({{pa.mp2d_svm.mut_perc}}-{{pa.mp2d_lazar_high_confidence.mut_perc}}%;
@tbl:pa-mp2d, @tbl:pa-cdk, @fig:dhp - @fig:tert).

From a practical point of view we still have to face the question of how to choose among model predictions if no experimental data is available (we found two PAs in the training data, but this number is too low to draw any general conclusions).

<!--
`lazar` models with MolPrint2D descriptors predicted {{pa.lazar.mp2d.all.n_perc}}%
of the pyrrolizidine alkaloids (PAs) ({{pa.lazar.mp2d.high_confidence.n_perc}}%
with high confidence), the remaining compounds are not within its applicability
domain. All other models predicted 100% of the 602 compounds, indicating that
all compounds are within their applicability domain.

Mutagenicity predictions from different models show little agreement in general
(table 4). 42 from 602 PAs have non-conflicting predictions (all of them
non-mutagenic).  Most models predict predominantly a non-mutagenic outcome for
PAs, with exception of the R deep learning (DL) and the Tensorflow Scikit
logistic regression models ({{pa.tf.dl.mut_perc}} and
{{pa.tf.lr_scikit.mut_perc}}% positive predictions). 

non-conflicting CIDs
43040
186980
187805
610955
3033169
6429355
10095536
10251171
10577975
10838897
10992912
10996028
11618501
11827237
11827238
16687858
73893122
91747608
91749688
91751314
91752877
100979630
100979631
101648301
102478913
148322
194088
21626760
91747610
91747612
91749428
91749448
102596226
6440436
4483893
5315247
46930232
67189194
91747354
91749894
101324794
118701599

R RF and SVM models favor very strongly non-mutagenic predictions (only {{pa.r.rf.mut_perc}} and {{pa.r.svm.mut_perc}} % mutagenic PAs), while Tensorflow models classify approximately half of the PAs as mutagenic (RF {{pa.tf.rf.mut_perc}}%, LR-sgd {{pa.tf.lr_sgd}}%, LR-scikit:{{pa.tf.lr_scikit.mut_perc}}, LR-NN:{{pa.tf.nn.mut_perc}}%). `lazar` models predict predominately non-mutagenicity, but to a lesser extend than R models (MP2D:{{pa.lazar.mp2d.all.mut_perc}}, CDK:{{pa.lazar.padel.all.mut_perc}}).

It is interesting to note, that different implementations of the same algorithm show little accordance in their prediction (see e.g R-RF vs. Tensorflow-RF and LR-sgd vs. LR-scikit in Table 4 and @tbl:pa-summary).

**TODO** **Verena, Philipp** habt ihr eine Erklaerung dafuer?

@fig:tsne-mp2d and @fig:tsne-padel show the t-SNE of training data and pyrrolizidine alkaloids. In @fig:tsne-mp2d the PAs are located closely together at the outer border of the training set. In @fig:tsne-padel they are less clearly separated and spread over the space occupied by the training examples.

This is probably the reason why CDK models predicted all instances and the MP2D model only {{pa.lazar.mp2d.all.n}} PAs. Predicting a large number of instances is however not the ultimate goal, we need accurate predictions and an unambiguous estimation of the applicability domain. With CDK descriptors *all* PAs are within the applicability domain of the training data, which is unlikely despite the size of the training set. MolPrint2D descriptors provide a clearer separation, which is also reflected in a better separation between high and low confidence predictions in `lazar` MP2D predictions as compared to `lazar` CDK predictions. Crossvalidation results with substantially higher accuracies for MP2D models than for CDK models also support this argument.

Differences between MP2D and CDK descriptors can be explained by their specific properties: CDK calculates a fixed set of descriptors for all structures, while MolPrint2D descriptors resemble substructures that are present in a compound. For this reason there is no fixed number of MP2D descriptors, the descriptor space are all unique substructures of the training set. If a query compound contains new substructures, this is immediately reflected in a lower similarity to training compounds, which makes applicability domain estimations very straightforward. With CDK (or any other predefined descriptors), the same set of descriptors is calculated for every compound, even if a compound comes from an completely new chemical class. 

From a practical point we still have to face the question, how to choose model predictions, if no experimental data is available (we found two PAs in the training data, but this number is too low, to draw any general conclusions). Based on crossvalidation results and the arguments in favor of MolPrint2D descriptors we would put the highest trust in `lazar` MolPrint2D predictions, especially in high-confidence predictions. `lazar` predictions have a accuracy comparable to experimental variability (@Helma2018) for compounds within the applicability domain. But they should not be trusted blindly. For practical purposes it is important to study the rationales (i.e. neighbors and their experimental activities) for each prediction of relevance. A freely accessible GUI for this purpose has been implemented at https://lazar.in-silico.ch.


**TODO**: **Verena**  Wenn Du lazar Ergebnisse konkret diskutieren willst, kann ich Dir ausfuehrliche Vorhersagen (mit aehnlichen Verbindungen und deren Aktivitaet) fuer einzelne Beispiele zusammenstellen 

Due to the low to moderate predictivity of all models, quantitative
statement on the genotoxicity of single PAs cannot be made with
sufficient confidence.

The predictions of the SVM model did not fit with the other models or
literature, and are therefore not further considered in the discussion.
-->

**TODO**: **Verena** Here is an old text of yours to recycle:

Necic acid

The rank order of the necic acids is comparable in the four models
considered (LAZAR, RF and DL (R-project and Tensorflow)). PAs of the
monoester type had the lowest genotoxic potential, followed by PAs of the
open-ring diester type. PAs with macrocyclic diesters had the highest
genotoxic potential. The results fit well with the current state of
knowledge: in general, PAs which have a macrocyclic diester as necic acid
are considered more toxic than those with an open-ring diester or monoester
(EFSA 2011; Fu et al. 2004; Ruan et al. 2014b).

Necine base

The rank order of the necine bases is comparable in the LAZAR, RF, and DL
(R-project) models: platynecine is less than or as genotoxic as retronecine,
and otonecine is the most genotoxic. In the Tensorflow-generated DL model,
platynecine also has the lowest genotoxic probability, but is then followed
by the otonecines and last by retronecine. These results partly correspond
to earlier published studies. Saturated PAs of the platynecine type are
generally accepted to be less toxic or non-toxic and have been shown in *in
vitro* experiments to form no DNA adducts (Xia et al. 2013). It is therefore
striking that 1,2-unsaturated PAs of the retronecine type should have an
almost comparable genotoxic potential in the LAZAR and DL (R-project)
models. In the literature, otonecine-type PAs were shown to be more toxic
than those of the retronecine type (Li et al. 2013).

Modifications of necine base

The group-specific results of the Tensorflow-generated DL model appear to
reflect the expected relationship between the groups: the low genotoxic
potential of *N*-oxides and the highest potential of dehydropyrrolizidines
(Chen et al. 2010).

In the LAZAR model, the genotoxic potential of dehydropyrrolizidines (DHPs)
(using the extended AD) is comparable to that of tertiary PAs. Since DHP is
regarded as the toxic principle in the metabolism of PAs and is known to
produce protein and DNA adducts (Chen et al. 2010), the LAZAR model did not
meet this expectation: it predicted the majority of DHPs as not genotoxic.
However, the following issues need to be considered. On the one hand, all
DHPs were outside the stricter AD of 0.5. This indicates that in general
there might be a problem with the AD. In addition, DHP has two unsaturated
double bonds in its necine base, making it highly reactive. DHP and other
comparable molecules have a very short lifespan and usually cannot be used
in *in vitro* experiments. This might explain the absence of suitable
neighbours in LAZAR.

Furthermore, the probabilities for this substance group need to be
considered, and not only the consolidated prediction. In the LAZAR model,
all DHPs had probabilities for both outcomes (genotoxic and not genotoxic)
mainly below 30%. Additionally, the probabilities for both outcomes were
close together, often within 10% of each other. The fact that the
probabilities for both outcomes were low and close together indicates a
lower confidence of the model in its predictions for DHPs.

In the DL (R-project) and RF models, *N*-oxides have a far higher genotoxic
potential than tertiary PAs or dehydropyrrolizidines. As PA *N*-oxides are
easily conjugated for excretion, they are generally considered
detoxification products, which are quickly eliminated renally *in vivo*
(Chen et al. 2010). On the other hand, *N*-oxides can also be
back-transformed to the corresponding tertiary PA (Wang et al. 2005). It may
therefore be questioned whether *N*-oxides themselves are generally less
genotoxic than the corresponding tertiary PAs. However, within the group of
necine base modifications, dehydropyrrolizidine, the toxic principle of PAs,
should have had the highest genotoxic potential. Taken together, the
predictions of the necine base modifications from the LAZAR, RF and
R-generated DL models cannot - in contrast to the Tensorflow DL model - be
considered reliable.

Overall, when comparing the prediction results for the PAs to currently
published knowledge, it can be concluded that the performance of most models
was low to moderate. This might be attributed to the following issues:

1.  In the LAZAR model, only 26.6% of the PAs were within the stricter AD.
    With the extended AD, 92.3% of the PAs could be included in the
    prediction. Even though the Jaccard distance between the training
    dataset and the PA dataset for the RF, SVM, and DL (R-project and
    Tensorflow) models was small, suggesting a high similarity, LAZAR
    indicated that PAs have only few local neighbours, which might
    adversely affect the prediction of the mutagenic potential of PAs.

2.  All above-mentioned models were used to predict the mutagenicity of
    PAs. PAs are generally considered to be genotoxic, and the mode of
    action is also known. Therefore, the fact that some models predict the
    majority of PAs as not genotoxic seems contradictory. To understand
    this result, the basis, i.e. the training dataset, has to be
    considered. The mutagenicity values in the training dataset are based
    on mutagenicity in bacteria. There are some studies which show
    mutagenicity of PAs in the AMES test (Chen et al. 2010). Also, Rubiolo
    et al. (1992) examined several different PAs and several different
    extracts of PA-containing plants in the AMES test. They found that the
    AMES test was indeed able to detect mutagenicity of PAs, but appeared
    to have a low sensitivity in general. The pre-incubation phase for
    metabolic activation of PAs by microsomal enzymes was the
    sensitivity-limiting step. This could very well mean that this is also
    reflected in the QSAR models.

Conclusions
===========

A new public *Salmonella* mutagenicity training dataset with 8309 compounds was
created and used to train `lazar` and Tensorflow models with MolPrint2D
and CDK descriptors.

<!---
The best performance was obtained with `lazar` models
using MolPrint2D descriptors, with prediction accuracies
({{cv.lazar-high-confidence.acc_perc}}%) comparable to the interlaboratory variability
of the Ames test (80-85%). Models based on CDK descriptors had lower
accuracies than MolPrint2D models, but only the `lazar` algorithm could use
MolPrint2D descriptors.

**TODO**: PA Vorhersagen

In this study, an attempt was made to predict the genotoxic potential of
PAs using five different machine learning techniques (LAZAR, RF, SVM, DL
(R-project and Tensorflow). The results of all models fitted only partly
to the findings in literature, with best results obtained with the
Tensorflow DL model. Therefore, modelling allows statements on the
relative risks of genotoxicity of the different PA groups. Individual
predictions for selective PAs appear, however, not reliable on the
current basis of the used training dataset.

This study emphasises the importance of critical assessment of
predictions by QSAR models. This includes not only extensive literature
research to assess the plausibility of the predictions, but also a good
knowledge of the metabolism of the test substances and understanding for
possible mechanisms of toxicity.

In further studies, additional machine learning techniques or a modified
(extended) training dataset should be used for an additional attempt to
predict the genotoxic potential of PAs.
--->

References
==========