eduvedras committed · Commit f9392c7 · 1 Parent(s): 0a268a2
Desc_Questions.py CHANGED
@@ -75,8 +75,8 @@ class Desc_QuestionsTargz(datasets.GeneratorBasedBuilder):
     def _split_generators(self, dl_manager):
         path = dl_manager.download(_URL)
         image_iters = dl_manager.iter_archive(path)
-        metadata_train_path = "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_train_final.csv"
-        metadata_test_path = "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_test_final.csv"
+        metadata_train_path = "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_dataset_train.csv"
+        metadata_test_path = "https://huggingface.co/datasets/eduvedras/Desc_Questions/resolve/main/desc_questions_dataset_test.csv"

         return [
             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"images": image_iters,
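The change above only renames the two metadata CSVs that the builder fetches; the image archive download and the split logic are untouched. A minimal usage sketch, assuming a `datasets` version that still supports script-based datasets (the trust_remote_code flag is an assumption, not something this commit configures):

from datasets import load_dataset

# Hypothetical sketch: load the dataset through the Desc_Questions.py builder
# in this repo; trust_remote_code=True is typically required on recent
# `datasets` releases for datasets that ship a loading script.
ds = load_dataset("eduvedras/Desc_Questions", trust_remote_code=True)

# _split_generators above defines at least a TRAIN split, and the renamed
# test CSV implies a TEST split as well.
print(ds)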
desc_questions_dataset.csv CHANGED
The diff for this file is too large to render. See raw diff
 
desc_questions_dataset_test.csv CHANGED
@@ -1,16 +1,95 @@
  Chart;description;Questions
- Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['It is clear that variable Pclass is one of the five most relevant features.', 'The variable Pclass seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the first most discriminative variable regarding the class.', 'Variable Pclass is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 72.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 181.']
- Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
  Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
- Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
- Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
- Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
  Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
- Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 20%.']
- Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Fare or Pclass can be discarded without losing information.', 'The variable Pclass can be discarded without risking losing information.', 'Variables Age and Parch are redundant, but we can’t say the same for the pair Fare and Pclass.', 'Variables SibSp and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and Fare seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Parch might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Parch previously than variable Age.']
- Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Fare is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Pclass.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Parch doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
- Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Embarked can be seen as ordinal.', 'The variable Embarked can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Embarked variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Sex variable, dummification could have been a more adequate codification.']
- Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Age would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Embarked than removing all records with missing values.']
  Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.']
- Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
- Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Fare and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Fare variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable SibSp seems to be promising.', 'Feature generation based on the use of variable Fare wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable SibSp than removing all records with missing values.', 'Not knowing the semantics of Parch variable, dummification could have been a more adequate codification.']
 
  Chart;description;Questions
+ smoking_drinking_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition SMK_stat_type_cd <= 1.5 and the second with the condition gamma_GTP <= 35.5.;['The variable SMK_stat_type_cd discriminates between the target values, as shown in the decision tree.', 'Variable gamma_GTP is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of False Negatives reported in the same tree is 10.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (A,B) as N for any k ≤ 3135.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as N.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], the Decision Tree presented classifies (A, not B) as Y.']
+ smoking_drinking_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
+ smoking_drinking_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
+ smoking_drinking_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
+ smoking_drinking_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
+ smoking_drinking_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
+ smoking_drinking_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
+ smoking_drinking_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.']
+ smoking_drinking_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables waistline or age can be discarded without losing information.', 'The variable hemoglobin can be discarded without risking losing information.', 'Variables BLDS and weight are redundant, but we can’t say the same for the pair waistline and LDL_chole.', 'Variables age and height are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables height and waistline seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable waistline might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable LDL_chole previously than variable SMK_stat_type_cd.']
+ smoking_drinking_boxplots.png;A set of boxplots of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['Variable waistline is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable tot_chole shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable tot_chole shows a high number of outlier values.', 'Variable SMK_stat_type_cd doesn’t have any outliers.', 'Variable BLDS presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+ smoking_drinking_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'hear_left', 'hear_right'].;['All variables, but the class, should be dealt with as binary.', 'The variable hear_right can be seen as ordinal.', 'The variable hear_right can be seen as ordinal without losing information.', 'Considering the common semantics for hear_left and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for hear_right variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable hear_left seems to be promising.', 'Feature generation based on the use of variable hear_right wouldn’t be useful, but the use of sex seems to be promising.', 'Given the usual semantics of hear_right variable, dummification would have been a better codification.', 'It is better to drop the variable hear_left than removing all records with missing values.', 'Not knowing the semantics of hear_right variable, dummification could have been a more adequate codification.']
+ smoking_drinking_class_histogram.png;A bar chart showing the distribution of the target variable DRK_YN.;['Balancing this dataset would be mandatory to improve the results.']
+ smoking_drinking_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+ smoking_drinking_histograms_numeric.png;A set of histograms of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['All variables, but the class, should be dealt with as date.', 'The variable SBP can be seen as ordinal.', 'The variable tot_chole can be seen as ordinal without losing information.', 'Variable weight is balanced.', 'It is clear that variable height shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable LDL_chole shows some outlier values.', 'Variable tot_chole doesn’t have any outliers.', 'Variable gamma_GTP presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and height variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for weight variable, dummification would be the most adequate encoding.', 'The variable hemoglobin can be coded as ordinal without losing information.', 'Feature generation based on variable waistline seems to be promising.', 'Feature generation based on the use of variable height wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of BLDS variable, dummification would have been a better codification.', 'It is better to drop the variable SMK_stat_type_cd than removing all records with missing values.', 'Not knowing the semantics of hemoglobin variable, dummification could have been a more adequate codification.']
+ BankNoteAuthentication_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition skewness <= 5.16 and the second with the condition curtosis <= 0.19.;['The variable curtosis discriminates between the target values, as shown in the decision tree.', 'Variable skewness is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of False Positives reported in the same tree is 10.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The recall for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 131.']
+ BankNoteAuthentication_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.']
+ BankNoteAuthentication_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
+ BankNoteAuthentication_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
+ BankNoteAuthentication_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
+ BankNoteAuthentication_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.']
+ BankNoteAuthentication_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
+ BankNoteAuthentication_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.']
+ BankNoteAuthentication_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['variance', 'skewness', 'curtosis', 'entropy'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables variance or curtosis can be discarded without losing information.', 'The variable skewness can be discarded without risking losing information.', 'Variables entropy and curtosis are redundant, but we can’t say the same for the pair variance and skewness.', 'Variables curtosis and entropy are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable skewness seems to be relevant for the majority of mining tasks.', 'Variables curtosis and variance seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable variance might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable variance previously than variable skewness.']
+ BankNoteAuthentication_boxplots.png;A set of boxplots of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['Variable curtosis is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable curtosis shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable skewness shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable variance presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+ BankNoteAuthentication_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.']
+ BankNoteAuthentication_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+ BankNoteAuthentication_histograms_numeric.png;A set of histograms of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable skewness can be seen as ordinal.', 'The variable skewness can be seen as ordinal without losing information.', 'Variable variance is balanced.', 'It is clear that variable variance shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable variance shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable curtosis presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for variance and skewness variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for curtosis variable, dummification would be the most adequate encoding.', 'The variable entropy can be coded as ordinal without losing information.', 'Feature generation based on variable skewness seems to be promising.', 'Feature generation based on the use of variable curtosis wouldn’t be useful, but the use of variance seems to be promising.', 'Given the usual semantics of curtosis variable, dummification would have been a better codification.', 'It is better to drop the variable skewness than removing all records with missing values.', 'Not knowing the semantics of entropy variable, dummification could have been a more adequate codification.']
+ Iris_decision_tree.png;;['The variable PetalWidthCm discriminates between the target values, as shown in the decision tree.', 'Variable PetalWidthCm is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Negatives reported in the same tree is 30.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The precision for the presented tree is lower than 90%.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], the Decision Tree presented classifies (not A, not B) as Iris-versicolor.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A,B) as Iris-virginica for any k ≤ 38.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A,B) as Iris-setosa for any k ≤ 32.']
+ Iris_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.']
+ Iris_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
+ Iris_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
+ Iris_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.']
+ Iris_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.']
+ Iris_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 25%.']
+ Iris_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PetalWidthCm or SepalLengthCm can be discarded without losing information.', 'The variable PetalLengthCm can be discarded without risking losing information.', 'Variables SepalLengthCm and SepalWidthCm are redundant, but we can’t say the same for the pair PetalLengthCm and PetalWidthCm.', 'Variables PetalLengthCm and SepalLengthCm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable SepalLengthCm seems to be relevant for the majority of mining tasks.', 'Variables PetalLengthCm and SepalLengthCm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PetalWidthCm might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable PetalLengthCm previously than variable SepalWidthCm.']
+ Iris_boxplots.png;A set of boxplots of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['Variable PetalWidthCm is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable SepalWidthCm shows some outliers, but we can’t be sure of the same for variable PetalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable PetalLengthCm shows some outlier values.', 'Variable SepalLengthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+ Iris_class_histogram.png;A bar chart showing the distribution of the target variable Species.;['Balancing this dataset would be mandatory to improve the results.']
+ Iris_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+ Iris_histograms_numeric.png;A set of histograms of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['All variables, but the class, should be dealt with as date.', 'The variable PetalWidthCm can be seen as ordinal.', 'The variable SepalLengthCm can be seen as ordinal without losing information.', 'Variable PetalWidthCm is balanced.', 'It is clear that variable PetalWidthCm shows some outliers, but we can’t be sure of the same for variable SepalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable SepalWidthCm shows a high number of outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalLengthCm presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PetalWidthCm and SepalLengthCm variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PetalLengthCm variable, dummification would be the most adequate encoding.', 'The variable PetalLengthCm can be coded as ordinal without losing information.', 'Feature generation based on variable SepalWidthCm seems to be promising.', 'Feature generation based on the use of variable PetalLengthCm wouldn’t be useful, but the use of SepalLengthCm seems to be promising.', 'Given the usual semantics of PetalLengthCm variable, dummification would have been a better codification.', 'It is better to drop the variable SepalWidthCm than removing all records with missing values.', 'Not knowing the semantics of SepalWidthCm variable, dummification could have been a more adequate codification.']
+ phone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition int_memory <= 30.5 and the second with the condition mobile_wt <= 91.5.;['The variable mobile_wt discriminates between the target values, as shown in the decision tree.', 'Variable mobile_wt is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 2 for any k ≤ 636.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (A, not B) as 0.']
+ phone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.']
+ phone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
+ phone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
+ phone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.']
+ phone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
+ phone_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 11 principal components would imply an error between 10 and 25%.']
+ phone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['The intrinsic dimensionality of this dataset is 11.', 'One of the variables px_height or battery_power can be discarded without losing information.', 'The variable battery_power can be discarded without risking losing information.', 'Variables ram and px_width are redundant, but we can’t say the same for the pair mobile_wt and sc_h.', 'Variables px_height and sc_w are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables sc_h and fc seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable sc_h might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable px_height previously than variable px_width.']
+ phone_boxplots.png;A set of boxplots of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['Variable n_cores is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable talk_time shows some outliers, but we can’t be sure of the same for variable px_width.', 'Outliers seem to be a problem in the dataset.', 'Variable px_height shows some outlier values.', 'Variable sc_w doesn’t have any outliers.', 'Variable pc presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+ phone_histograms_symbolic.png;A set of bar charts of the variables ['blue', 'dual_sim', 'four_g', 'three_g', 'touch_screen', 'wifi'].;['All variables, but the class, should be dealt with as date.', 'The variable four_g can be seen as ordinal.', 'The variable wifi can be seen as ordinal without losing information.', 'Considering the common semantics for touch_screen and blue variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for three_g variable, dummification would be the most adequate encoding.', 'The variable three_g can be coded as ordinal without losing information.', 'Feature generation based on variable four_g seems to be promising.', 'Feature generation based on the use of variable three_g wouldn’t be useful, but the use of blue seems to be promising.', 'Given the usual semantics of three_g variable, dummification would have been a better codification.', 'It is better to drop the variable three_g than removing all records with missing values.', 'Not knowing the semantics of four_g variable, dummification could have been a more adequate codification.']
+ phone_class_histogram.png;A bar chart showing the distribution of the target variable price_range.;['Balancing this dataset would be mandatory to improve the results.']
+ phone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+ phone_histograms_numeric.png;A set of histograms of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['All variables, but the class, should be dealt with as binary.', 'The variable int_memory can be seen as ordinal.', 'The variable fc can be seen as ordinal without losing information.', 'Variable sc_h is balanced.', 'It is clear that variable sc_w shows some outliers, but we can’t be sure of the same for variable sc_h.', 'Outliers seem to be a problem in the dataset.', 'Variable pc shows a high number of outlier values.', 'Variable ram doesn’t have any outliers.', 'Variable fc presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for px_height and battery_power variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for px_height variable, dummification would be the most adequate encoding.', 'The variable battery_power can be coded as ordinal without losing information.', 'Feature generation based on variable mobile_wt seems to be promising.', 'Feature generation based on the use of variable sc_h wouldn’t be useful, but the use of battery_power seems to be promising.', 'Given the usual semantics of mobile_wt variable, dummification would have been a better codification.', 'It is better to drop the variable mobile_wt than removing all records with missing values.', 'Not knowing the semantics of talk_time variable, dummification could have been a more adequate codification.']
+ Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['The variable Parch discriminates between the target values, as shown in the decision tree.', 'Variable Parch is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 181.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 1.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 0.']
+ Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
  Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
+ Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
+ Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.']
+ Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.']
  Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
+ Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.']
+ Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables SibSp or Parch can be discarded without losing information.', 'The variable Parch can be discarded without risking losing information.', 'Variables Fare and Age seem to be useful for classification tasks.', 'Variables Age and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Pclass seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable SibSp might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Fare previously than variable Age.']
+ Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Pclass shows some outliers, but we can’t be sure of the same for variable Fare.', 'Outliers seem to be a problem in the dataset.', 'Variable Fare shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
64
+ Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Sex can be seen as ordinal.', 'The variable Sex can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Sex variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Embarked variable, dummification could have been a more adequate codification.']
65
+ Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Embarked would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Age than removing all records with missing values.']
66
  Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.']
67
+ Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
68
+ Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Parch can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Pclass is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable SibSp.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for SibSp variable, dummification would be the most adequate encoding.', 'The variable Pclass can be coded as ordinal without losing information.', 'Feature generation based on variable Parch seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable Pclass than removing all records with missing values.', 'Not knowing the semantics of SibSp variable, dummification could have been a more adequate codification.']
69
+ apple_quality_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Juiciness <= -0.3 and the second with the condition Crunchiness <= 2.25.;['The variable Crunchiness discriminates between the target values, as shown in the decision tree.', 'Variable Juiciness is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The specificity for the presented tree is higher than 90%.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], the Decision Tree presented classifies (not A, not B) as bad.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 1625.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as good.']
70
+ apple_quality_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
71
+ apple_quality_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
72
+ apple_quality_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
73
+ apple_quality_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.']
74
+ apple_quality_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.']
75
+ apple_quality_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
76
+ apple_quality_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.']
77
+ apple_quality_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Crunchiness or Acidity can be discarded without losing information.', 'The variable Ripeness can be discarded without risking losing information.', 'Variables Juiciness and Crunchiness are redundant, but we can’t say the same for the pair Sweetness and Ripeness.', 'Variables Juiciness and Crunchiness are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Juiciness seems to be relevant for the majority of mining tasks.', 'Variables Crunchiness and Weight seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Juiciness might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Juiciness previously than variable Ripeness.']
78
+ apple_quality_boxplots.png;A set of boxplots of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['Variable Weight is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Sweetness shows some outliers, but we can’t be sure of the same for variable Crunchiness.', 'Outliers seem to be a problem in the dataset.', 'Variable Ripeness shows a high number of outlier values.', 'Variable Acidity doesn’t have any outliers.', 'Variable Juiciness presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
79
+ apple_quality_class_histogram.png;A bar chart showing the distribution of the target variable Quality.;['Balancing this dataset would be mandatory to improve the results.']
80
+ apple_quality_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
81
+ apple_quality_histograms_numeric.png;A set of histograms of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Acidity can be seen as ordinal.', 'The variable Size can be seen as ordinal without losing information.', 'Variable Juiciness is balanced.', 'It is clear that variable Weight shows some outliers, but we can’t be sure of the same for variable Sweetness.', 'Outliers seem to be a problem in the dataset.', 'Variable Juiciness shows a high number of outlier values.', 'Variable Size doesn’t have any outliers.', 'Variable Weight presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Crunchiness and Size variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sweetness variable, dummification would be the most adequate encoding.', 'The variable Juiciness can be coded as ordinal without losing information.', 'Feature generation based on variable Acidity seems to be promising.', 'Feature generation based on the use of variable Acidity wouldn’t be useful, but the use of Size seems to be promising.', 'Given the usual semantics of Acidity variable, dummification would have been a better codification.', 'It is better to drop the variable Ripeness than removing all records with missing values.', 'Not knowing the semantics of Acidity variable, dummification could have been a more adequate codification.']
82
+ Employee_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition JoiningYear <= 2017.5 and the second with the condition ExperienceInCurrentDomain <= 3.5.;['The variable JoiningYear discriminates between the target values, as shown in the decision tree.', 'Variable ExperienceInCurrentDomain is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 44.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (A,B) as 0.']
83
+ Employee_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
84
+ Employee_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
85
+ Employee_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
86
+ Employee_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
87
+ Employee_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.']
88
+ Employee_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
89
+ Employee_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 25%.']
90
+ Employee_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PaymentTier or JoiningYear can be discarded without losing information.', 'The variable JoiningYear can be discarded without risking losing information.', 'Variables Age and PaymentTier are redundant, but we can’t say the same for the pair ExperienceInCurrentDomain and JoiningYear.', 'Variables PaymentTier and JoiningYear are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable JoiningYear seems to be relevant for the majority of mining tasks.', 'Variables Age and ExperienceInCurrentDomain seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PaymentTier might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ExperienceInCurrentDomain previously than variable PaymentTier.']
91
+ Employee_boxplots.png;A set of boxplots of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['Variable ExperienceInCurrentDomain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows a high number of outlier values.', 'Variable JoiningYear doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
92
+ Employee_histograms_symbolic.png;A set of bar charts of the variables ['Education', 'City', 'Gender', 'EverBenched'].;['All variables, but the class, should be dealt with as date.', 'The variable Gender can be seen as ordinal.', 'The variable EverBenched can be seen as ordinal without losing information.', 'Considering the common semantics for Education and City variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for City variable, dummification would be the most adequate encoding.', 'The variable City can be coded as ordinal without losing information.', 'Feature generation based on variable City seems to be promising.', 'Feature generation based on the use of variable EverBenched wouldn’t be useful, but the use of Education seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable EverBenched than removing all records with missing values.', 'Not knowing the semantics of Education variable, dummification could have been a more adequate codification.']
93
+ Employee_class_histogram.png;A bar chart showing the distribution of the target variable LeaveOrNot.;['Balancing this dataset would be mandatory to improve the results.']
94
+ Employee_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
95
+ Employee_histograms_numeric.png;A set of histograms of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['All variables, but the class, should be dealt with as date.', 'The variable PaymentTier can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows some outlier values.', 'Variable ExperienceInCurrentDomain doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for JoiningYear and PaymentTier variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PaymentTier variable, dummification would be the most adequate encoding.', 'The variable PaymentTier can be coded as ordinal without losing information.', 'Feature generation based on variable PaymentTier seems to be promising.', 'Feature generation based on the use of variable ExperienceInCurrentDomain wouldn’t be useful, but the use of JoiningYear seems to be promising.', 'Given the usual semantics of PaymentTier variable, dummification would have been a better codification.', 'It is better to drop the variable JoiningYear than removing all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.']
desc_questions_dataset_train.csv CHANGED
The diff for this file is too large to render. See raw diff
 
desc_questions_test_final.csv DELETED
@@ -1,95 +0,0 @@
1
- Chart;description;Questions
2
- smoking_drinking_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition SMK_stat_type_cd <= 1.5 and the second with the condition gamma_GTP <= 35.5.;['It is clear that variable weight is one of the four most relevant features.', 'The variable triglyceride seems to be one of the five most relevant features.', 'The variable age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that gamma_GTP is the first most discriminative variable regarding the class.', 'Variable height is one of the most relevant variables.', 'Variable SMK_stat_type_cd seems to be relevant for the majority of mining tasks.', 'Variables LDL_chole and hemoglobin seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is lower than 75%.', 'The number of True Positives is higher than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The variable SBP seems to be one of the five most relevant features.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (not A, B) as N for any k ≤ 3135.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as Y.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (not A, B) as Y for any k ≤ 2793.']
3
- smoking_drinking_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.']
4
- smoking_drinking_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
5
- smoking_drinking_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
6
- smoking_drinking_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.']
7
- smoking_drinking_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.']
8
- smoking_drinking_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
9
- smoking_drinking_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.']
10
- smoking_drinking_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables SMK_stat_type_cd or SBP can be discarded without losing information.', 'The variable SBP can be discarded without risking losing information.', 'Variables waistline and height are redundant, but we can’t say the same for the pair triglyceride and SMK_stat_type_cd.', 'Variables waistline and LDL_chole are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable weight seems to be relevant for the majority of mining tasks.', 'Variables tot_chole and triglyceride seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable waistline might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable height previously than variable weight.']
11
- smoking_drinking_boxplots.png;A set of boxplots of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['Variable tot_chole is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable tot_chole shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable weight shows a high number of outlier values.', 'Variable LDL_chole doesn’t have any outliers.', 'Variable gamma_GTP presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
12
- smoking_drinking_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'hear_left', 'hear_right'].;['All variables, but the class, should be dealt with as numeric.', 'The variable sex can be seen as ordinal.', 'The variable hear_left can be seen as ordinal without losing information.', 'Considering the common semantics for hear_right and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for hear_left variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable hear_right seems to be promising.', 'Feature generation based on the use of variable sex wouldn’t be useful, but the use of hear_left seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable sex than removing all records with missing values.', 'Not knowing the semantics of hear_right variable, dummification could have been a more adequate codification.']
13
- smoking_drinking_class_histogram.png;A bar chart showing the distribution of the target variable DRK_YN.;['Balancing this dataset would be mandatory to improve the results.']
14
- smoking_drinking_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
15
- smoking_drinking_histograms_numeric.png;A set of histograms of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable SBP can be seen as ordinal.', 'The variable SMK_stat_type_cd can be seen as ordinal without losing information.', 'Variable gamma_GTP is balanced.', 'It is clear that variable SBP shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable SMK_stat_type_cd shows a high number of outlier values.', 'Variable gamma_GTP doesn’t have any outliers.', 'Variable age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and height variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for waistline variable, dummification would be the most adequate encoding.', 'The variable BLDS can be coded as ordinal without losing information.', 'Feature generation based on variable LDL_chole seems to be promising.', 'Feature generation based on the use of variable SMK_stat_type_cd wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of age variable, dummification would have been a better codification.', 'It is better to drop the variable SMK_stat_type_cd than removing all records with missing values.', 'Not knowing the semantics of gamma_GTP variable, dummification could have been a more adequate codification.']
16
- BankNoteAuthentication_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition skewness <= 5.16 and the second with the condition curtosis <= 0.19.;['It is clear that variable entropy is one of the four most relevant features.', 'The variable entropy seems to be one of the two most relevant features.', 'The variable variance discriminates between the target values, as shown in the decision tree.', 'It is possible to state that entropy is the second most discriminative variable regarding the class.', 'Variable entropy is one of the most relevant variables.', 'Variable variance seems to be relevant for the majority of mining tasks.', 'Variables entropy and curtosis seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A, not B) as 1 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 179.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], the Decision Tree presented classifies (not A, B) as 0.']
17
- BankNoteAuthentication_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
18
- BankNoteAuthentication_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
19
- BankNoteAuthentication_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
20
- BankNoteAuthentication_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.']
21
- BankNoteAuthentication_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.']
22
- BankNoteAuthentication_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
23
- BankNoteAuthentication_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 25%.']
24
- BankNoteAuthentication_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['variance', 'skewness', 'curtosis', 'entropy'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables variance or curtosis can be discarded without losing information.', 'The variable entropy can be discarded without risking losing information.', 'Variables entropy and variance are redundant, but we can’t say the same for the pair skewness and curtosis.', 'Variables variance and curtosis are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable curtosis seems to be relevant for the majority of mining tasks.', 'Variables entropy and skewness seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable variance might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable variance previously than variable entropy.']
25
- BankNoteAuthentication_boxplots.png;A set of boxplots of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['Variable curtosis is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable curtosis shows some outliers, but we can’t be sure of the same for variable skewness.', 'Outliers seem to be a problem in the dataset.', 'Variable entropy shows a high number of outlier values.', 'Variable curtosis doesn’t have any outliers.', 'Variable variance presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
26
- BankNoteAuthentication_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.']
27
- BankNoteAuthentication_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
28
- BankNoteAuthentication_histograms_numeric.png;A set of histograms of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable skewness can be seen as ordinal.', 'The variable curtosis can be seen as ordinal without losing information.', 'Variable skewness is balanced.', 'It is clear that variable variance shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable variance shows some outlier values.', 'Variable curtosis doesn’t have any outliers.', 'Variable skewness presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for skewness and variance variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for curtosis variable, dummification would be the most adequate encoding.', 'The variable skewness can be coded as ordinal without losing information.', 'Feature generation based on variable entropy seems to be promising.', 'Feature generation based on the use of variable curtosis wouldn’t be useful, but the use of variance seems to be promising.', 'Given the usual semantics of variance variable, dummification would have been a better codification.', 'It is better to drop the variable curtosis than removing all records with missing values.', 'Not knowing the semantics of variance variable, dummification could have been a more adequate codification.']
29
- Iris_decision_tree.png;;['It is clear that variable SepalLengthCm is one of the four most relevant features.', 'The variable SepalWidthCm seems to be one of the four most relevant features.', 'The variable SepalLengthCm discriminates between the target values, as shown in the decision tree.', 'It is possible to state that SepalLengthCm is the first most discriminative variable regarding the class.', 'Variable SepalWidthCm is one of the most relevant variables.', 'Variable SepalWidthCm seems to be relevant for the majority of mining tasks.', 'Variables SepalLengthCm and PetalLengthCm seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The variable PetalWidthCm discriminates between the target values, as shown in the decision tree.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], the Decision Tree presented classifies (not A, B) as Iris-versicolor.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (not A, not B) as Iris-virginica for any k ≤ 38.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A, not B) as Iris-setosa for any k ≤ 35.']
30
- Iris_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
31
- Iris_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
32
- Iris_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
33
- Iris_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.']
34
- Iris_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.']
35
- Iris_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 30%.']
36
- Iris_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables PetalWidthCm or SepalLengthCm can be discarded without losing information.', 'The variable SepalLengthCm can be discarded without risking losing information.', 'Variables PetalWidthCm and SepalLengthCm seem to be useful for classification tasks.', 'Variables SepalWidthCm and PetalLengthCm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable PetalLengthCm seems to be relevant for the majority of mining tasks.', 'Variables PetalWidthCm and SepalWidthCm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable SepalLengthCm might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable SepalWidthCm previously than variable PetalLengthCm.']
37
- Iris_boxplots.png;A set of boxplots of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['Variable PetalWidthCm is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable SepalLengthCm shows some outliers, but we can’t be sure of the same for variable SepalWidthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable PetalWidthCm shows a high number of outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
38
- Iris_class_histogram.png;A bar chart showing the distribution of the target variable Species.;['Balancing this dataset would be mandatory to improve the results.']
39
- Iris_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
40
- Iris_histograms_numeric.png;A set of histograms of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['All variables, but the class, should be dealt with as numeric.', 'The variable PetalWidthCm can be seen as ordinal.', 'The variable SepalLengthCm can be seen as ordinal without losing information.', 'Variable PetalWidthCm is balanced.', 'It is clear that variable PetalLengthCm shows some outliers, but we can’t be sure of the same for variable SepalWidthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable SepalWidthCm shows some outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PetalWidthCm and SepalLengthCm variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for SepalWidthCm variable, dummification would be the most adequate encoding.', 'The variable PetalWidthCm can be coded as ordinal without losing information.', 'Feature generation based on variable SepalLengthCm seems to be promising.', 'Feature generation based on the use of variable PetalWidthCm wouldn’t be useful, but the use of SepalLengthCm seems to be promising.', 'Given the usual semantics of SepalWidthCm variable, dummification would have been a better codification.', 'It is better to drop the variable PetalWidthCm than removing all records with missing values.', 'Not knowing the semantics of SepalLengthCm variable, dummification could have been a more adequate codification.']
41
- phone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition int_memory <= 30.5 and the second with the condition mobile_wt <= 91.5.;['It is clear that variable sc_w is one of the four most relevant features.', 'The variable fc seems to be one of the two most relevant features.', 'The variable int_memory discriminates between the target values, as shown in the decision tree.', 'It is possible to state that sc_h is the second most discriminative variable regarding the class.', 'Variable pc is one of the most relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables ram and fc seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Negatives reported in the same tree is 30.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The variable sc_w seems to be one of the three most relevant features.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (not A, not B) as 2 for any k ≤ 636.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 469.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 469.']
42
- phone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
43
- phone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
44
- phone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
45
- phone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.']
46
- phone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.']
- phone_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 10 and 20%.']
- phone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['The intrinsic dimensionality of this dataset is 9.', 'One of the variables px_height or n_cores can be discarded without losing information.', 'The variable px_width can be discarded without risking losing information.', 'Variables battery_power and ram are redundant, but we can’t say the same for the pair px_width and pc.', 'Variables sc_w and battery_power are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables sc_w and n_cores seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable battery_power might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable sc_w previously than variable sc_h.']
- phone_boxplots.png;A set of boxplots of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['Variable sc_h is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable battery_power shows some outliers, but we can’t be sure of the same for variable talk_time.', 'Outliers seem to be a problem in the dataset.', 'Variable sc_h shows some outlier values.', 'Variable fc doesn’t have any outliers.', 'Variable talk_time presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
- phone_histograms_symbolic.png;A set of bar charts of the variables ['blue', 'dual_sim', 'four_g', 'three_g', 'touch_screen', 'wifi'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable four_g can be seen as ordinal.', 'The variable three_g can be seen as ordinal without losing information.', 'Considering the common semantics for touch_screen and blue variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for three_g variable, dummification would be the most adequate encoding.', 'The variable dual_sim can be coded as ordinal without losing information.', 'Feature generation based on variable four_g seems to be promising.', 'Feature generation based on the use of variable touch_screen wouldn’t be useful, but the use of blue seems to be promising.', 'Given the usual semantics of dual_sim variable, dummification would have been a better codification.', 'It is better to drop the variable wifi than removing all records with missing values.', 'Not knowing the semantics of touch_screen variable, dummification could have been a more adequate codification.']
- phone_class_histogram.png;A bar chart showing the distribution of the target variable price_range.;['Balancing this dataset would be mandatory to improve the results.']
- phone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
- phone_histograms_numeric.png;A set of histograms of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['All variables, but the class, should be dealt with as date.', 'The variable pc can be seen as ordinal.', 'The variable int_memory can be seen as ordinal without losing information.', 'Variable n_cores is balanced.', 'It is clear that variable int_memory shows some outliers, but we can’t be sure of the same for variable sc_h.', 'Outliers seem to be a problem in the dataset.', 'Variable talk_time shows a high number of outlier values.', 'Variable battery_power doesn’t have any outliers.', 'Variable ram presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for int_memory and battery_power variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mobile_wt variable, dummification would be the most adequate encoding.', 'The variable fc can be coded as ordinal without losing information.', 'Feature generation based on variable ram seems to be promising.', 'Feature generation based on the use of variable sc_w wouldn’t be useful, but the use of battery_power seems to be promising.', 'Given the usual semantics of px_width variable, dummification would have been a better codification.', 'It is better to drop the variable sc_w than removing all records with missing values.', 'Not knowing the semantics of battery_power variable, dummification could have been a more adequate codification.']
- Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['It is clear that variable Pclass is one of the five most relevant features.', 'The variable Pclass seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the first most discriminative variable regarding the class.', 'Variable Pclass is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 72.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 181.']
- Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
- Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
- Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
- Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
- Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
- Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
- Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 20%.']
- Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Fare or Pclass can be discarded without losing information.', 'The variable Pclass can be discarded without risking losing information.', 'Variables Age and Parch are redundant, but we can’t say the same for the pair Fare and Pclass.', 'Variables SibSp and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and Fare seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Parch might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Parch previously than variable Age.']
- Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Fare is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Pclass.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Parch doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
- Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Embarked can be seen as ordinal.', 'The variable Embarked can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Embarked variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Sex variable, dummification could have been a more adequate codification.']
- Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Age would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Embarked than removing all records with missing values.']
- Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.']
- Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
- Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Fare and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Fare variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable SibSp seems to be promising.', 'Feature generation based on the use of variable Fare wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable SibSp than removing all records with missing values.', 'Not knowing the semantics of Parch variable, dummification could have been a more adequate codification.']
- apple_quality_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Juiciness <= -0.3 and the second with the condition Crunchiness <= 2.25.;['It is clear that variable Sweetness is one of the four most relevant features.', 'The variable Sweetness seems to be one of the three most relevant features.', 'The variable Size discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Crunchiness is the second most discriminative variable regarding the class.', 'Variable Juiciness is one of the most relevant variables.', 'Variable Crunchiness seems to be relevant for the majority of mining tasks.', 'Variables Sweetness and Acidity seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Positives reported in the same tree is 50.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 1625.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 148.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (A, not B) as bad for any k ≤ 148.']
- apple_quality_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
- apple_quality_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
- apple_quality_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
- apple_quality_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.']
- apple_quality_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.']
- apple_quality_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
- apple_quality_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 30%.']
- apple_quality_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Weight or Ripeness can be discarded without losing information.', 'The variable Juiciness can be discarded without risking losing information.', 'Variables Sweetness and Ripeness seem to be useful for classification tasks.', 'Variables Size and Ripeness are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ripeness seems to be relevant for the majority of mining tasks.', 'Variables Size and Juiciness seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Size might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Acidity previously than variable Size.']
- apple_quality_boxplots.png;A set of boxplots of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['Variable Ripeness is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Juiciness shows some outliers, but we can’t be sure of the same for variable Sweetness.', 'Outliers seem to be a problem in the dataset.', 'Variable Crunchiness shows a high number of outlier values.', 'Variable Acidity doesn’t have any outliers.', 'Variable Ripeness presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
- apple_quality_class_histogram.png;A bar chart showing the distribution of the target variable Quality.;['Balancing this dataset would be mandatory to improve the results.']
- apple_quality_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
- apple_quality_histograms_numeric.png;A set of histograms of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['All variables, but the class, should be dealt with as binary.', 'The variable Ripeness can be seen as ordinal.', 'The variable Acidity can be seen as ordinal without losing information.', 'Variable Sweetness is balanced.', 'It is clear that variable Ripeness shows some outliers, but we can’t be sure of the same for variable Juiciness.', 'Outliers seem to be a problem in the dataset.', 'Variable Ripeness shows some outlier values.', 'Variable Weight doesn’t have any outliers.', 'Variable Juiciness presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Sweetness and Size variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sweetness variable, dummification would be the most adequate encoding.', 'The variable Juiciness can be coded as ordinal without losing information.', 'Feature generation based on variable Juiciness seems to be promising.', 'Feature generation based on the use of variable Acidity wouldn’t be useful, but the use of Size seems to be promising.', 'Given the usual semantics of Ripeness variable, dummification would have been a better codification.', 'It is better to drop the variable Juiciness than removing all records with missing values.', 'Not knowing the semantics of Crunchiness variable, dummification could have been a more adequate codification.']
- Employee_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition JoiningYear <= 2017.5 and the second with the condition ExperienceInCurrentDomain <= 3.5.;['It is clear that variable Age is one of the four most relevant features.', 'The variable JoiningYear seems to be one of the four most relevant features.', 'The variable ExperienceInCurrentDomain discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the second most discriminative variable regarding the class.', 'Variable ExperienceInCurrentDomain is one of the most relevant variables.', 'Variable JoiningYear seems to be relevant for the majority of mining tasks.', 'Variables JoiningYear and ExperienceInCurrentDomain seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is lower than 75%.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 44.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 1.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 1215.']
- Employee_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
- Employee_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
- Employee_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
- Employee_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.']
- Employee_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.']
- Employee_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
- Employee_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.']
- Employee_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables JoiningYear or PaymentTier can be discarded without losing information.', 'The variable PaymentTier can be discarded without risking losing information.', 'Variables ExperienceInCurrentDomain and JoiningYear are redundant.', 'Variables JoiningYear and ExperienceInCurrentDomain are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable PaymentTier seems to be relevant for the majority of mining tasks.', 'Variables ExperienceInCurrentDomain and PaymentTier seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PaymentTier might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Age previously than variable ExperienceInCurrentDomain.']
- Employee_boxplots.png;A set of boxplots of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['Variable ExperienceInCurrentDomain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ExperienceInCurrentDomain shows some outliers, but we can’t be sure of the same for variable PaymentTier.', 'Outliers seem to be a problem in the dataset.', 'Variable PaymentTier shows a high number of outlier values.', 'Variable ExperienceInCurrentDomain doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
- Employee_histograms_symbolic.png;A set of bar charts of the variables ['Education', 'City', 'Gender', 'EverBenched'].;['All variables, but the class, should be dealt with as numeric.', 'The variable EverBenched can be seen as ordinal.', 'The variable EverBenched can be seen as ordinal without losing information.', 'Considering the common semantics for EverBenched and Education variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Education seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of Education seems to be promising.', 'Given the usual semantics of Education variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than removing all records with missing values.', 'Not knowing the semantics of City variable, dummification could have been a more adequate codification.']
- Employee_class_histogram.png;A bar chart showing the distribution of the target variable LeaveOrNot.;['Balancing this dataset would be mandatory to improve the results.']
- Employee_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
- Employee_histograms_numeric.png;A set of histograms of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['All variables, but the class, should be dealt with as numeric.', 'The variable JoiningYear can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable PaymentTier is balanced.', 'It is clear that variable JoiningYear shows some outliers, but we can’t be sure of the same for variable ExperienceInCurrentDomain.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows a high number of outlier values.', 'Variable JoiningYear doesn’t have any outliers.', 'Variable ExperienceInCurrentDomain presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PaymentTier and JoiningYear variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ExperienceInCurrentDomain variable, dummification would be the most adequate encoding.', 'The variable ExperienceInCurrentDomain can be coded as ordinal without losing information.', 'Feature generation based on variable JoiningYear seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of JoiningYear seems to be promising.', 'Given the usual semantics of PaymentTier variable, dummification would have been a better codification.', 'It is better to drop the variable JoiningYear than removing all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.']
desc_questions_train_final.csv DELETED
The diff for this file is too large to render. See raw diff
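Note on the row format used in the CSV files in this commit: each data row is laid out as chart-image-file;chart-description;list-of-questions, with the question list stored as a Python-style list literal. The sketch below is only illustrative (the helper name parse_row and the shortened example row are not part of the repository) and assumes that ';' never occurs inside the description or question texts, which holds for the rows shown in this diff.

import ast

def parse_row(line: str):
    """Split one ';'-separated row into chart file, description, and questions."""
    # The questions field is a Python-style list literal, so ast.literal_eval decodes it.
    chart, description, questions = line.rstrip("\n").split(";", 2)
    return chart, description, ast.literal_eval(questions)

# Shortened example row in the same format as the data above:
example = ("phone_pca.png;A bar chart showing the explained variance ratio "
           "of 12 principal components.;['The first 8 principal components "
           "are enough for explaining half the data variance.']")
chart, description, questions = parse_row(example)
print(chart, len(questions))  # phone_pca.png 1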