Chart;description;Questions
ObesityDataSet_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition FAF <= 2.0 and the second with the condition Height <= 1.72.;['The variable FAF discriminates between the target values, as shown in the decision tree.', 'Variable Height is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The variable FAF seems to be one of the two most relevant features.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that the Naive Bayes algorithm classifies (not A, B) as Overweight_Level_I.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], the Decision Tree presented classifies (A, not B) as Obesity_Type_III.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that the KNN algorithm classifies (A, not B) as Insufficient_Weight for any k ≤ 160.']
ObesityDataSet_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 700 episodes.']
ObesityDataSet_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
ObesityDataSet_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
ObesityDataSet_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 2 neighbors.']
ObesityDataSet_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 12 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 6.']
ObesityDataSet_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough to explain half the data variance.', 'Using the first 7 principal components would imply an error between 15 and 20%.']
ObesityDataSet_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Age or Height can be discarded without losing information.', 'The variable Weight can be discarded without risk of losing information.', 'Variables NCP and TUE are redundant, but we can’t say the same for the pair Weight and Height.', 'Variables FAF and TUE are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Height seems to be relevant for the majority of mining tasks.', 'Variables FAF and Height seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable CH2O might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection choosing variable Age before variable Height.']
ObesityDataSet_boxplots.png;A set of boxplots of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['Variable CH2O is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable FCVC shows some outliers, but we can’t be sure of the same for variable TUE.', 'Outliers seem to be a problem in the dataset.', 'Variable FAF shows some outlier values.', 'Variable NCP doesn’t have any outliers.', 'Variable Height presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
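The overfitting charts listed above all follow the same recipe: fit the same model family at increasing complexity and plot train-set against test-set accuracy. A minimal sketch of how such a chart could be produced, assuming scikit-learn and matplotlib, with X_train, X_test, y_train and y_test as hypothetical names for an already prepared split:

    # Sketch: train/test accuracy across max_depth, as in the
    # *_overfitting_decision_tree.png charts (data split assumed).
    import matplotlib.pyplot as plt
    from sklearn.tree import DecisionTreeClassifier

    depths = range(2, 26)  # x-axis: max depth from 2 to 25
    train_acc, test_acc = [], []
    for d in depths:
        model = DecisionTreeClassifier(max_depth=d).fit(X_train, y_train)
        train_acc.append(model.score(X_train, y_train))
        test_acc.append(model.score(X_test, y_test))

    plt.plot(depths, train_acc, label="train")
    plt.plot(depths, test_acc, label="test")  # widening gap signals overfitting
    plt.xlabel("max depth"); plt.ylabel("accuracy"); plt.legend()
    plt.savefig("overfitting_decision_tree.png")

The same loop, swapped over n_estimators, max_iter or n_neighbors, would yield the gradient boosting, random forest, MLP and KNN variants.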
ObesityDataSet_histograms_symbolic.png;A set of bar charts of the variables ['CAEC', 'CALC', 'MTRANS', 'Gender', 'family_history_with_overweight', 'FAVC', 'SMOKE', 'SCC'].;['All variables, but the class, should be dealt with as numeric.', 'The variable SMOKE can be seen as ordinal.', 'The variable FAVC can be seen as ordinal without losing information.', 'Considering the common semantics of the FAVC and CAEC variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the family_history_with_overweight variable, dummification would be the most adequate encoding.', 'The variable MTRANS can be coded as ordinal without losing information.', 'Feature generation based on variable family_history_with_overweight seems to be promising.', 'Feature generation based on the use of variable SCC wouldn’t be useful, but the use of CAEC seems to be promising.', 'Given the usual semantics of the family_history_with_overweight variable, dummification would have been a better codification.', 'It is better to drop the variable CALC than to remove all records with missing values.', 'Not knowing the semantics of the family_history_with_overweight variable, dummification could have been a more adequate codification.']
ObesityDataSet_class_histogram.png;A bar chart showing the distribution of the target variable NObeyesdad.;['Balancing this dataset would be mandatory to improve the results.']
ObesityDataSet_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
ObesityDataSet_histograms_numeric.png;A set of histograms of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Height can be seen as ordinal.', 'The variable NCP can be seen as ordinal without losing information.', 'Variable FAF is balanced.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable CH2O.', 'Outliers seem to be a problem in the dataset.', 'Variable Height shows a high number of outlier values.', 'Variable TUE doesn’t have any outliers.', 'Variable FCVC presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the Weight and Age variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Age variable, dummification would be the most adequate encoding.', 'The variable Weight can be coded as ordinal without losing information.', 'Feature generation based on variable TUE seems to be promising.', 'Feature generation based on the use of variable Weight wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of the FAF variable, dummification would have been a better codification.', 'It is better to drop the variable Age than to remove all records with missing values.', 'Not knowing the semantics of the CH2O variable, dummification could have been a more adequate codification.']
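Many statements above contrast ordinal coding with dummification. For terminology only, a small pandas sketch of the two encodings; the frame and its CAEC column are illustrative, not taken from the dataset files:

    import pandas as pd

    df = pd.DataFrame({"CAEC": ["no", "Sometimes", "Frequently", "Always"]})

    # Dummification: one Boolean column per category; with many symbolic
    # variables this widens the dataset and can invite the curse of
    # dimensionality mentioned in the statements.
    dummies = pd.get_dummies(df, columns=["CAEC"])

    # Ordinal coding: a single integer column, adequate only when the
    # categories carry a natural order.
    order = {"no": 0, "Sometimes": 1, "Frequently": 2, "Always": 3}
    df["CAEC_ordinal"] = df["CAEC"].map(order)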
customer_segmentation_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Family_Size <= 2.5 and the second with the condition Work_Experience <= 9.5.;['The variable Family_Size discriminates between the target values, as shown in the decision tree.', 'Variable Work_Experience is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that the KNN algorithm classifies (A,B) as B for any k ≤ 11.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that the KNN algorithm classifies (A, not B) as C for any k ≤ 723.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that the KNN algorithm classifies (not A, B) as B for any k ≤ 524.']
customer_segmentation_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 300 episodes.']
customer_segmentation_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
customer_segmentation_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
customer_segmentation_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 5 neighbors.']
customer_segmentation_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 5.']
customer_segmentation_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough to explain half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.']
customer_segmentation_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Work_Experience', 'Family_Size'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Age or Family_Size can be discarded without losing information.', 'The variable Age can be discarded without risk of losing information.', 'Variables Age and Work_Experience seem to be useful for classification tasks.', 'Variables Age and Work_Experience are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Family_Size seems to be relevant for the majority of mining tasks.', 'Variables Family_Size and Work_Experience seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Work_Experience might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection choosing variable Age before variable Family_Size.']
customer_segmentation_boxplots.png;A set of boxplots of the variables ['Age', 'Work_Experience', 'Family_Size'].;['Variable Family_Size is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Work_Experience shows some outliers, but we can’t be sure of the same for variable Family_Size.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows a high number of outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
customer_segmentation_histograms_symbolic.png;A set of bar charts of the variables ['Profession', 'Spending_Score', 'Var_1', 'Gender', 'Ever_Married', 'Graduated'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 'The variable Ever_Married can be seen as ordinal without losing information.', 'Considering the common semantics of the Var_1 and Profession variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Profession variable, dummification would be the most adequate encoding.', 'The variable Graduated can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Graduated wouldn’t be useful, but the use of Profession seems to be promising.', 'Given the usual semantics of the Profession variable, dummification would have been a better codification.', 'It is better to drop the variable Graduated than to remove all records with missing values.', 'Not knowing the semantics of the Spending_Score variable, dummification could have been a more adequate codification.']
customer_segmentation_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Ever_Married', 'Graduated', 'Profession', 'Work_Experience', 'Family_Size', 'Var_1'].;['Discarding variable Var_1 would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Var_1 seems to be promising.', 'It is better to drop the variable Family_Size than to remove all records with missing values.']
customer_segmentation_class_histogram.png;A bar chart showing the distribution of the target variable Segmentation.;['Balancing this dataset would be mandatory to improve the results.']
customer_segmentation_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
customer_segmentation_histograms_numeric.png;A set of histograms of the variables ['Age', 'Work_Experience', 'Family_Size'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Family_Size can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Family_Size is balanced.', 'It is clear that variable Work_Experience shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows some outlier values.', 'Variable Family_Size doesn’t have any outliers.', 'Variable Work_Experience presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the Family_Size and Age variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Age variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable Work_Experience seems to be promising.', 'Feature generation based on the use of variable Work_Experience wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of the Age variable, dummification would have been a better codification.', 'It is better to drop the variable Age than to remove all records with missing values.', 'Not knowing the semantics of the Family_Size variable, dummification could have been a more adequate codification.']
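The missing-value statements weigh discarding records against discarding variables. A pandas sketch of the two options being compared, on a hypothetical frame with NaNs:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"Ever_Married": ["Yes", np.nan, "No"],
                       "Age": [22, 35, np.nan]})

    rows_kept = df.dropna(axis=0)  # drop records with missing values
    cols_kept = df.dropna(axis=1)  # drop variables with missing values

    # The fraction of data retained is what decides between the options.
    print(len(rows_kept) / len(df))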
urinalysis_tests_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 0.1 and the second with the condition pH <= 5.5.;['The variable Age discriminates between the target values, as shown in the decision tree.', 'Variable Age is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 60%.', 'The number of True Positives reported in the same tree is 10.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The recall for the presented tree is lower than its specificity.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as NEGATIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as POSITIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that the KNN algorithm classifies (not A, B) as NEGATIVE for any k ≤ 763.']
urinalysis_tests_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 700 episodes.']
urinalysis_tests_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
urinalysis_tests_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.']
urinalysis_tests_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbours is in overfitting.', 'KNN with fewer than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 3 neighbors.']
urinalysis_tests_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 5 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 5.']
urinalysis_tests_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.']
urinalysis_tests_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough to explain half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 20%.']
urinalysis_tests_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'pH', 'Specific Gravity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables pH or Age can be discarded without losing information.', 'The variable Age can be discarded without risk of losing information.', 'Variables Specific Gravity and Age seem to be useful for classification tasks.', 'Variables Age and pH are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Specific Gravity seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable pH might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection choosing variable Age before variable pH.']
urinalysis_tests_boxplots.png;A set of boxplots of the variables ['Age', 'pH', 'Specific Gravity'].;['Variable pH is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Specific Gravity shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Specific Gravity shows a high number of outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
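Correlation heatmaps such as the ones referenced here are built from the pairwise correlation matrix of the numeric variables; the redundancy claims correspond to off-diagonal values close to 1 or -1. A sketch with pandas and matplotlib, where df is a hypothetical frame holding the numeric columns:

    import matplotlib.pyplot as plt

    corr = df[["Age", "pH", "Specific Gravity"]].corr()  # pairwise correlations

    plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
    plt.xticks(range(len(corr)), corr.columns, rotation=45)
    plt.yticks(range(len(corr)), corr.columns)
    plt.colorbar()
    plt.savefig("correlation_heatmap.png")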
urinalysis_tests_histograms_symbolic.png;A set of bar charts of the variables ['Color', 'Transparency', 'Glucose', 'Protein', 'Epithelial Cells', 'Mucous Threads', 'Amorphous Urates', 'Bacteria', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Gender can be seen as ordinal.', 'The variable Mucous Threads can be seen as ordinal without losing information.', 'Considering the common semantics of the Epithelial Cells and Color variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Amorphous Urates variable, dummification would be the most adequate encoding.', 'The variable Color can be coded as ordinal without losing information.', 'Feature generation based on variable Amorphous Urates seems to be promising.', 'Feature generation based on the use of variable Protein wouldn’t be useful, but the use of Color seems to be promising.', 'Given the usual semantics of the Bacteria variable, dummification would have been a better codification.', 'It is better to drop the variable Bacteria than to remove all records with missing values.', 'Not knowing the semantics of the Epithelial Cells variable, dummification could have been a more adequate codification.']
urinalysis_tests_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Color'].;['Discarding variable Color would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Color seems to be promising.', 'It is better to drop the variable Color than to remove all records with missing values.']
urinalysis_tests_class_histogram.png;A bar chart showing the distribution of the target variable Diagnosis.;['Balancing this dataset would be mandatory to improve the results.']
urinalysis_tests_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
urinalysis_tests_histograms_numeric.png;A set of histograms of the variables ['Age', 'pH', 'Specific Gravity'].;['All variables, but the class, should be dealt with as binary.', 'The variable Specific Gravity can be seen as ordinal.', 'The variable Specific Gravity can be seen as ordinal without losing information.', 'Variable Specific Gravity is balanced.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable pH.', 'Outliers seem to be a problem in the dataset.', 'Variable Specific Gravity shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Specific Gravity presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the Age and pH variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Age variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of pH seems to be promising.', 'Given the usual semantics of the Specific Gravity variable, dummification would have been a better codification.', 'It is better to drop the variable pH than to remove all records with missing values.', 'Not knowing the semantics of the Age variable, dummification could have been a more adequate codification.']
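Where balancing by SMOTE is suggested, the usual implementation is the imbalanced-learn oversampler, which synthesizes minority-class records by interpolating between neighbours instead of discarding majority records as undersampling does. A sketch, assuming imbalanced-learn is installed and X, y are hypothetical names for the encoded features and target:

    from imblearn.over_sampling import SMOTE

    # Oversample every minority class up to the majority count.
    X_balanced, y_balanced = SMOTE(random_state=42).fit_resample(X, y)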
detect_dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Ic <= 71.01 and the second with the condition Vb <= -0.37.;['The variable Ic discriminates between the target values, as shown in the decision tree.', 'Variable Vb is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives reported in the same tree is 50.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that the KNN algorithm classifies (A,B) as 0 for any k ≤ 3.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], the Decision Tree presented classifies (A, not B) as 0.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], the Decision Tree presented classifies (A,B) as 0.']
detect_dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 700 episodes.']
detect_dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
detect_dataset_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.']
detect_dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with fewer than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 6 neighbors.']
detect_dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 5.']
detect_dataset_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.']
detect_dataset_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough to explain half the data variance.', 'Using the first 3 principal components would imply an error between 10 and 20%.']
detect_dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Vc or Va can be discarded without losing information.', 'The variable Ic can be discarded without risk of losing information.', 'Variables Ia and Ic are redundant, but we can’t say the same for the pair Vc and Vb.', 'Variables Ib and Vc are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Vb seems to be relevant for the majority of mining tasks.', 'Variables Ib and Ic seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Ic might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection choosing variable Ic before variable Va.']
detect_dataset_boxplots.png;A set of boxplots of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['Variable Vb is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Vb shows some outliers, but we can’t be sure of the same for variable Va.', 'Outliers seem to be a problem in the dataset.', 'Variable Vb shows some outlier values.', 'Variable Vb doesn’t have any outliers.', 'Variable Ia presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
detect_dataset_class_histogram.png;A bar chart showing the distribution of the target variable Output.;['Balancing this dataset would be mandatory to improve the results.']
detect_dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
detect_dataset_histograms_numeric.png;A set of histograms of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['All variables, but the class, should be dealt with as date.', 'The variable Ic can be seen as ordinal.', 'The variable Vc can be seen as ordinal without losing information.', 'Variable Ia is balanced.', 'It is clear that variable Va shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Ia shows a high number of outlier values.', 'Variable Ic doesn’t have any outliers.', 'Variable Ic presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the Ia and Ib variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Vc variable, dummification would be the most adequate encoding.', 'The variable Vb can be coded as ordinal without losing information.', 'Feature generation based on variable Vb seems to be promising.', 'Feature generation based on the use of variable Ic wouldn’t be useful, but the use of Ia seems to be promising.', 'Given the usual semantics of the Ib variable, dummification would have been a better codification.', 'It is better to drop the variable Ia than to remove all records with missing values.', 'Not knowing the semantics of the Ia variable, dummification could have been a more adequate codification.']
diabetes_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition BMI <= 29.85 and the second with the condition Age <= 27.5.;['The variable BMI discriminates between the target values, as shown in the decision tree.', 'Variable BMI is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of True Positives reported in the same tree is 30.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is higher than its specificity.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], the Decision Tree presented classifies (not A, not B) as 1.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that the KNN algorithm classifies (A, not B) as 0 for any k ≤ 98.']
diabetes_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 300 episodes.']
diabetes_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
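The depth-2 decision-tree images described throughout can be reproduced with scikit-learn's plotting helper. A sketch, with X (a DataFrame of features) and y as hypothetical names for prepared data:

    import matplotlib.pyplot as plt
    from sklearn.tree import DecisionTreeClassifier, plot_tree

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)  # e.g. BMI/Age splits

    plt.figure(figsize=(8, 5))
    plot_tree(tree, feature_names=list(X.columns), class_names=True, filled=True)
    plt.savefig("decision_tree.png")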
diabetes_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
diabetes_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 2 neighbors.']
diabetes_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 12 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 5.']
diabetes_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.']
diabetes_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough to explain half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.']
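The PCA charts plot the explained variance ratio per component, and the 'error' statements read off one minus the cumulative ratio of the retained components. A sketch, assuming scikit-learn and a numeric matrix X (hypothetical name):

    import numpy as np
    from sklearn.decomposition import PCA

    pca = PCA().fit(X)  # one component per variable
    ratios = pca.explained_variance_ratio_  # the bars in the *_pca.png charts

    cumulative = np.cumsum(ratios)
    k = 2  # keeping the first k components
    print(f"implied error with {k} components: {1 - cumulative[k - 1]:.1%}")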
diabetes_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Age or Insulin can be discarded without losing information.', 'The variable DiabetesPedigreeFunction can be discarded without risk of losing information.', 'Variables Age and SkinThickness are redundant, but we can’t say the same for the pair BMI and BloodPressure.', 'Variables DiabetesPedigreeFunction and Age are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable SkinThickness seems to be relevant for the majority of mining tasks.', 'Variables Insulin and Glucose seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Insulin might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection choosing variable DiabetesPedigreeFunction before variable Pregnancies.']
diabetes_boxplots.png;A set of boxplots of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['Variable DiabetesPedigreeFunction is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Glucose shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Pregnancies shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable BMI presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
diabetes_class_histogram.png;A bar chart showing the distribution of the target variable Outcome.;['Balancing this dataset would be mandatory to improve the results.']
diabetes_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
diabetes_histograms_numeric.png;A set of histograms of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Age can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Pregnancies is balanced.', 'It is clear that variable DiabetesPedigreeFunction shows some outliers, but we can’t be sure of the same for variable Glucose.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable BMI doesn’t have any outliers.', 'Variable BloodPressure presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the BloodPressure and Pregnancies variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the BMI variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable BMI seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of Pregnancies seems to be promising.', 'Given the usual semantics of the BMI variable, dummification would have been a better codification.', 'It is better to drop the variable BMI than to remove all records with missing values.', 'Not knowing the semantics of the SkinThickness variable, dummification could have been a more adequate codification.']
Placement_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition ssc_p <= 60.09 and the second with the condition hsc_p <= 70.24.;['The variable ssc_p discriminates between the target values, as shown in the decision tree.', 'Variable hsc_p is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The accuracy for the presented tree is higher than 75%.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that the KNN algorithm classifies (not A, not B) as Placed for any k ≤ 68.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], the Decision Tree presented classifies (A, not B) as Placed.']
Placement_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for longer than 500 episodes.']
Placement_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
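Several boxplot statements argue that scaling is a precondition for distance-based methods such as KNN, since unscaled wide-range variables dominate the distance metric. A sketch of the comparison, assuming scikit-learn and hypothetical names for a prepared split:

    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    raw = KNeighborsClassifier().fit(X_train, y_train)
    scaled = make_pipeline(StandardScaler(), KNeighborsClassifier())
    scaled.fit(X_train, y_train)

    # Comparing the two test scores shows how much scaling matters here.
    print(raw.score(X_test, y_test), scaled.score(X_test, y_test))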
shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] Placement_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] Placement_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] Placement_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] Placement_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 30%.'] Placement_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables hsc_p or mba_p can be discarded without losing information.', 'The variable mba_p can be discarded without risking losing information.', 'Variables hsc_p and ssc_p are redundant, but we can’t say the same for the pair degree_p and etest_p.', 'Variables hsc_p and etest_p are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ssc_p seems to be relevant for the majority of mining tasks.', 'Variables hsc_p and degree_p seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable degree_p might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable hsc_p previously than variable mba_p.'] Placement_boxplots.png;A set of boxplots of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['Variable etest_p is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable mba_p shows some outliers, but we can’t be sure of the same for variable ssc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable hsc_p shows some outlier values.', 'Variable hsc_p doesn’t have any outliers.', 'Variable hsc_p presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Placement_histograms_symbolic.png;A set of bar charts of the variables ['hsc_s', 'degree_t', 'gender', 'ssc_b', 'hsc_b', 'workex', 'specialisation'].;['All variables, but the class, should be dealt with as numeric.', 'The variable degree_t can be seen as ordinal.', 'The variable specialisation can be seen as ordinal without losing information.', 'Considering the common semantics for specialisation and hsc_s variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for specialisation variable, dummification would be the most adequate encoding.', 'The variable ssc_b can be coded as ordinal without losing information.', 'Feature generation based on variable hsc_s seems to be promising.', 'Feature generation based on the use of variable hsc_s wouldn’t be useful, but the use of degree_t seems to be promising.', 'Given the usual semantics of hsc_s variable, dummification would have been a better codification.', 'It is better to drop the variable ssc_b than removing all records with missing values.', 'Not knowing the semantics of hsc_b variable, dummification could have been a more adequate codification.'] Placement_class_histogram.png;A bar chart showing the distribution of the target variable status.;['Balancing this dataset would be mandatory to improve the results.'] 
Placement_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Placement_histograms_numeric.png;A set of histograms of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['All variables, but the class, should be dealt with as numeric.', 'The variable etest_p can be seen as ordinal.', 'The variable mba_p can be seen as ordinal without losing information.', 'Variable degree_p is balanced.', 'It is clear that variable mba_p shows some outliers, but we can’t be sure of the same for variable hsc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable mba_p shows some outlier values.', 'Variable ssc_p doesn’t have any outliers.', 'Variable degree_p presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ssc_p and hsc_p variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the ssc_p variable, dummification would be the most adequate encoding.', 'The variable degree_p can be coded as ordinal without losing information.', 'Feature generation based on variable ssc_p seems to be promising.', 'Feature generation based on the use of variable etest_p wouldn’t be useful, but the use of ssc_p seems to be promising.', 'Given the usual semantics of the degree_p variable, dummification would have been a better codification.', 'It is better to drop the variable etest_p than to remove all records with missing values.', 'Not knowing the semantics of the hsc_p variable, dummification could have been a more adequate codification.'] Liver_Patient_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Alkphos <= 211.5 and the second with the condition Sgot <= 26.5.;['The variable Sgot discriminates between the target values, as shown in the decision tree.', 'Variable Alkphos is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The precision for the presented tree is higher than its recall.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that the Naive Bayes algorithm classifies (A, not B) as 1.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that the KNN algorithm classifies (not A, not B) as 2 for any k ≤ 94.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], the Decision Tree presented classifies (not A, B) as 1.'] Liver_Patient_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the
accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] Liver_Patient_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] Liver_Patient_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] Liver_Patient_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] Liver_Patient_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] Liver_Patient_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] Liver_Patient_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 10 and 25%.'] Liver_Patient_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['The intrinsic dimensionality of this dataset is 8.', 'One of the variables ALB or DB can be discarded without losing information.', 'The variable AG_Ratio can be discarded without risking losing information.', 'Variables AG_Ratio and DB are redundant, but we can’t say the same for the pair Sgpt and Sgot.', 'Variables Sgpt and AG_Ratio are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Sgpt seems to be relevant for the majority of mining tasks.', 'Variables Age and DB seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most of the training algorithms in this dataset.', 'Removing variable DB might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable DB before variable TB.'] Liver_Patient_boxplots.png;A set of boxplots of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['Variable ALB is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Sgpt shows some outliers, but we can’t be sure of the same for variable TP.', 'Outliers seem to be a problem in the dataset.', 'Variable Sgot shows a high number of outlier values.', 'Variable TP doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Liver_Patient_histograms_symbolic.png;A set of bar charts of the variables ['Gender'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of the Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than to remove all records with missing values.', 'Not knowing the semantics of the Gender variable, dummification could have been a more adequate codification.'] Liver_Patient_mv.png;A bar chart showing the number of missing values per variable of the dataset.
The variables that have missing values are: ['AG_Ratio'].;['Discarding variable AG_Ratio would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable AG_Ratio seems to be promising.', 'It is better to drop the variable AG_Ratio than to remove all records with missing values.'] Liver_Patient_class_histogram.png;A bar chart showing the distribution of the target variable Selector.;['Balancing this dataset would be mandatory to improve the results.'] Liver_Patient_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Liver_Patient_histograms_numeric.png;A set of histograms of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['All variables, but the class, should be dealt with as binary.', 'The variable ALB can be seen as ordinal.', 'The variable AG_Ratio can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Sgot shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable ALB shows a high number of outlier values.', 'Variable DB doesn’t have any outliers.', 'Variable Alkphos presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and TB variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the TB variable, dummification would be the most adequate encoding.', 'The variable AG_Ratio can be coded as ordinal without losing information.', 'Feature generation based on variable ALB seems to be promising.', 'Feature generation based on the use of variable Sgpt wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of the Alkphos variable, dummification would have been a better codification.', 'It is better to drop the variable AG_Ratio than to remove all records with missing values.', 'Not knowing the semantics of the AG_Ratio variable, dummification could have been a more adequate codification.'] Hotel_Reservations_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition lead_time <= 151.5 and the second with the condition no_of_special_requests <= 2.5.;['The variable lead_time discriminates between the target values, as shown in the decision tree.', 'Variable no_of_special_requests is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than
the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that the KNN algorithm classifies (A, not B) as Canceled for any k ≤ 4955.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that the KNN algorithm classifies (A,B) as Not_Canceled for any k ≤ 10612.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that the KNN algorithm classifies (A,B) as Canceled for any k ≤ 9756.'] Hotel_Reservations_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] Hotel_Reservations_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] Hotel_Reservations_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] Hotel_Reservations_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] Hotel_Reservations_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] Hotel_Reservations_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller
with depth due to the overfitting phenomenon.'] Hotel_Reservations_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 20%.'] Hotel_Reservations_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables arrival_month or no_of_special_requests can be discarded without losing information.', 'The variable no_of_adults can be discarded without risking losing information.', 'Variables no_of_adults and arrival_month are redundant, but we can’t say the same for the pair no_of_week_nights and no_of_weekend_nights.', 'Variables no_of_adults and no_of_week_nights are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable arrival_month seems to be relevant for the majority of mining tasks.', 'Variables arrival_month and no_of_adults seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most of the training algorithms in this dataset.', 'Removing variable no_of_adults might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable arrival_date before variable no_of_week_nights.'] Hotel_Reservations_boxplots.png;A set of boxplots of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['Variable arrival_date is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable no_of_weekend_nights shows some outliers, but we can’t be sure of the same for variable lead_time.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_week_nights shows a high number of outlier values.', 'Variable no_of_week_nights doesn’t have any outliers.', 'Variable avg_price_per_room presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Hotel_Reservations_histograms_symbolic.png;A set of bar charts of the variables ['type_of_meal_plan', 'room_type_reserved', 'required_car_parking_space', 'arrival_year', 'repeated_guest'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable room_type_reserved can be seen as ordinal.', 'The variable type_of_meal_plan can be seen as ordinal without losing information.', 'Considering the common semantics for arrival_year and
type_of_meal_plan variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the type_of_meal_plan variable, dummification would be the most adequate encoding.', 'The variable type_of_meal_plan can be coded as ordinal without losing information.', 'Feature generation based on variable arrival_year seems to be promising.', 'Feature generation based on the use of variable required_car_parking_space wouldn’t be useful, but the use of type_of_meal_plan seems to be promising.', 'Given the usual semantics of the required_car_parking_space variable, dummification would have been a better codification.', 'It is better to drop the variable required_car_parking_space than to remove all records with missing values.', 'Not knowing the semantics of the arrival_year variable, dummification could have been a more adequate codification.'] Hotel_Reservations_class_histogram.png;A bar chart showing the distribution of the target variable booking_status.;['Balancing this dataset would be mandatory to improve the results.'] Hotel_Reservations_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Hotel_Reservations_histograms_numeric.png;A set of histograms of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['All variables, but the class, should be dealt with as date.', 'The variable arrival_date can be seen as ordinal.', 'The variable no_of_children can be seen as ordinal without losing information.', 'Variable no_of_children is balanced.', 'It is clear that variable no_of_special_requests shows some outliers, but we can’t be sure of the same for variable avg_price_per_room.', 'Outliers seem to be a problem in the dataset.', 'Variable arrival_date shows a high number of outlier values.', 'Variable no_of_adults doesn’t have any outliers.', 'Variable no_of_weekend_nights presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for arrival_date and no_of_adults variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the no_of_special_requests variable, dummification would be the most adequate encoding.', 'The variable avg_price_per_room can be coded as ordinal without losing information.', 'Feature generation based on variable no_of_special_requests seems to be promising.', 'Feature generation based on the use of variable no_of_week_nights wouldn’t be useful, but the use of no_of_adults seems to be promising.', 'Given the usual semantics of the no_of_adults variable, dummification would have been a better codification.', 'It is better to drop the variable arrival_date than to remove all records with missing values.', 'Not knowing the semantics of the no_of_week_nights variable, dummification could have been a more adequate codification.'] StressLevelDataset_decision_tree.png;An image
showing a decision tree with depth = 2 where the first decision is made with the condition basic_needs <= 3.5 and the second with the condition bullying <= 1.5.;['The variable bullying discriminates between the target values, as shown in the decision tree.', 'Variable basic_needs is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The variable basic_needs seems to be one of the four most relevant features.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], the Decision Tree presented classifies (A, not B) as 2.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that the KNN algorithm classifies (A, not B) as 0 for any k ≤ 271.'] StressLevelDataset_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] StressLevelDataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] StressLevelDataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] StressLevelDataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] StressLevelDataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models
with more than 2 nodes of depth.'] StressLevelDataset_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 25%.'] StressLevelDataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables headache or bullying can be discarded without losing information.', 'The variable breathing_problem can be discarded without risking losing information.', 'Variables anxiety_level and bullying are redundant, but we can’t say the same for the pair study_load and living_conditions.', 'Variables bullying and depression are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable breathing_problem seems to be relevant for the majority of mining tasks.', 'Variables living_conditions and breathing_problem seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most of the training algorithms in this dataset.', 'Removing variable basic_needs might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable basic_needs before variable self_esteem.'] StressLevelDataset_boxplots.png;A set of boxplots of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['Variable study_load is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable self_esteem shows some outliers, but we can’t be sure of the same for variable anxiety_level.', 'Outliers seem to be a problem in the dataset.', 'Variable basic_needs shows some outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable depression presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] StressLevelDataset_histograms_symbolic.png;A set of bar charts of the variables ['mental_health_history'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable mental_health_history can be seen as ordinal.', 'The variable mental_health_history can be seen as ordinal without losing information.', 'Considering the common semantics for mental_health_history and variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the mental_health_history variable, dummification would be the most
adequate encoding.', 'The variable mental_health_history can be coded as ordinal without losing information.', 'Feature generation based on variable mental_health_history seems to be promising.', 'Feature generation based on the use of variable mental_health_history wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of the mental_health_history variable, dummification would have been a better codification.', 'It is better to drop the variable mental_health_history than to remove all records with missing values.', 'Not knowing the semantics of the mental_health_history variable, dummification could have been a more adequate codification.'] StressLevelDataset_class_histogram.png;A bar chart showing the distribution of the target variable stress_level.;['Balancing this dataset would be mandatory to improve the results.'] StressLevelDataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] StressLevelDataset_histograms_numeric.png;A set of histograms of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable living_conditions can be seen as ordinal.', 'The variable breathing_problem can be seen as ordinal without losing information.', 'Variable breathing_problem is balanced.', 'It is clear that variable depression shows some outliers, but we can’t be sure of the same for variable study_load.', 'Outliers seem to be a problem in the dataset.', 'Variable bullying shows a high number of outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable anxiety_level presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for sleep_quality and anxiety_level variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the headache variable, dummification would be the most adequate encoding.', 'The variable breathing_problem can be coded as ordinal without losing information.', 'Feature generation based on variable self_esteem seems to be promising.', 'Feature generation based on the use of variable anxiety_level wouldn’t be useful, but the use of self_esteem seems to be promising.', 'Given the usual semantics of the study_load variable, dummification would have been a better codification.', 'It is better to drop the variable depression than to remove all records with missing values.', 'Not knowing the semantics of the basic_needs variable, dummification could have been a more adequate codification.'] WineQT_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition density <= 1.0 and the second with the condition chlorides <= 0.08.;['The variable chlorides discriminates between the target values, as shown in the decision tree.', 'Variable density is one of the most relevant variables.',
'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that the KNN algorithm classifies (not A, not B) as 6 for any k ≤ 447.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that the Naive Bayes algorithm classifies (A, not B) as 3.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that the KNN algorithm classifies (not A, not B) as 5 for any k ≤ 172.'] WineQT_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] WineQT_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] WineQT_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] WineQT_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] WineQT_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] WineQT_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 15 and 25%.']
WineQT_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables citric acid or residual sugar can be discarded without losing information.', 'The variable chlorides can be discarded without risking losing information.', 'Variables sulphates and pH are redundant, but we can’t say the same for the pair free sulfur dioxide and volatile acidity.', 'Variables free sulfur dioxide and total sulfur dioxide are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable volatile acidity seems to be relevant for the majority of mining tasks.', 'Variables chlorides and citric acid seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most of the training algorithms in this dataset.', 'Removing variable fixed acidity might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable pH before variable chlorides.'] WineQT_boxplots.png;A set of boxplots of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['Variable citric acid is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable pH shows some outliers, but we can’t be sure of the same for variable volatile acidity.', 'Outliers seem to be a problem in the dataset.', 'Variable free sulfur dioxide shows a high number of outlier values.', 'Variable chlorides doesn’t have any outliers.', 'Variable total sulfur dioxide presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] WineQT_class_histogram.png;A bar chart showing the distribution of the target variable quality.;['Balancing this dataset would be mandatory to improve the results.'] WineQT_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] WineQT_histograms_numeric.png;A set of histograms of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['All variables, but the class, should be dealt with as
numeric.', 'The variable fixed acidity can be seen as ordinal.', 'The variable pH can be seen as ordinal without losing information.', 'Variable free sulfur dioxide is balanced.', 'It is clear that variable alcohol shows some outliers, but we can’t be sure of the same for variable sulphates.', 'Outliers seem to be a problem in the dataset.', 'Variable sulphates shows a high number of outlier values.', 'Variable pH doesn’t have any outliers.', 'Variable citric acid presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for citric acid and fixed acidity variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the citric acid variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable density seems to be promising.', 'Feature generation based on the use of variable sulphates wouldn’t be useful, but the use of fixed acidity seems to be promising.', 'Given the usual semantics of the citric acid variable, dummification would have been a better codification.', 'It is better to drop the variable free sulfur dioxide than to remove all records with missing values.', 'Not knowing the semantics of the pH variable, dummification could have been a more adequate codification.'] loan_data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Loan_Amount_Term <= 420.0 and the second with the condition ApplicantIncome <= 1519.0.;['The variable ApplicantIncome discriminates between the target values, as shown in the decision tree.', 'Variable ApplicantIncome is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is lower than 90%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that the KNN algorithm classifies (not A, not B) as Y for any k ≤ 3.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that the KNN algorithm classifies (not A, B) as N for any k ≤ 204.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that the Naive Bayes algorithm classifies (not A, not B) as N.'] loan_data_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] loan_data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of
estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] loan_data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] loan_data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] loan_data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] loan_data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] loan_data_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.'] loan_data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables CoapplicantIncome or ApplicantIncome can be discarded without losing information.', 'The variable CoapplicantIncome can be discarded without risking losing information.', 'Variables ApplicantIncome and LoanAmount seem to be useful for classification tasks.', 'Variables Loan_Amount_Term and CoapplicantIncome are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ApplicantIncome seems to be relevant for the majority of mining tasks.', 'Variables CoapplicantIncome and ApplicantIncome seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most of the training algorithms in this dataset.', 'Removing variable LoanAmount might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable CoapplicantIncome before variable Loan_Amount_Term.'] loan_data_boxplots.png;A set of boxplots of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['Variable Loan_Amount_Term is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Loan_Amount_Term shows some outliers, but we can’t be sure of the same for variable ApplicantIncome.', 'Outliers seem to be a problem in the dataset.', 'Variable ApplicantIncome shows a high number of outlier values.', 'Variable Loan_Amount_Term doesn’t have any outliers.', 'Variable ApplicantIncome presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] loan_data_histograms_symbolic.png;A set of bar charts of the variables ['Dependents', 'Property_Area', 'Gender', 'Married', 'Education', 'Self_Employed', 'Credit_History'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Credit_History can be seen as ordinal.', 'The variable Married can be seen as ordinal without losing information.', 'Considering the common semantics for Credit_History and Dependents variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the Property_Area variable, dummification would be the most adequate encoding.', 'The variable Dependents can be coded as ordinal without losing information.', 'Feature generation based on variable Dependents seems to be promising.', 'Feature generation based on the use of variable Self_Employed wouldn’t be useful, but the use of Dependents seems to be promising.', 'Given the usual semantics of the Education variable, dummification would have been a better codification.', 'It is better to drop the variable Property_Area than to remove all records with missing values.', 'Not
knowing the semantics of the Dependents variable, dummification could have been a more adequate codification.'] loan_data_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Gender', 'Dependents', 'Self_Employed', 'Loan_Amount_Term', 'Credit_History'].;['Discarding variable Gender would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Dependents seems to be promising.', 'It is better to drop the variable Self_Employed than to remove all records with missing values.'] loan_data_class_histogram.png;A bar chart showing the distribution of the target variable Loan_Status.;['Balancing this dataset would be mandatory to improve the results.'] loan_data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] loan_data_histograms_numeric.png;A set of histograms of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['All variables, but the class, should be dealt with as date.', 'The variable Loan_Amount_Term can be seen as ordinal.', 'The variable CoapplicantIncome can be seen as ordinal without losing information.', 'Variable LoanAmount is balanced.', 'It is clear that variable LoanAmount shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable Loan_Amount_Term shows a high number of outlier values.', 'Variable Loan_Amount_Term doesn’t have any outliers.', 'Variable LoanAmount presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for LoanAmount and ApplicantIncome variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for the LoanAmount variable, dummification would be the most adequate encoding.', 'The variable LoanAmount can be coded as ordinal without losing information.', 'Feature generation based on variable LoanAmount seems to be promising.', 'Feature generation based on the use of variable CoapplicantIncome wouldn’t be useful, but the use of ApplicantIncome seems to be promising.', 'Given the usual semantics of the ApplicantIncome variable, dummification would have been a better codification.', 'It is better to drop the variable ApplicantIncome than to remove all records with missing values.', 'Not knowing the semantics of the Loan_Amount_Term variable, dummification could have been a more adequate codification.'] Dry_Bean_Dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Area
<= 39172.5 and the second with the condition AspectRation <= 1.86.;['The variable Area discriminates between the target values, as shown in the decision tree.', 'Variable AspectRation is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The precision for the presented tree is higher than its specificity.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that the KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 1284.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], the Decision Tree presented classifies (not A, B) as BOMBAY.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that the KNN algorithm classifies (A,B) as DERMASON for any k ≤ 2501.'] Dry_Bean_Dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] Dry_Bean_Dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] Dry_Bean_Dataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] Dry_Bean_Dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] Dry_Bean_Dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of
depth.'] Dry_Bean_Dataset_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] Dry_Bean_Dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['The intrinsic dimensionality of this dataset is 9.', 'One of the variables MinorAxisLength or Eccentricity can be discarded without losing information.', 'The variable Eccentricity can be discarded without risking losing information.', 'Variables MinorAxisLength and Solidity are redundant, but we can’t say the same for the pair ShapeFactor1 and Extent.', 'Variables roundness and ShapeFactor1 are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ShapeFactor1 seems to be relevant for the majority of mining tasks.', 'Variables Perimeter and Eccentricity seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Solidity might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable Eccentricity before variable EquivDiameter.'] Dry_Bean_Dataset_boxplots.png;A set of boxplots of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['Variable MinorAxisLength is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Solidity shows some outliers, but we can’t be sure of the same for variable EquivDiameter.', 'Outliers seem to be a problem in the dataset.', 'Variable Solidity shows some outlier values.', 'Variable roundness doesn’t have any outliers.', 'Variable Eccentricity presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Dry_Bean_Dataset_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] Dry_Bean_Dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] Dry_Bean_Dataset_histograms_numeric.png;A set of histograms of the variables ['Area',
'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['All variables, but the class, should be dealt with as date.', 'The variable Perimeter can be seen as ordinal.', 'The variable Extent can be seen as ordinal without losing information.', 'Variable Solidity is balanced.', 'It is clear that variable EquivDiameter shows some outliers, but we can’t be sure of the same for variable MinorAxisLength.', 'Outliers seem to be a problem in the dataset.', 'Variable Area shows some outlier values.', 'Variable roundness doesn’t have any outliers.', 'Variable Solidity presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for AspectRation and Area variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for EquivDiameter variable, dummification would be the most adequate encoding.', 'The variable roundness can be coded as ordinal without losing information.', 'Feature generation based on variable EquivDiameter seems to be promising.', 'Feature generation based on the use of variable MinorAxisLength wouldn’t be useful, but the use of Area seems to be promising.', 'Given the usual semantics of roundness variable, dummification would have been a better codification.', 'It is better to drop the variable Solidity than to remove all records with missing values.', 'Not knowing the semantics of Perimeter variable, dummification could have been a more adequate codification.'] credit_customers_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition existing_credits <= 1.5 and the second with the condition residence_since <= 3.5.;['The variable residence_since discriminates between the target values, as shown in the decision tree.', 'Variable residence_since is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The accuracy for the presented tree is higher than its recall.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 107.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that Naive Bayes algorithm classifies (not A, B) as bad.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 264.'] credit_customers_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
credit_customers_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] credit_customers_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] credit_customers_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] credit_customers_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] credit_customers_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] credit_customers_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 5 and 20%.'] credit_customers_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables age or credit_amount can be discarded without losing information.', 'The variable existing_credits can be discarded without risking losing information.', 'Variables existing_credits and credit_amount are redundant, but we can’t say the same for the pair duration and installment_commitment.', 'Variables residence_since and existing_credits are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables age and installment_commitment seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable credit_amount might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable existing_credits before variable credit_amount.'] credit_customers_boxplots.png;A set of boxplots of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['Variable existing_credits is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable residence_since.', 'Outliers seem to be a problem in the dataset.', 'Variable age shows some outlier values.', 'Variable residence_since doesn’t have any outliers.', 'Variable age presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] credit_customers_histograms_symbolic.png;A set of bar charts of the variables ['checking_status', 'employment', 'other_parties', 'other_payment_plans', 'housing', 'num_dependents', 'own_telephone', 'foreign_worker'].;['All variables, but the class, should be dealt with as numeric.', 'The variable other_parties can be seen as ordinal.', 'The variable employment can be seen as ordinal without losing information.', 'Considering the common semantics for checking_status and employment variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for housing variable, dummification would be the most adequate encoding.', 'The variable checking_status can be coded as ordinal without losing information.', 'Feature generation based on variable num_dependents seems to be promising.', 'Feature generation based on the use of variable employment wouldn’t be useful, but the use of checking_status seems to be promising.', 'Given the usual semantics of own_telephone variable, dummification would have been a better codification.', 'It is better to drop the
variable num_dependents than to remove all records with missing values.', 'Not knowing the semantics of num_dependents variable, dummification could have been a more adequate codification.'] credit_customers_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] credit_customers_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] credit_customers_histograms_numeric.png;A set of histograms of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['All variables, but the class, should be dealt with as binary.', 'The variable credit_amount can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 'Variable duration is balanced.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable credit_amount.', 'Outliers seem to be a problem in the dataset.', 'Variable residence_since shows some outlier values.', 'Variable credit_amount doesn’t have any outliers.', 'Variable existing_credits presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for residence_since and duration variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for installment_commitment variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable residence_since seems to be promising.', 'Feature generation based on the use of variable credit_amount wouldn’t be useful, but the use of duration seems to be promising.', 'Given the usual semantics of age variable, dummification would have been a better codification.', 'It is better to drop the variable residence_since than to remove all records with missing values.', 'Not knowing the semantics of installment_commitment variable, dummification could have been a more adequate codification.'] weatherAUS_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rainfall <= 0.1 and the second with the condition Pressure3pm <= 1009.65.;['The variable Pressure3pm discriminates between the target values, as shown in the decision tree.', 'Variable Rainfall is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'The accuracy for the presented tree is higher than 75%.', 'Considering that A=True<=>[Rainfall
<= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, not B) as No.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, B) as Yes.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that Naive Bayes algorithm classifies (not A, not B) as No.'] weatherAUS_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] weatherAUS_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] weatherAUS_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] weatherAUS_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] weatherAUS_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] weatherAUS_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] weatherAUS_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 20%.'] weatherAUS_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Pressure9am or Pressure3pm can be discarded without losing information.', 'The variable Pressure9am can be discarded without risking losing information.', 'Variables Rainfall and Pressure3pm are redundant, but we can’t say the same for the pair Pressure9am and Cloud3pm.', 'Variables Temp3pm and Rainfall are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Temp3pm seems to be relevant for the majority of mining tasks.', 'Variables Pressure9am and Cloud3pm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Cloud9am might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable Cloud9am before variable Pressure9am.'] weatherAUS_boxplots.png;A set of boxplots of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['Variable Pressure9am is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cloud9am shows some outliers, but we can’t be sure of the same for variable WindSpeed9am.', 'Outliers seem to be a problem in the dataset.', 'Variable Rainfall shows a high number of outlier values.', 'Variable Cloud9am doesn’t have any outliers.', 'Variable Cloud9am presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] weatherAUS_histograms_symbolic.png;A set of bar charts of the variables ['Location', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'RainToday'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable WindDir9am can be seen as ordinal.', 'The variable WindDir3pm can be seen as ordinal without losing information.', 'Considering the common semantics for Location and WindGustDir variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindGustDir variable, dummification would be the most adequate encoding.', 'The variable WindDir3pm can be coded as ordinal without losing information.', 'Feature generation based on variable WindDir3pm seems to be promising.', 'Feature generation based on the use of variable WindDir3pm wouldn’t be useful, but the use of Location seems to be promising.', 'Given the usual semantics of RainToday variable, dummification would have been a better codification.', 'It is better to drop the variable Location than to remove all records with missing values.', 'Not knowing the semantics of WindGustDir variable, dummification could have been a more
adequate codification.'] weatherAUS_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Rainfall', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm', 'RainToday'].;['Discarding variable Pressure9am would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable RainToday seems to be promising.', 'It is better to drop the variable Cloud9am than to remove all records with missing values.'] weatherAUS_class_histogram.png;A bar chart showing the distribution of the target variable RainTomorrow.;['Balancing this dataset would be mandatory to improve the results.'] weatherAUS_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] weatherAUS_histograms_numeric.png;A set of histograms of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['All variables, but the class, should be dealt with as date.', 'The variable Rainfall can be seen as ordinal.', 'The variable Pressure3pm can be seen as ordinal without losing information.', 'Variable Cloud3pm is balanced.', 'It is clear that variable Pressure9am shows some outliers, but we can’t be sure of the same for variable Rainfall.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure9am shows some outlier values.', 'Variable Cloud3pm doesn’t have any outliers.', 'Variable Pressure9am presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Pressure3pm and Rainfall variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Rainfall variable, dummification would be the most adequate encoding.', 'The variable Cloud9am can be coded as ordinal without losing information.', 'Feature generation based on variable Temp3pm seems to be promising.', 'Feature generation based on the use of variable Cloud9am wouldn’t be useful, but the use of Rainfall seems to be promising.', 'Given the usual semantics of WindSpeed9am variable, dummification would have been a better codification.', 'It is better to drop the variable Rainfall than to remove all records with missing values.', 'Not knowing the semantics of Rainfall variable, dummification could have been a more adequate codification.'] car_insurance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition displacement <= 1196.5 and the second with the condition height <= 1519.0.;['The variable
displacement discriminates between the target values, as shown in the decision tree.', 'Variable displacement is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 60%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 2141.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 686.'] car_insurance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] car_insurance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] car_insurance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] car_insurance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] car_insurance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] car_insurance_overfitting_dt_acc_rec.png;A multi-line chart showing
the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] car_insurance_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 25%.'] car_insurance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables age_of_car or airbags can be discarded without losing information.', 'The variable length can be discarded without risking losing information.', 'Variables age_of_car and policy_tenure are redundant, but we can’t say the same for the pair height and length.', 'Variables age_of_car and gross_weight are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable height seems to be relevant for the majority of mining tasks.', 'Variables gross_weight and width seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable length might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable length before variable gross_weight.'] car_insurance_boxplots.png;A set of boxplots of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['Variable height is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable displacement shows some outliers, but we can’t be sure of the same for variable policy_tenure.', 'Outliers seem to be a problem in the dataset.', 'Variable airbags shows some outlier values.', 'Variable width doesn’t have any outliers.', 'Variable length presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] car_insurance_histograms_symbolic.png;A set of bar charts of the variables ['area_cluster', 'segment', 'model', 'fuel_type', 'max_torque', 'max_power', 'steering_type', 'is_esc', 'is_adjustable_steering'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable segment can be seen as ordinal.', 'The variable is_esc can be seen as ordinal without losing information.', 'Considering the common semantics for segment and area_cluster variables, dummification, if
applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for max_torque variable, dummification would be the most adequate encoding.', 'The variable max_torque can be coded as ordinal without losing information.', 'Feature generation based on variable area_cluster seems to be promising.', 'Feature generation based on the use of variable steering_type wouldn’t be useful, but the use of area_cluster seems to be promising.', 'Given the usual semantics of model variable, dummification would have been a better codification.', 'It is better to drop the variable steering_type than to remove all records with missing values.', 'Not knowing the semantics of is_esc variable, dummification could have been a more adequate codification.'] car_insurance_class_histogram.png;A bar chart showing the distribution of the target variable is_claim.;['Balancing this dataset would be mandatory to improve the results.'] car_insurance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] car_insurance_histograms_numeric.png;A set of histograms of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['All variables, but the class, should be dealt with as numeric.', 'The variable age_of_car can be seen as ordinal.', 'The variable height can be seen as ordinal without losing information.', 'Variable displacement is balanced.', 'It is clear that variable displacement shows some outliers, but we can’t be sure of the same for variable age_of_car.', 'Outliers seem to be a problem in the dataset.', 'Variable displacement shows some outlier values.', 'Variable width doesn’t have any outliers.', 'Variable height presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for displacement and policy_tenure variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for length variable, dummification would be the most adequate encoding.', 'The variable age_of_car can be coded as ordinal without losing information.', 'Feature generation based on variable height seems to be promising.', 'Feature generation based on the use of variable age_of_car wouldn’t be useful, but the use of policy_tenure seems to be promising.', 'Given the usual semantics of age_of_policyholder variable, dummification would have been a better codification.', 'It is better to drop the variable gross_weight than to remove all records with missing values.', 'Not knowing the semantics of displacement variable, dummification could have been a more adequate codification.'] heart_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition slope <= 1.5 and the second with the condition restecg <= 0.5.;['The variable slope discriminates between the target values, as shown in the decision tree.', 'Variable slope is one of the most relevant
variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 75%.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The precision for the presented tree is lower than its specificity.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that Naive Bayes algorithm classifies (not A, B) as 0.'] heart_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] heart_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] heart_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] heart_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] heart_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] heart_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
heart_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 9 principal components would imply an error between 15 and 20%.'] heart_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables restecg or age can be discarded without losing information.', 'The variable trestbps can be discarded without risking losing information.', 'Variables cp and age are redundant, but we can’t say the same for the pair ca and trestbps.', 'Variables restecg and oldpeak are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable thalach seems to be relevant for the majority of mining tasks.', 'Variables cp and chol seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable age might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable restecg before variable slope.'] heart_boxplots.png;A set of boxplots of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['Variable thal is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable trestbps shows some outliers, but we can’t be sure of the same for variable restecg.', 'Outliers seem to be a problem in the dataset.', 'Variable chol shows some outlier values.', 'Variable restecg doesn’t have any outliers.', 'Variable restecg presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] heart_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'fbs', 'exang'].;['All variables, but the class, should be dealt with as numeric.', 'The variable sex can be seen as ordinal.', 'The variable sex can be seen as ordinal without losing information.', 'Considering the common semantics for fbs and sex variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for sex variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable exang seems to be promising.', 'Feature generation based on the use of variable exang wouldn’t be useful, but the use of sex seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable exang
than to remove all records with missing values.', 'Not knowing the semantics of sex variable, dummification could have been a more adequate codification.'] heart_class_histogram.png;A bar chart showing the distribution of the target variable target.;['Balancing this dataset would be mandatory to improve the results.'] heart_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] heart_histograms_numeric.png;A set of histograms of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['All variables, but the class, should be dealt with as binary.', 'The variable chol can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 'Variable restecg is balanced.', 'It is clear that variable chol shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable age shows some outlier values.', 'Variable chol doesn’t have any outliers.', 'Variable ca presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for chol and age variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for restecg variable, dummification would be the most adequate encoding.', 'The variable thal can be coded as ordinal without losing information.', 'Feature generation based on variable cp seems to be promising.', 'Feature generation based on the use of variable thalach wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of restecg variable, dummification would have been a better codification.', 'It is better to drop the variable trestbps than to remove all records with missing values.', 'Not knowing the semantics of trestbps variable, dummification could have been a more adequate codification.'] Breast_Cancer_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition perimeter_mean <= 90.47 and the second with the condition texture_worst <= 27.89.;['The variable texture_worst discriminates between the target values, as shown in the decision tree.', 'Variable texture_worst is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that Naive Bayes algorithm classifies (A, not B) as M.',
'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that Naive Bayes algorithm classifies (not A, B) as M.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that KNN algorithm classifies (not A, B) as M for any k ≤ 20.'] Breast_Cancer_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] Breast_Cancer_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] Breast_Cancer_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] Breast_Cancer_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] Breast_Cancer_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] Breast_Cancer_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] Breast_Cancer_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 10 and 30%.'] Breast_Cancer_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables symmetry_se or area_se can be discarded without losing information.', 'The variable perimeter_worst can be discarded without risking losing information.', 'Variables texture_worst and radius_worst are redundant, but we can’t say the same for the pair perimeter_worst and texture_se.', 'Variables texture_worst and perimeter_se are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable area_se seems to be relevant for the majority of mining tasks.', 'Variables symmetry_se and perimeter_mean seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable texture_se might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection to select variable texture_mean before variable perimeter_se.'] Breast_Cancer_boxplots.png;A set of boxplots of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['Variable perimeter_se is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable texture_mean shows some outliers, but we can’t be sure of the same for variable perimeter_se.', 'Outliers seem to be a problem in the dataset.', 'Variable texture_se shows a high number of outlier values.', 'Variable texture_mean doesn’t have any outliers.', 'Variable radius_worst presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Breast_Cancer_class_histogram.png;A bar chart showing the distribution of the target variable diagnosis.;['Balancing this dataset would be mandatory to improve the results.'] Breast_Cancer_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] Breast_Cancer_histograms_numeric.png;A set of histograms of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['All variables, but the class, should be dealt with as numeric.', 'The variable perimeter_mean can be seen as ordinal.', 'The variable radius_worst can be seen as ordinal without losing
information.', 'Variable texture_se is balanced.', 'It is clear that variable radius_worst shows some outliers, but we can’t be sure of the same for variable perimeter_worst.', 'Outliers seem to be a problem in the dataset.', 'Variable smoothness_se shows some outlier values.', 'Variable smoothness_se doesn’t have any outliers.', 'Variable texture_worst presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for perimeter_mean and texture_mean variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for area_se variable, dummification would be the most adequate encoding.', 'The variable smoothness_se can be coded as ordinal without losing information.', 'Feature generation based on variable perimeter_worst seems to be promising.', 'Feature generation based on the use of variable area_se wouldn’t be useful, but the use of texture_mean seems to be promising.', 'Given the usual semantics of perimeter_worst variable, dummification would have been a better codification.', 'It is better to drop the variable texture_mean than to remove all records with missing values.', 'Not knowing the semantics of smoothness_se variable, dummification could have been a more adequate codification.'] e-commerce_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Prior_purchases <= 3.5 and the second with the condition Customer_care_calls <= 4.5.;['The variable Prior_purchases discriminates between the target values, as shown in the decision tree.', 'Variable Prior_purchases is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is higher than 60%.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (A,B) as Yes for any k ≤ 1596.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 3657.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 1596.'] e-commerce_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] e-commerce_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient
boosting models with more than 502 estimators.'] e-commerce_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] e-commerce_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 6 neighbors.'] e-commerce_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 12 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 4.'] e-commerce_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] e-commerce_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 25%.'] e-commerce_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Discount_offered or Prior_purchases can be discarded without losing information.', 'The variable Customer_rating can be discarded without risking losing information.', 'Variables Customer_care_calls and Cost_of_the_Product are redundant.', 'Variables Prior_purchases and Cost_of_the_Product are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Discount_offered seems to be relevant for the majority of mining tasks.', 'Variables Weight_in_gms and Prior_purchases seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Cost_of_the_Product might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Discount_offered before variable Cost_of_the_Product.'] e-commerce_boxplots.png;A set of boxplots of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['Variable Discount_offered is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Customer_rating shows some outliers, but we can’t be sure of the same for variable Prior_purchases.', 'Outliers seem to be a problem in the dataset.', 'Variable Discount_offered shows some outlier values.', 'Variable Customer_rating doesn’t have any outliers.', 'Variable Prior_purchases presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] e-commerce_histograms_symbolic.png;A set of bar charts of the variables ['Warehouse_block', 'Mode_of_Shipment', 'Product_importance', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Warehouse_block can be seen as ordinal.', 'The variable Product_importance can be seen as ordinal without losing information.', 'Considering the common semantics for Mode_of_Shipment and Warehouse_block variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Mode_of_Shipment variable, dummification would be the most adequate encoding.', 'The variable Product_importance can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Warehouse_block wouldn’t be useful, but the use of Mode_of_Shipment seems to be promising.', 'Given the usual semantics of Warehouse_block variable, dummification would have been a better codification.', 'It is better to drop
the variable Product_importance than to remove all records with missing values.', 'Not knowing the semantics of Product_importance variable, dummification could have been a more adequate codification.'] e-commerce_class_histogram.png;A bar chart showing the distribution of the target variable ReachedOnTime.;['Balancing this dataset would be mandatory to improve the results.'] e-commerce_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] e-commerce_histograms_numeric.png;A set of histograms of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['All variables, but the class, should be dealt with as date.', 'The variable Weight_in_gms can be seen as ordinal.', 'The variable Weight_in_gms can be seen as ordinal without losing information.', 'Variable Customer_care_calls is balanced.', 'It is clear that variable Discount_offered shows some outliers, but we can’t be sure of the same for variable Customer_care_calls.', 'Outliers seem to be a problem in the dataset.', 'Variable Prior_purchases shows a high number of outlier values.', 'Variable Prior_purchases doesn’t have any outliers.', 'Variable Discount_offered presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Prior_purchases and Customer_care_calls variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Customer_care_calls variable, dummification would be the most adequate encoding.', 'The variable Customer_care_calls can be coded as ordinal without losing information.', 'Feature generation based on variable Discount_offered seems to be promising.', 'Feature generation based on the use of variable Discount_offered wouldn’t be useful, but the use of Customer_care_calls seems to be promising.', 'Given the usual semantics of Discount_offered variable, dummification would have been a better codification.', 'It is better to drop the variable Discount_offered than to remove all records with missing values.', 'Not knowing the semantics of Cost_of_the_Product variable, dummification could have been a more adequate codification.'] maintenance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rotational speed [rpm] <= 1381.5 and the second with the condition Torque [Nm] <= 65.05.;['The variable Rotational speed [rpm] discriminates between the target values, as shown in the decision tree.', 'Variable Rotational speed [rpm] is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives
is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives reported in the same tree is 50.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 5990.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A, not B) as 1 for any k ≤ 46.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 46.'] maintenance_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] maintenance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] maintenance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] maintenance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 3 neighbors.'] maintenance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 5.'] maintenance_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] maintenance_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error
between 10 and 25%.'] maintenance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Process temperature [K] or Torque [Nm] can be discarded without losing information.', 'The variable Rotational speed [rpm] can be discarded without risking losing information.', 'Variables Air temperature [K] and Tool wear [min] seem to be useful for classification tasks.', 'Variables Rotational speed [rpm] and Process temperature [K] are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Tool wear [min] seems to be relevant for the majority of mining tasks.', 'Variables Torque [Nm] and Tool wear [min] seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Torque [Nm] might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Rotational speed [rpm] before variable Torque [Nm].'] maintenance_boxplots.png;A set of boxplots of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['Variable Process temperature [K] is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Tool wear [min] shows a high number of outlier values.', 'Variable Air temperature [K] doesn’t have any outliers.', 'Variable Tool wear [min] presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] maintenance_histograms_symbolic.png;A set of bar charts of the variables ['Type', 'TWF', 'HDF', 'PWF', 'OSF', 'RNF'].;['All variables, but the class, should be dealt with as date.', 'The variable TWF can be seen as ordinal.', 'The variable HDF can be seen as ordinal without losing information.', 'Considering the common semantics for PWF and Type variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Type variable, dummification would be the most adequate encoding.', 'The variable Type can be coded as ordinal without losing information.', 'Feature generation based on variable OSF seems to be promising.', 'Feature generation based on the use of variable RNF wouldn’t be useful, but the use of Type seems to be promising.', 'Given the usual semantics of OSF variable, dummification would have been a better
codification.', 'It is better to drop the variable PWF than to remove all records with missing values.', 'Not knowing the semantics of RNF variable, dummification could have been a more adequate codification.'] maintenance_class_histogram.png;A bar chart showing the distribution of the target variable Machine_failure.;['Balancing this dataset would be mandatory to improve the results.'] maintenance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] maintenance_histograms_numeric.png;A set of histograms of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Rotational speed [rpm] can be seen as ordinal.', 'The variable Air temperature [K] can be seen as ordinal without losing information.', 'Variable Rotational speed [rpm] is balanced.', 'It is clear that variable Air temperature [K] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Torque [Nm] shows some outlier values.', 'Variable Air temperature [K] doesn’t have any outliers.', 'Variable Process temperature [K] presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Torque [Nm] and Air temperature [K] variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Torque [Nm] variable, dummification would be the most adequate encoding.', 'The variable Rotational speed [rpm] can be coded as ordinal without losing information.', 'Feature generation based on variable Rotational speed [rpm] seems to be promising.', 'Feature generation based on the use of variable Air temperature [K] wouldn’t be useful, but the use of Process temperature [K] seems to be promising.', 'Given the usual semantics of Rotational speed [rpm] variable, dummification would have been a better codification.', 'It is better to drop the variable Process temperature [K] than to remove all records with missing values.', 'Not knowing the semantics of Tool wear [min] variable, dummification could have been a more adequate codification.'] Churn_Modelling_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 42.5 and the second with the condition NumOfProducts <= 2.5.;['The variable NumOfProducts discriminates between the target values, as shown in the decision tree.', 'Variable Age is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of True Positives reported in the same tree is 50.', 'The number of True Positives is lower than the number of False Negatives for the
presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 124.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (A, not B) as 0.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (not A, not B) as 1.'] Churn_Modelling_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] Churn_Modelling_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] Churn_Modelling_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] Churn_Modelling_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 5 neighbours is in overfitting.', 'KNN with fewer than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 2 neighbors.'] Churn_Modelling_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 4.'] Churn_Modelling_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] Churn_Modelling_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 25%.']
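The PCA questions above, and the analogous ones for the other datasets, read the reconstruction error of keeping the first k components as 1 minus their cumulative explained variance ratio. A minimal sketch of that arithmetic, assuming scikit-learn; the matrix X below is an illustrative stand-in, not one of the actual datasets in this file:

import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for a records-by-variables numeric table (6 variables, as in Churn_Modelling).
X = np.random.default_rng(0).normal(size=(500, 6))

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)

# 'The first k components explain half the variance'  <=>  cumulative[k-1] >= 0.5
# 'Using the first k components implies an error between 10 and 25%'  <=>  0.10 <= 1 - cumulative[k-1] <= 0.25
for k, explained in enumerate(cumulative, start=1):
    print(f"k={k}: explained={explained:.2f}, implied error={1 - explained:.2f}")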
Churn_Modelling_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables EstimatedSalary or NumOfProducts can be discarded without losing information.', 'The variable EstimatedSalary can be discarded without risking losing information.', 'Variables Age and CreditScore are redundant, but we can’t say the same for the pair Tenure and NumOfProducts.', 'Variables NumOfProducts and CreditScore are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable EstimatedSalary seems to be relevant for the majority of mining tasks.', 'Variables NumOfProducts and CreditScore seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Balance might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Tenure before variable CreditScore.'] Churn_Modelling_boxplots.png;A set of boxplots of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['Variable Tenure is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Tenure shows some outliers, but we can’t be sure of the same for variable NumOfProducts.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows some outlier values.', 'Variable EstimatedSalary doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Churn_Modelling_histograms_symbolic.png;A set of bar charts of the variables ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember'].;['All variables, but the class, should be dealt with as binary.', 'The variable IsActiveMember can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and Geography variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for IsActiveMember variable, dummification would be the most adequate encoding.', 'The variable IsActiveMember can be coded as ordinal without losing information.', 'Feature generation based on variable IsActiveMember seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of Geography seems to be promising.', 'Given the usual semantics of Geography variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than to remove all records with missing values.',
'Not knowing the semantics of Gender variable, dummification could have been a more adequate codification.'] Churn_Modelling_class_histogram.png;A bar chart showing the distribution of the target variable Exited.;['Balancing this dataset would be mandatory to improve the results.'] Churn_Modelling_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Churn_Modelling_histograms_numeric.png;A set of histograms of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Age can be seen as ordinal.', 'The variable EstimatedSalary can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable NumOfProducts shows some outliers, but we can’t be sure of the same for variable Tenure.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows some outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable NumOfProducts presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for NumOfProducts and CreditScore variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Balance variable, dummification would be the most adequate encoding.', 'The variable CreditScore can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable EstimatedSalary wouldn’t be useful, but the use of CreditScore seems to be promising.', 'Given the usual semantics of CreditScore variable, dummification would have been a better codification.', 'It is better to drop the variable EstimatedSalary than to remove all records with missing values.', 'Not knowing the semantics of CreditScore variable, dummification could have been a more adequate codification.'] vehicle_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition MAJORSKEWNESS <= 74.5 and the second with the condition CIRCULARITY <= 49.5.;['The variable MAJORSKEWNESS discriminates between the target values, as shown in the decision tree.', 'Variable MAJORSKEWNESS is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The variable MAJORSKEWNESS seems to be one of the five most relevant features.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that Naive Bayes algorithm
classifies (A,B) as 4.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 3.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (A,B) as 4 for any k ≤ 3.'] vehicle_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] vehicle_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] vehicle_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] vehicle_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 5 neighbors.'] vehicle_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 9 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 6.'] vehicle_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 10 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 5 and 20%.'] vehicle_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables MAJORSKEWNESS or CIRCULARITY can be discarded without losing information.', 'The variable GYRATIONRADIUS can be discarded without risking losing information.', 'Variables CIRCULARITY and COMPACTNESS are redundant, but we can’t say the same for the pair MINORVARIANCE and MAJORVARIANCE.', 'Variables MINORVARIANCE and MINORKURTOSIS are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable MAJORVARIANCE seems to be relevant for the majority of mining tasks.', 'Variables MINORKURTOSIS and MINORSKEWNESS seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable MAJORKURTOSIS might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable MINORKURTOSIS before variable MAJORSKEWNESS.'] vehicle_boxplots.png;A set of boxplots of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['Variable COMPACTNESS is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable MINORSKEWNESS shows some outliers, but we can’t be sure of the same for variable MINORVARIANCE.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORKURTOSIS shows some outlier values.', 'Variable COMPACTNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] vehicle_class_histogram.png;A bar chart showing the distribution of the target variable target.;['Balancing this dataset would be mandatory to improve the results.'] vehicle_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] vehicle_histograms_numeric.png;A set of histograms of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['All variables, but the class, should be dealt with as date.', 'The variable MINORSKEWNESS can be seen as ordinal.', 'The
variable GYRATIONRADIUS can be seen as ordinal without losing information.', 'Variable COMPACTNESS is balanced.', 'It is clear that variable MAJORSKEWNESS shows some outliers, but we can’t be sure of the same for variable MAJORVARIANCE.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORKURTOSIS shows a high number of outlier values.', 'Variable MINORSKEWNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for RADIUS RATIO and COMPACTNESS variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for MINORKURTOSIS variable, dummification would be the most adequate encoding.', 'The variable DISTANCE CIRCULARITY can be coded as ordinal without losing information.', 'Feature generation based on variable GYRATIONRADIUS seems to be promising.', 'Feature generation based on the use of variable MAJORSKEWNESS wouldn’t be useful, but the use of COMPACTNESS seems to be promising.', 'Given the usual semantics of GYRATIONRADIUS variable, dummification would have been a better codification.', 'It is better to drop the variable COMPACTNESS than to remove all records with missing values.', 'Not knowing the semantics of MAJORSKEWNESS variable, dummification could have been a more adequate codification.'] adult_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition hours-per-week <= 41.5 and the second with the condition capital-loss <= 1820.5.;['The variable capital-loss discriminates between the target values, as shown in the decision tree.', 'Variable hours-per-week is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that Naive Bayes algorithm classifies (A, not B) as >50K.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (A, not B) as >50K for any k ≤ 541.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (not A, B) as >50K for any k ≤ 21974.'] adult_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] adult_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of
estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] adult_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] adult_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 2 neighbors.'] adult_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 12 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 4.'] adult_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] adult_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 15 and 30%.'] adult_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables fnlwgt or hours-per-week can be discarded without losing information.', 'The variable hours-per-week can be discarded without risking losing information.', 'Variables capital-loss and age are redundant.', 'Variables age and educational-num are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable capital-gain seems to be relevant for the majority of mining tasks.', 'Variables fnlwgt and age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable fnlwgt might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable capital-gain before variable fnlwgt.'] adult_boxplots.png;A set of boxplots of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['Variable hours-per-week is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable educational-num shows some outliers, but we can’t be sure of the same for variable fnlwgt.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows a high number of outlier values.', 'Variable capital-gain doesn’t have any outliers.', 'Variable capital-gain presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] adult_histograms_symbolic.png;A set of bar charts of the variables ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'gender'].;['All variables, but the class, should be dealt with as date.', 'The variable gender can be seen as ordinal.', 'The variable education can be seen as ordinal without losing information.', 'Considering the common semantics for marital-status and workclass variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for marital-status variable, dummification would be the most adequate encoding.', 'The variable education can be coded as ordinal without losing information.', 'Feature generation based on variable marital-status seems to be promising.', 'Feature generation based on the use of variable occupation wouldn’t be useful, but the use of workclass seems to be promising.', 'Given the usual semantics of education variable, dummification would have been a better codification.', 'It is better to drop the variable relationship than to remove all records with missing values.', 'Not knowing the semantics of occupation variable, dummification could have been a more adequate codification.'] adult_class_histogram.png;A
bar chart showing the distribution of the target variable income.;['Balancing this dataset would be mandatory to improve the results.'] adult_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] adult_histograms_numeric.png;A set of histograms of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['All variables, but the class, should be dealt with as date.', 'The variable fnlwgt can be seen as ordinal.', 'The variable hours-per-week can be seen as ordinal without losing information.', 'Variable fnlwgt is balanced.', 'It is clear that variable educational-num shows some outliers, but we can’t be sure of the same for variable capital-loss.', 'Outliers seem to be a problem in the dataset.', 'Variable educational-num shows some outlier values.', 'Variable capital-loss doesn’t have any outliers.', 'Variable age presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for capital-gain and age variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for fnlwgt variable, dummification would be the most adequate encoding.', 'The variable educational-num can be coded as ordinal without losing information.', 'Feature generation based on variable educational-num seems to be promising.', 'Feature generation based on the use of variable capital-loss wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of capital-gain variable, dummification would have been a better codification.', 'It is better to drop the variable hours-per-week than to remove all records with missing values.', 'Not knowing the semantics of fnlwgt variable, dummification could have been a more adequate codification.'] Covid_Data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition CARDIOVASCULAR <= 50.0 and the second with the condition ASTHMA <= 1.5.;['The variable ASTHMA discriminates between the target values, as shown in the decision tree.', 'Variable ASTHMA is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is lower than 90%.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (not A, B) as Yes for any k ≤ 46.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 7971.',
'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (A, not B) as Yes for any k ≤ 173.'] Covid_Data_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] Covid_Data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] Covid_Data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] Covid_Data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with fewer than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with fewer than 4 neighbors.'] Covid_Data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 9 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with depth greater than 3.'] Covid_Data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] Covid_Data_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 11 principal components would imply an error between 15 and 25%.'] Covid_Data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables HIPERTENSION or RENAL_CHRONIC can be discarded without losing information.', 'The variable MEDICAL_UNIT can be discarded without risking losing information.', 'Variables PREGNANT and TOBACCO are redundant, but we can’t say the same for the pair MEDICAL_UNIT and ASTHMA.', 'Variables COPD and AGE are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ICU seems to be relevant for the majority of mining tasks.', 'Variables HIPERTENSION and TOBACCO seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable COPD might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable ICU before variable PREGNANT.'] Covid_Data_boxplots.png;A set of boxplots of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['Variable OTHER_DISEASE is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ASTHMA shows some outliers, but we can’t be sure of the same for variable COPD.', 'Outliers seem to be a problem in the dataset.', 'Variable AGE shows some outlier values.', 'Variable ASTHMA doesn’t have any outliers.', 'Variable OTHER_DISEASE presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Covid_Data_histograms_symbolic.png;A set of bar charts of the variables ['USMER', 'SEX', 'PATIENT_TYPE'].;['All variables, but the class, should be dealt with as date.', 'The variable PATIENT_TYPE can be seen as ordinal.', 'The variable USMER can be seen as ordinal without losing information.', 'Considering the common semantics for USMER and SEX variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PATIENT_TYPE variable, dummification would be the most adequate encoding.', 'The variable PATIENT_TYPE can be coded as ordinal without losing information.', 'Feature generation based on variable SEX seems to be promising.', 'Feature generation based on the use of variable PATIENT_TYPE wouldn’t be useful, but the use of USMER seems to be promising.', 'Given the usual semantics of PATIENT_TYPE variable, dummification would have been a better codification.', 'It is better to drop the variable PATIENT_TYPE than to remove all records with missing values.', 'Not knowing the semantics of SEX variable, dummification could have been a more
Covid_Data_class_histogram.png;A bar chart showing the distribution of the target variable CLASSIFICATION.;['Balancing this dataset would be mandatory to improve the results.'] Covid_Data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are dates, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Covid_Data_histograms_numeric.png;A set of histograms of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['All variables, except the class, should be dealt with as numeric.', 'The variable TOBACCO can be seen as ordinal.', 'The variable MEDICAL_UNIT can be seen as ordinal without losing information.', 'Variable ICU is balanced.', 'It is clear that variable RENAL_CHRONIC shows some outliers, but we can’t be sure of the same for variable ICU.', 'Outliers seem to be a problem in the dataset.', 'Variable OTHER_DISEASE shows some outlier values.', 'Variable MEDICAL_UNIT doesn’t have any outliers.', 'Variable PREGNANT presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the COPD and MEDICAL_UNIT variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the ASTHMA variable, dummification would be the most adequate encoding.', 'The variable PREGNANT can be coded as ordinal without losing information.', 'Feature generation based on variable ICU seems to be promising.', 'Feature generation based on the use of variable PNEUMONIA wouldn’t be useful, but the use of MEDICAL_UNIT seems to be promising.', 'Given the usual semantics of the HIPERTENSION variable, dummification would have been a better codification.', 'It is better to drop the variable PREGNANT than to remove all records with missing values.', 'Not knowing the semantics of the COPD variable, dummification could have been a more adequate codification.']
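The balancing statements above contrast SMOTE with undersampling. A minimal sketch of the comparison — assuming the imbalanced-learn package is available, and using a synthetic stand-in for the real class column — could look like:

    from collections import Counter
    from sklearn.datasets import make_classification
    from imblearn.over_sampling import SMOTE
    from imblearn.under_sampling import RandomUnderSampler

    # Synthetic imbalanced problem standing in for the Covid_Data target.
    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

    X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)               # synthesises minority records
    X_us, y_us = RandomUnderSampler(random_state=0).fit_resample(X, y)  # discards majority records

    print(Counter(y), Counter(y_sm), Counter(y_us))

SMOTE keeps every original record and adds interpolated minority samples, while undersampling throws information away — the usual reason SMOTE is preferred when the dataset is not huge.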
sky_survey_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition dec <= 22.21 and the second with the condition mjd <= 55090.5.;['The variable mjd discriminates between the target values, as shown in the decision tree.', 'Variable mjd is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 75%.', 'The number of False Negatives is higher than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], the Decision Tree presented classifies (A, not B) as QSO.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that the KNN algorithm classifies (A,B) as GALAXY for any k ≤ 945.'] sky_survey_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for more than 300 iterations.'] sky_survey_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] sky_survey_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] sky_survey_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbours is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] sky_survey_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 5 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 6.'] sky_survey_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough to explain half of the data variance.', 'Using the first 4 principal components would imply an error between 15 and 30%.']
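Statements of the form "the tree classifies (A, not B) as QSO" can be checked by rebuilding a depth-2 tree and reading its rules, since each (A, B) truth assignment maps to one leaf. A minimal sketch — using the Wine data bundled with scikit-learn as a stand-in, since the sky_survey file is not included:

    from sklearn.datasets import load_wine
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_wine()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

    # The printed rules expose one leaf per (A, B) truth assignment,
    # so the class predicted for, say, (A, not B) can be read off directly.
    print(export_text(clf, feature_names=list(data.feature_names)))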
sky_survey_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables redshift or plate can be discarded without losing information.', 'The variable camcol can be discarded without risking losing information.', 'Variables run and ra are redundant, but we can’t say the same for the pair mjd and dec.', 'Variables run and redshift are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable dec seems to be relevant for the majority of mining tasks.', 'Variables camcol and mjd seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable ra might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable camcol before variable mjd.'] sky_survey_boxplots.png;A set of boxplots of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['Variable plate is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable field shows some outliers, but we can’t be sure of the same for variable ra.', 'Outliers seem to be a problem in the dataset.', 'Variable field shows some outlier values.', 'Variable field doesn’t have any outliers.', 'Variable redshift presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] sky_survey_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] sky_survey_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are dates, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.']
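Redundancy claims like the ones in the heatmap entries can be screened mechanically: compute the correlation matrix and list variable pairs whose absolute correlation exceeds a threshold. A minimal sketch, again using the bundled Wine data since the sky_survey file is not included:

    import pandas as pd
    from sklearn.datasets import load_wine

    df = load_wine(as_frame=True).data  # stand-in numeric dataset
    corr = df.corr().abs()

    # Report pairs above 0.85 as candidate redundant variables.
    pairs = [(a, b, round(corr.loc[a, b], 2))
             for i, a in enumerate(corr.columns)
             for b in corr.columns[i + 1:]
             if corr.loc[a, b] > 0.85]
    print(pairs)

Only pairs flagged this way support statements of the form "one of the two variables can be discarded without losing information".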
sky_survey_histograms_numeric.png;A set of histograms of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['All variables, except the class, should be dealt with as dates.', 'The variable run can be seen as ordinal.', 'The variable field can be seen as ordinal without losing information.', 'Variable ra is balanced.', 'It is clear that variable camcol shows some outliers, but we can’t be sure of the same for variable mjd.', 'Outliers seem to be a problem in the dataset.', 'Variable redshift shows a high number of outlier values.', 'Variable field doesn’t have any outliers.', 'Variable plate presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the ra and dec variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the field variable, dummification would be the most adequate encoding.', 'The variable camcol can be coded as ordinal without losing information.', 'Feature generation based on variable redshift seems to be promising.', 'Feature generation based on the use of variable camcol wouldn’t be useful, but the use of ra seems to be promising.', 'Given the usual semantics of the ra variable, dummification would have been a better codification.', 'It is better to drop the variable redshift than to remove all records with missing values.', 'Not knowing the semantics of the plate variable, dummification could have been a more adequate codification.'] Wine_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Total phenols <= 2.36 and the second with the condition Proanthocyanins <= 1.58.;['The variable Proanthocyanins discriminates between the target values, as shown in the decision tree.', 'Variable Proanthocyanins is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that the KNN algorithm classifies (A,B) as 3 for any k ≤ 60.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that the KNN algorithm classifies (A,B) as 1 for any k ≤ 60.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that the KNN algorithm classifies (A, not B) as 2 for any k ≤ 49.'] Wine_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for more than 500 iterations.'] Wine_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.']
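The gradient boosting entries all ask at which number of estimators the train and test curves separate. With scikit-learn such a chart can be reproduced cheaply via staged predictions instead of refitting one model per x-value; a sketch on the bundled Wine data:

    from sklearn.datasets import load_wine
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_wine(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    gb = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # One accuracy value per boosting stage, for the train and test curves.
    train_acc = [accuracy_score(y_tr, p) for p in gb.staged_predict(X_tr)]
    test_acc = [accuracy_score(y_te, p) for p in gb.staged_predict(X_te)]

    # Overfitting shows up where train accuracy keeps rising while test accuracy stalls or drops.
    print(train_acc[::50], test_acc[::50])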
Wine_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] Wine_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] Wine_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 3.'] Wine_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 4 principal components are enough to explain half of the data variance.', 'Using the first 10 principal components would imply an error between 15 and 30%.'] Wine_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Flavanoids or Hue can be discarded without losing information.', 'The variable Color intensity can be discarded without risking losing information.', 'Variables Color intensity and Alcohol are redundant, but we can’t say the same for the pair Flavanoids and Alcalinity of ash.', 'Variables Flavanoids and Total phenols are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ash seems to be relevant for the majority of mining tasks.', 'Variables Alcalinity of ash and Malic acid seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Alcohol might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable OD280-OD315 of diluted wines before variable Total phenols.']
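The KNN overfitting entries reduce to one recipe: train one model per k and compare train and test accuracy. A minimal sketch on the bundled Wine data (the k range 1-23 mirrors the charts):

    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_wine(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for k in range(1, 24, 2):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        # A large gap with high train accuracy signals overfitting (typical at very small k).
        print(k, round(knn.score(X_tr, y_tr), 2), round(knn.score(X_te, y_te), 2))

Note that at k=1 the train accuracy is trivially 1.0, which is why statements about "less than 2 neighbors" being in overfitting are usually the safe ones.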
Wine_boxplots.png;A set of boxplots of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['Variable OD280-OD315 of diluted wines is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable Nonflavanoid phenols shows some outliers, but we can’t be sure of the same for variable Color intensity.', 'Outliers seem to be a problem in the dataset.', 'Variable Hue shows some outlier values.', 'Variable Malic acid doesn’t have any outliers.', 'Variable OD280-OD315 of diluted wines presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Wine_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] Wine_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] Wine_histograms_numeric.png;A set of histograms of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['All variables, except the class, should be dealt with as binary.', 'The variable Total phenols can be seen as ordinal.', 'The variable Alcohol can be seen as ordinal without losing information.', 'Variable Flavanoids is balanced.', 'It is clear that variable Color intensity shows some outliers, but we can’t be sure of the same for variable Total phenols.', 'Outliers seem to be a problem in the dataset.', 'Variable Alcalinity of ash shows some outlier values.', 'Variable Alcohol doesn’t have any outliers.', 'Variable Ash presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the OD280-OD315 of diluted wines and Alcohol variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the OD280-OD315 of diluted wines variable, dummification would be the most adequate encoding.', 'The variable Hue can be coded as ordinal without losing information.', 'Feature generation based on variable Malic acid seems to be promising.', 'Feature generation based on the use of variable Nonflavanoid phenols wouldn’t be useful, but the use of Alcohol seems to be promising.', 'Given the usual semantics of the Total phenols variable, dummification would have been a better codification.', 'It is better to drop the variable Alcalinity of ash than to remove all records with missing values.', 'Not knowing the semantics of the Alcalinity of ash variable, dummification could have been a more adequate codification.']
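Most outlier statements above follow the boxplot convention: a value is flagged when it falls more than 1.5×IQR outside the quartiles. A sketch that counts such values per variable on the bundled Wine data:

    import pandas as pd
    from sklearn.datasets import load_wine

    df = load_wine(as_frame=True).data

    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    outliers = ((df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)).sum()

    print(outliers.sort_values(ascending=False))  # per-variable outlier counts
    print((outliers > 0).mean())                  # fraction of variables with at least one outlier

The last line is exactly the quantity behind claims such as "at least 50% of the variables present outliers".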
water_potability_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Hardness <= 278.29 and the second with the condition Chloramines <= 6.7.;['The variable Hardness discriminates between the target values, as shown in the decision tree.', 'Variable Chloramines is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The specificity for the presented tree is lower than 60%.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that the KNN algorithm classifies (not A, B) as 0 for any k ≤ 1388.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that the KNN algorithm classifies (A, not B) as 1 for any k ≤ 6.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that the Naive Bayes algorithm classifies (not A, not B) as 1.'] water_potability_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for more than 500 iterations.'] water_potability_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] water_potability_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] water_potability_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] water_potability_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 2.'] water_potability_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
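The dt_acc_rec chart plots two metrics for the same trees. A sketch reproducing it — accuracy and recall per max depth on a synthetic binary problem, since the water_potability file itself is not bundled:

    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for depth in range(2, 26):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
        pred = tree.predict(X_te)
        # Watching both metrics per depth is what the *_dt_acc_rec chart encodes.
        print(depth, round(accuracy_score(y_te, pred), 2), round(recall_score(y_te, pred), 2))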
water_potability_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 4 principal components are enough to explain half of the data variance.', 'Using the first 3 principal components would imply an error between 5 and 30%.'] water_potability_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Hardness or Conductivity can be discarded without losing information.', 'The variable Turbidity can be discarded without risking losing information.', 'Variables Trihalomethanes and Hardness are redundant, but we can’t say the same for the pair Chloramines and Sulfate.', 'Variables Hardness and ph are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Turbidity seems to be relevant for the majority of mining tasks.', 'Variables Conductivity and Turbidity seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Hardness might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Turbidity before variable Chloramines.'] water_potability_boxplots.png;A set of boxplots of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['Variable Trihalomethanes is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable Turbidity shows some outliers, but we can’t be sure of the same for variable Sulfate.', 'Outliers seem to be a problem in the dataset.', 'Variable ph shows some outlier values.', 'Variable Turbidity doesn’t have any outliers.', 'Variable Trihalomethanes presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
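Statements about "the first k components explaining half the variance" or "an error between 5 and 30%" come straight from the cumulative explained variance ratio. A sketch on the bundled Wine data (standardised first, as PCA is scale-sensitive):

    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X, _ = load_wine(return_X_y=True)
    pca = PCA().fit(StandardScaler().fit_transform(X))

    cum = np.cumsum(pca.explained_variance_ratio_)
    print(np.round(cum, 2))                    # cumulative variance per component
    print(int(np.searchsorted(cum, 0.5)) + 1)  # components needed for half the variance

Dropping the remaining components implies a reconstruction error of roughly 1 - cum[k-1].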
water_potability_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['ph', 'Sulfate', 'Trihalomethanes'].;['Discarding variable Trihalomethanes would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Sulfate seems to be promising.', 'It is better to drop the variable Trihalomethanes than to remove all records with missing values.'] water_potability_class_histogram.png;A bar chart showing the distribution of the target variable Potability.;['Balancing this dataset would be mandatory to improve the results.'] water_potability_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] water_potability_histograms_numeric.png;A set of histograms of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['All variables, except the class, should be dealt with as dates.', 'The variable Trihalomethanes can be seen as ordinal.', 'The variable Chloramines can be seen as ordinal without losing information.', 'Variable Turbidity is balanced.', 'It is clear that variable Chloramines shows some outliers, but we can’t be sure of the same for variable Trihalomethanes.', 'Outliers seem to be a problem in the dataset.', 'Variable Trihalomethanes shows a high number of outlier values.', 'Variable Turbidity doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the ph and Hardness variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Turbidity variable, dummification would be the most adequate encoding.', 'The variable Conductivity can be coded as ordinal without losing information.', 'Feature generation based on variable Chloramines seems to be promising.', 'Feature generation based on the use of variable Conductivity wouldn’t be useful, but the use of ph seems to be promising.', 'Given the usual semantics of the Chloramines variable, dummification would have been a better codification.', 'It is better to drop the variable Hardness than to remove all records with missing values.', 'Not knowing the semantics of the ph variable, dummification could have been a more adequate codification.']
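The water_potability_mv entry weighs dropping variables against dropping records. The trade-off can be quantified directly; a sketch with a small hypothetical frame mimicking missing values in ph, Sulfate and Trihalomethanes:

    import numpy as np
    import pandas as pd

    # Hypothetical stand-in: 10 records, misses spread over three columns.
    df = pd.DataFrame({
        'ph': [7.0, np.nan, 6.5, 7.2, np.nan, 6.9, 7.1, 6.8, np.nan, 7.3],
        'Sulfate': [330, 310, np.nan, 345, 290, np.nan, 305, 320, 315, np.nan],
        'Trihalomethanes': [66, 71, 58, np.nan, 63, 69, np.nan, 72, 60, 65],
        'Hardness': [204, 129, 224, 214, 181, 188, 248, 203, 118, 227],
    })

    print(len(df.dropna()))          # records kept after dropping incomplete rows
    print(df.dropna(axis=1).shape)   # shape after dropping columns with any misses

    # A common middle ground: impute instead of dropping either.
    print(df.fillna(df.median()).isna().sum().sum())  # 0 misses after median imputation

When the misses are spread across many rows, dropna() can indeed leave only a small fraction of the data, which is what the "less than 30% of the original data" statement gets at.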
abalone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Height <= 0.13 and the second with the condition Diameter <= 0.45.;['The variable Diameter discriminates between the target values, as shown in the decision tree.', 'Variable Diameter is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], the Decision Tree presented classifies (not A, B) as I.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that the KNN algorithm classifies (A, not B) as M for any k ≤ 117.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that the KNN algorithm classifies (not A, not B) as M for any k ≤ 1191.'] abalone_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained for more than 700 iterations.'] abalone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] abalone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] abalone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 11 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] abalone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 5 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with a depth of more than 6.'] abalone_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 6 principal components are enough to explain half of the data variance.', 'Using the first 3 principal components would imply an error between 15 and 30%.'] abalone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables Whole weight or Length can be discarded without losing information.', 'The variable Whole weight can be discarded without risking losing information.', 'Variables Length and Height are redundant, but we can’t say the same for the pair Whole weight and Viscera weight.', 'Variables Diameter and Length are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Whole weight seems to be relevant for the majority of mining tasks.', 'Variables Whole weight and Length seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Rings might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Length before variable Height.'] abalone_boxplots.png;A set of boxplots of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['Variable Rings is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable Shell weight shows some outliers, but we can’t be sure of the same for variable Viscera weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Rings shows a high number of outlier values.', 'Variable Shell weight doesn’t have any outliers.', 'Variable Shell weight presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve KNN performance on this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] abalone_class_histogram.png;A bar chart showing the distribution of the target variable Sex.;['Balancing this dataset would be mandatory to improve the results.'] abalone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are dates, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable to undersampling.'] abalone_histograms_numeric.png;A set of histograms of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['All variables, except the class, should be dealt with as dates.', 'The variable Shucked weight can be seen as ordinal.', 'The variable Shucked weight can be seen as ordinal without losing information.', 'Variable Shell weight is balanced.', 'It is clear that variable Rings shows some outliers, but we can’t be sure of the same for variable Whole weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Viscera weight shows some outlier values.', 'Variable Diameter doesn’t have any outliers.', 'Variable Length presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics of the Diameter and Length variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics of the Length variable, dummification would be the most adequate encoding.', 'The variable Diameter can be coded as ordinal without losing information.', 'Feature generation based on variable Shucked weight seems to be promising.', 'Feature generation based on the use of variable Diameter wouldn’t be useful, but the use of Length seems to be promising.', 'Given the usual semantics of the Viscera weight variable, dummification would have been a better codification.', 'It is better to drop the variable Shucked weight than to remove all records with missing values.', 'Not knowing the semantics of the Shell weight variable, dummification could have been a more adequate codification.']
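Several boxplot entries assert that scaling is mandatory for distance-based methods. The claim is easy to test by fitting the same KNN with and without standardisation; a sketch on the bundled Wine data, whose variables live on very different ranges:

    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)

    raw = KNeighborsClassifier(n_neighbors=5)
    scaled = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

    # Without scaling, large-range variables dominate the Euclidean distance.
    print(cross_val_score(raw, X, y, cv=5).mean())
    print(cross_val_score(scaled, X, y, cv=5).mean())

On Wine the scaled pipeline typically gains on the order of twenty accuracy points, which is the kind of evidence the "scaling is mandatory" statements appeal to.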